Over the past year, as I increasingly focused on the incoming and inexorable volcanic lava flow that is the LLM slice of AI, it was hard not to be captured, like everybody else, by the various dimensions of the discussion and technology. Particularly with a social media and political landscape purpose-engineered to be a source of distraction, it can take a while before you realize you need to step back. You must let the details drift away to begin to see what is solidifying, and what is burning to ash.
One of the more contentious debates has been around the capacity of LLMs to reason, and what the future holds with Artificial General Intelligence (AGI). Many claim that LLMs already reason, and conversely many claim that the LLM model itself is fundamentally mathematically insufficient as a platform to ever achieve reasoning.
I lean a bit towards the latter, in part because of humanity’s intense wiring for apophenia. We would deify a dog’s left nut if we could convince ourselves we would then feel better about the chaotic world around us. That said, there are some topological data quirks, and a host of LLM ecosystem improvements, that increasingly have to be allowed for. LLMs should not be underestimated, and their capabilities are a fast-moving target. I also don’t think the debate much matters in terms of societal impact. Neither reasoning nor AGI is what will cause the anvil to hit the hammer for humanity.
It will be the decisions made by those with the power to broadcast those effects to thousands and tens of thousands of people at a time.
The Evolving Impact
Some companies have already become talking points through how they reacted to the potential of LLMs. IBM, for example, laid off eight thousand HR staff out of a belief in what LLM-based automation would immediately provide. It made IBM short-lived joke fodder, because they rapidly had to re-hire many of those employees. Other companies like Duolingo and Klarna had similar missteps. The result was that the anti-LLM crowd laughed and felt that their position was substantiated.
Unfortunately, an important point was being missed. The corporate world had tipped its hand on strategy, and that strategy is what will actually matter going forward.
The layoffs, and similar recent policy statements by many other companies, made it very clear that the difference between employment and unemployment for hundreds, even thousands, of people depended on a finger just itching to pull the trigger. Entire categories of staff who had invested portions of their lives in those firms were unwanted, and past visions of “company culture” were laid waste as those employees learned that their very participation had long been silently begrudged. The only difference that AGI could make in the future is to measure fate in terms of “how soon” and “how many.” Even if LLMs today are found not to achieve some necessary performance threshold, that will simply motivate companies to spend their way into achieving it. Careers in many large companies will, from now on, at best be measured a fiscal quarter at a time.
The Reckoning
We have barely begun to grapple with what this will mean to ourselves individually and the economy broadly. Career progressions or equity-based compensation packages associated with longevity now warrant a radical risk-adjustment. Personal debt and savings management strategies elevate from “good practices to work towards” to “critical for survival on short notice.” The meaning of skill itself becomes a very complicated issue in a world that rushes to homogenize many career paths into their simplest interpretations as low-variance and thus automatable commodities.
Some may view the situation differently, and construe change as opportunity. They aren’t wrong, or at least not for a while. Unfortunately, some of that opportunity is just the next arbitrage that corporations will harvest with a slightly better AI model. The value proposition of LLMs is that the manual effort they require is less than the manual effort it would take to create the resulting output. That input effort is paid for in staff time, and thus will always be the next obvious target for automation, since eliminating it would remove both the doers and, likely, their supervisors simultaneously.
Any view that large corporations or governments will somehow buffer people from these forces had better come with practical, empirical examples to point to. It took fifteen years for most governments to begin grappling with cryptocurrency, and AI is moving faster than crypto did. When companies lay off thousands at a time, they are not examining your individual skills or contributions to make that call. They are just drawing a line and saying “go!” to everyone on the wrong side of a corporate demographic. Governmental policy mechanisms to restrain that are minimal, and unlikely to change much in the near future.
So far as I can see, at this moment, we’re all on our own… and it’s time to have a much better plan in place.
The Experimentalist : AGI is Irrelevant © 2025 by Reid M. Pinchback is licensed under CC BY-SA 4.0