
By Leighton Woodhouse

The anti-monopolist writer Matt Stoller, author of the book Goliath and of the newsletter Big, is more than just a thinker I respect deeply. He’s also been a friend for over two decades. When I disagree with him on something, it’s with that foundation of trust beneath it.
I disagree with Matt on his complacency around AI-induced job decimation.
“Complacency” may be an unfair word. Matt is anything but a techno-optimist. He’s not saying don’t worry, your job will be fine, or even that AI will create more jobs than it destroys. He’s well aware that AI is going to transform society and the economy profoundly. It’s just that he thinks that the impact we’re seeing on jobs, at least so far, is not, in fact, the inevitable consequence of the technology itself. Rather, it is the outcome of policy choices made by corporate leaders and politicians, who then turn around and deflect the blame onto technology, which is a little like blaming it on the gods.
This has always been Matt’s greatest contribution to the political discourse: he understands the power of laws and won’t allow us to ignore it. Our social structures are not just passive objects tossed to and fro by social and economic forces, he reminds us. They are also the result of conscious decisions we either make or fail to make. It’s an empowering reminder, insofar as it helps us understand that the forces of the economy are not mystical powers outside of our control. It also forces accountability on the powerful, by rejecting their pretense that the self-serving choices they make aren’t really choices at all, but just the laws of the natural economy at work, which neither they nor anyone else has the power or the right to stop.
Matt’s insight is no less relevant to the current AI threat than it is to every other market dynamic. Many of the harms we’re supposedly seeing from AI, he insists, are human-made.
Take accounting, for instance. Accounting is a profession ripe for displacement by AI. But as Matt points out, accounting jobs started disappearing long before generative AI appeared on the scene. People are avoiding careers in accounting because it’s become a bad job, with low pay, low prestige, and lots of stress. This is the result of political choices we’ve made, such as allowing the corporate actors who are audited by accountants to amass so much power that accounting has become a charade. Blaming AI is just ex post facto deflection.
Or take creative professions, like writing novels. Authors aren’t so much being displaced by AI as seeing their work stolen from them wholesale, in plain sight, by the AI labs. If criminals rob the convenience store you manage, routinely and with impunity, that’s not a problem of supply and demand; it’s a failure of law enforcement. Intellectual property is no different. We shouldn’t allow thieves to get away with pretending that their crimes were just the inevitable outcome of the supernatural forces of economics and technological progress. A few broken eggs to make an omelette.
I agree with Matt on all those points, and on basically everything else he notes in his piece. But it barely scratches the surface of what is to come.
AI is going to destroy jobs not because anyone necessarily wants them destroyed, but because it will render those jobs superfluous. That’s the very purpose of its creation. Its existence alone will compel these harms, whether or not corporate executives and politicians are on board. This is not just a political problem, and it will soon no more be a matter of our choosing than natural disasters or the spread of viruses are.
In their report “The Intelligence Curse,” which I highly recommend, Luke Drago and Rudolf Laine, two AI experts, describe what we are already beginning to see as “pyramid replacement” — a progressive process of obsolescence that starts at the bottom of a firm and gradually works its way up to the tippity-top, until nothing of economic value the firm produces requires human labor.
It’s important to bear in mind, the authors point out, that AI is not like other productivity-enhancing technologies, such as the laptop computer. It’s easy to mistake it for one at this moment in its development, because it’s still so crude. We can still pretend that AI will merely free us from the tedious tasks that entry-level workers are forced to do because they’re not worth the valuable time of anyone more important to the firm. This frees up those workers to take on more meaningful assignments that produce more value and better advance their professional development. Not only will they enjoy their jobs more; they can then be promoted more quickly to better-paying positions.
But AI isn’t a widget. It’s obviously not going to stop there. “Remember,” Drago and Laine write, “that today is the worst these systems will ever be.” AI labs are currently creating “agentic” systems — AIs that don’t just complete discrete tasks, but take on entire projects and see them through to completion, mimicking human goal-setting, decision-making, and discretion-exercising. These will be autonomous actors that require less supervision than your typical human employee does. They will be cheaper, faster, and better at their jobs than entry-level workers. That future is already here, though still in its nascency. Drive through San Francisco and every other billboard on the freeway is selling some AI agent or other.
As AI agents proliferate, employers will stop hiring entry-level workers. The entry-level workers they already have will either lose their jobs or be promoted into junior positions as their former jobs are filled by AI. One can frame this as a “political” choice, but it’s really not a choice at all. Firms will simply be forced by competition to adopt the new practice.
As agentic AI evolves, the water level will rise. Soon it will replace those same junior-level positions the former entry-level workers were promoted into. In a little more time, agentic AI will become advanced enough that the humans overseeing the junior-level AI agents — moving, as they do, at human speed — will become an impediment to the AIs’ productivity. Middle management will see its positions replaced, then senior management. Finally, the top executives will discover that even their jobs are disposable. After all, people aren’t perfect. “CEOs are forgetful, and they don’t have total insight into everything their company is doing — but their AI systems do,” the authors of The Intelligence Curse write. With the replacement of the C-suite, the entire firm will become human-free. Multiply that by every company on earth.
It may be the case, as Matt describes, that at the present time, corporations are making socially destructive but profit-maximizing decisions and then pointing the finger at AI. But as agentic AI progresses, these explanations will no longer be mere excuses. Certainly the C-suite executives don’t wish to see their own positions replaced. As their boards force them out in the face of AI-induced hyper-competition, it will become clear that these processes are no longer being driven by human interest or initiative, even that of the most powerful among us. They will be driven by the requirements of the technology itself, which by that time will be wholly alienated from us and will, indeed, constitute an actual mystical force, one beyond both our control and our comprehension.
Today, there is still human agency at work. We still have a say in how the future unfolds, and there is a single political choice that governs all of this: the choice to continue researching, training, and advancing Artificial Intelligence. We could make laws to stop doing so. We could put bans or moratoria in place. We could enact pauses. We could proceed but within the parameters of international treaties. We are doing none of these things, and that is, indeed, a problem of our own creation.
But the longer we wait, the more irrelevant we will become to the course of events. By various estimates, we are a year, a few years, or perhaps a decade away from Artificial General Intelligence — the point at which AI is as smart as the smartest humans. Certainly most of us will see it in our lifetimes.
AGIs will not require food and sleep as we mere mortals do. They won’t need breaks or time for family and hobbies. They will not have to spend years in school to learn their crafts, and they won’t have to train other AGIs to pass along their expertise. They will simply self-replicate, with all of their learning intact. It’s hard to conceive of a single cognitive job that could be preserved as a uniquely human domain in a world with AGI. As robotics advance, we may soon say the same about blue-collar work.
Nor is AGI the end of it. Human intelligence isn’t some theoretical threshold beyond which Artificial Intelligence cannot advance. Far from it. As AGIs multiply into legions, limited in number only by the natural resources that power their data centers, AI research will accelerate to its exponential limits. We may then see our first glimpse of ASI — Artificial Super Intelligence — the intelligence of gods. When you start conceiving of the dystopian possibilities of a world of ASI, it becomes hard to take yourself seriously. You start to sound to yourself like a paranoid schizophrenic whose delusions were shaped by watching The Matrix. But the expectation of those in the field is that ASI is around the corner. Sam Altman calls OpenAI “a superintelligence research company” and expects ASI to arrive in the 2030s. (He thinks we’ll be just fine and the world will be a land of milk and honey.) Mark Zuckerberg is poaching OpenAI’s researchers to try to get there even sooner. This is happening. And when it does, mass joblessness may be the least of our concerns. But at that point we’ll have no more say in the matter. The choices we’re failing to make now will have doomed us.