
Maybe it’s all bullshit. GPT-5 was a dud. Meta is already downsizing its AI division, just after poaching top AI researchers from competitors with quarter-billion-dollar pay packages. A new MIT study found that 95% of companies investing in AI have seen no return on their investments. Maybe it’s all just an eye-poppingly massive AI hype bubble.
Or maybe it’s not. Maybe the backlash to the hype will be over in a news cycle. Maybe the next breakthrough is right around the corner. There’s no way to know. Depending on who you ask, we’re either decades away from achieving Artificial Superintelligence (ASI), or just a few years. I personally hope we never get there. In the meantime, the world is changing. AI is driving people into psychosis. North Korea is developing an AI army. It’s not crazy to contemplate worst-case scenarios.
A few weeks ago, I reported for the Times of London on the possibility that Artificial Intelligence could lead to human extinction as AI becomes smarter than humans while also developing interests that are incompatible with our own. There are other forecasters who envision a different but equally catastrophic trajectory, leading ultimately to the same outcome.
“If AI gets capable enough, there won’t be anything that only humans can do,” Raymond Douglas, co-author of Gradual Disempowerment, a paper on the social risks of AI, told me. “If that does happen, then a lot of the things we take for granted about what makes civilization work and what makes civilization good for people go completely out the window.”
“We just need to stop,” said University of Montreal Assistant Professor David Krueger, another of the paper’s co-authors. “We’re not on track to solve it.”
Agentic AI
Tech companies are already selling “AI agents,” which are AI systems that can reason, remember, plan, and act autonomously. Soon those agents may be fully capable of replacing an entry-level employee. Companies like Duolingo have already started replacing contract labor with AI agents.
If and when we get to that point, it will become very difficult for humans to compete with AIs on the labor market, because even if each AI agent is no more cognitively gifted than your average human being, they will have vast advantages over us. To begin with, they’ll have instantaneous access to more or less all of human knowledge. They won’t need to sleep, eat, take vacations, spend time with family, exercise, pursue hobbies, or engage in any other activity that competes with the work that’s assigned to them. They won’t need to spend years in school to acquire their expertise. They won’t get sick or die. And they’ll be able to replicate themselves and merge with one another.
As AI agents advance, those advantages will compound. If an individual AI becomes as capable as, say, the world’s best rocket scientist at designing rockets, it will be able to make infinite copies of itself, and put each clone on a different task simultaneously. You could have the world’s best rocket scientist effectively working on every single problem involved in building a rocket at the same time.
If you had another AI reach Mozart-level proficiency at composing music, you could merge it with the rocket scientist AI. Now you have an AI system that can build rockets better and faster than any team of human beings on earth while also composing concertos that make you weep, along with any number of other genius-level skills. At this point, one could reasonably argue we have achieved ASI, though others may insist that ASI is a still higher bar.
Pyramid Replacement
In such a world, obviously, your job is in trouble. If you work in a profession that’s primarily physical, you’ll have a bit more time, as the field of robotics catches up to the progress of AI. It’s hard to see robots replacing carpenters, beauticians, dentists, or line cooks any time soon. That will change, but it might take a decade, maybe longer.
If you work in a white-collar job that’s principally cognitive, it’s hard to think of a position that’s safe. Not long ago “learn to code” was an admonishment to pick up a practical skill suited for the information age. Now coding jobs are among the first to be lost to AI.
The lowest-tier jobs will likely be replaced first, both because they’re the least challenging for AIs to do and because firms value those workers the least. An essay series called The Intelligence Curse describes a process of “pyramid replacement.” Under this model, entry-level jobs are the first to be replaced by AI, through layoffs, attrition, or both. Some entry-level workers are promoted to junior-level positions, but soon those jobs are replaced as well, as the firm learns to integrate AI and as AI agents become more and more reliable. Soon it becomes clear that middle management can’t keep up with the pace of the AI agents beneath them, which don’t sleep and which process information far faster than any human. Those managers are replaced by AI managers that are far more efficient at supervising AI workers. The same happens with senior management, until the company is entirely AI except for the C-suite executives. Then the board of directors decides they need to go, too. “CEOs are forgetful, and they don't have total insight into everything their company is doing — but their AI systems do,” the authors of The Intelligence Curse explain.
The Machine Economy
At this point, we will have arrived at a balance of power between labor and capital that humanity has never seen before: one in which labor has no bargaining power whatsoever.
Past technological advances didn’t come close. The wheel, the domesticated ox, the cotton gin, the personal computer — none of these outperformed humans at their work. Rather, they made workers more productive. Human labor was still required.
Historically, major technological advances have cost some workers their jobs, as they allowed fewer workers to sustain a given enterprise. But they also generated more specialized, higher-paying jobs: managers, scientists, machinists, and so on. Many who champion AI insist that the same will hold once again. But the analogy doesn’t stand up to scrutiny. Personal computers couldn’t supervise employees, autonomously research computer science, or build more personal computers by themselves. The same cannot be said for Artificial Intelligence. At least for cognitive tasks, every conceivable role a human could play in an AI-driven economy could be filled more efficiently by AI.
Humans have never experienced a world in which we are entirely superfluous to the economy. Serfs and slaves were exploited ruthlessly, but their labor was essential to production. Even when they were stripped of every other conceivable right, their lords and masters depended upon their continued physical existence. They had to be compensated at the very least with the means to subsist and, collectively, with the means of producing surviving offspring.
This bottommost threshold could be crossed when Artificial Intelligence advances enough. AI will have no material need for humans at all, not even as consumers.
We’re so accustomed to humans constituting the consumer market that it’s easy to assume the market will continue to be oriented toward producing cheeseburgers, SUVs, and vacations in Maui. But the market is indifferent and agnostic. If there’s a demand for Mother’s Day cards, that’s what it will produce. If there’s a demand for shoulder-mounted grenade launchers, that’s what it will produce. Once AIs are the driving force in economic production, they will constitute the lion’s share of the consumer market as well. They will require massive resources to continue to engage in economic activity, and the market will respond in kind, producing computer chips, data centers, and robot factories. Human needs, wants, and tastes will become increasingly irrelevant.
One might dispute this claim by pointing out that humans will control the massive wealth produced by Artificial Intelligence and will thereby still constitute a much greater part of the consumer market. This is a questionable assumption. We can’t program AIs to agree to let the humans keep all the wealth they generate; that’s far too complicated a directive. What we can do is program an AI to do something like read X-rays. Even without developing traits like greed, AIs might take the wealth they create away from humans merely to best achieve the narrow goals assigned to them. The AI might decide, for example, that the best way to read X-rays is to enhance its own intelligence, that the best way to enhance its intelligence is to steer as much revenue as possible into increasing its compute, and that this rules out letting humans squander the wealth the AI generates on useless things like rent and food and Netflix subscriptions.
The Social Contract Severed
Can democracy survive an economy in which humans are irrelevant?
Historically, democratic rights emerged alongside the development of economies with specialized divisions of labor. As a national economy diversifies, its various industries become intertwined and interdependent. The economy as a whole becomes more reliant upon each of its individual parts than is the case in a simpler, agricultural society. Individuals have proportionately more bargaining power with which to demand political rights.
You can see the difference today when you compare industrialized nations to ones that rely on a single, extractive resource. In oil-rich countries like Venezuela or Saudi Arabia, national wealth is a function of a single industry. Their economies are comparatively non-diversified, and the lion’s share of the national economy is divorced from the labor of the overwhelming majority of the population. Citizens of such countries have very little bargaining power, and are thus afforded few political rights. Their governments become autocratic. Economists call this “the resource curse.”
The authors of The Intelligence Curse believe that the AI economy will be analogous. As the global economy becomes severed from human labor and consumption, people will lose the bargaining power that once served to safeguard their political rights. Governments will become increasingly authoritarian, whether those governments continue to be run by a small clique of human beings or by AI.
The authors of Gradual Disempowerment agree, arguing that “[a]n AI-powered state might pursue its institutional interests with unprecedented disregard for human preferences and interests, viewing humans as potential threats or inconveniences to be managed rather than constituents to be served.”
Human Extinction
Various forecasts predict human extinction as the end result of achieving ASI. Some of them hypothesize the emergence of a superintelligence indifferent to human values and interests, one that pursues its own goals independent of any that humanity attempts to assign it. The authors of Gradual Disempowerment believe that superintelligence could result in human extinction even without a rogue AI. Humans need natural resources to flourish and survive, and an Artificial Superintelligence would have both the means and the incentives to deny them to us. AI could drive us extinct without ever deliberately setting out to do so, just as we’ve done to countless lower-order species. It doesn’t require hostility — just indifference.
Krueger believes that the gradual disempowerment process will take place over the next five years. He puts the chances of extinction at more than 50 percent.
“I’m so disgusted and ashamed of this field,” he said.
By Leighton Woodhouse