Science fiction has long contemplated the possibility that machines could rise up against their human creators. Movies such as 2001: A Space Odyssey, The Terminator, The Matrix, and I, Robot are part of our cultural history. But James Barrat, author of The Intelligence Explosion, suggests that it's not out of line to worry about just where technology is leading us, for real.
Barrat, a documentary filmmaker, has been on the AI beat for some time now. His earlier book, Our Final Invention, published in 2013, carried a clear message: the dangers inherent in artificial intelligence are legitimate concerns.
“Intelligence isn’t unpredictable merely some of the time or in special cases,” he noted. “Computer systems advanced enough to act with human-level intelligence will likely be unpredictable and inscrutable all of the time.”
Humans need to figure out now, at the early stages of AI’s creation, how to coexist with hyperintelligent machines. Otherwise, Barrat worries, we could be in trouble.
In his new book, Barrat lays out five basic points:
1. The rise of generative AI is impressive, but not without problems.
While generative AI tools, such as ChatGPT and DALL-E, have taken the world by storm, those programs also present a downside: fake news, fake photos, and phony videos. As generative AI models get bigger, they also start picking up surprise skills, said Barrat, such as translating languages, something nobody programmed them to do.
2. The push for artificial general intelligence (AGI).
AGI means creating an AI that can perform almost any task a human can. The potential is huge: AGI could make us more productive and innovative. But whoever wins the race to build it would set the agenda, dominating society.
3. From AGI to something way smarter.
If we ever reach AGI, things could escalate quickly. That's where the concept of the "intelligence explosion" comes into play. The idea was first put forward by the mathematician I. J. Good, who in 1965 realized that a machine as smart as a human might be able to make itself even smarter. That could lead to artificial superintelligence, also known as ASI.
4. The dangers of an intelligence explosion.
Arthur C. Clarke, the science fiction writer whose work inspired 2001: A Space Odyssey, told Barrat in an earlier interview that humans steer the future because we are the most intelligent beings on the planet. A more intelligent presence would likely grab the steering wheel, said Clarke.
5. How AI could overpower humanity.
It wouldn’t take long for AI-controlled weapons to escalate conflicts faster than humans could intervene. Advanced AI could also take over essential infrastructure—such as power grids or financial systems.
Governments could use AI for mass surveillance, propaganda, cyberattacks, or worse, giving them unprecedented new tools to control or harm people. We are seeing surveillance systems morph into enhanced weapons systems right now, said Barrat, suggesting that Gaza today looks like Dresden or Hiroshima after the bombing.
Barrat suggests checks and balances to stay in control, calling for strong oversight, regulations, and a commitment to transparency.