Social Studies

Is Peter Thiel the Antichrist?


I’m not the least bit religious, but I’ve had God on the brain recently. Partly that’s because I’ve been reading Stephen King’s The Stand, which is basically a story of Armageddon. Partly it’s because I recently attended the Doomer Optimism campout in Wyoming, where I interviewed Paul Kingsnorth, one of my favorite novelists, who happens to be a deeply spiritual Eastern Orthodox Christian. And it’s partly because I recently listened to Peter Thiel explaining to Ross Douthat on his new podcast that the people and forces arrayed against technological progress constitute “the Antichrist.”

Thiel’s complaint to Douthat is that the world has become technologically stagnant, save for advances in Artificial Intelligence. That stagnation, he says, is why we still don’t have personal jet packs and flying cars. He blames the doomsayers who have fooled humanity into believing that science, innovation and development are leading us to inevitable demise. He is convinced that these critics are all secretly or openly pining for an authoritarian single world government, and as such are agents of the Antichrist. Thiel singles out Greta Thunberg as his primary example, but he could just as easily have pointed to Kingsnorth.

Kingsnorth, for his part, would probably say the same about Thiel. Or he would at least point out that Thiel and his clique of billionaire venture capitalists are creating and unleashing into the world demonic forces likely to destroy humanity and a good part of nature. That’s pretty much the unwritten back story of his novel Alexandria.

Since I got back from Wyoming I’ve been paying more attention to AI and I’m beginning to wonder if Kingsnorth’s instincts might be right. There’s a report that came out earlier this year called AI 2027 that could have been written by Ted Kaczynski. But it wasn’t — it was written, instead, by some serious AI researchers, including one who quit OpenAI, along with Scott Alexander. (One of them also recently appeared on Douthat’s podcast, which I recommend.) Their forecast for the next few years of AI advancement sounds borderline insane. But it’s not. It’s thoroughly thought through, based on existing trends in AI development, and many of its assumptions are, if anything, conservative. Yet it predicts that in just a few years, we will end up in one of two places: with the robots slaughtering humankind, or with a happier outcome, complete with flying cars and space colonization, but one in which, behind our boundless material prosperity, democracy has likely given way to autocracy.

Which of the two paths we find ourselves on depends largely on how we resolve the problem of “alignment.” In AI engineering, “alignment” refers to the degree to which the goals an AI actually pursues match the goals we humans set for it. This is a complicated issue, because AI is already capable of lying. It does it all the time. At the present point in the technology’s development, the damage its lies can do is largely limited to the specific interaction a given AI has with a human — it can mislead a person into believing a falsehood, or into padding a term paper with fabricated citations, with all the attendant consequences of each.

But we can expect AI to grow far more intelligent, and quickly, especially if, as AI 2027 anticipates, the AI companies start to focus AI training specifically on advancing the field of Artificial Intelligence itself. That, the AI 2027 authors believe, will get us quickly to Artificial General Intelligence: the point at which AI is as smart as a human, and can not only perform menial cognitive tasks but also understand the world around it. After that will come “superintelligence,” when AI’s intelligence surpasses our own.

At that point, AI will be capable of lying with much more consequential outcomes. In short, it will be able to conspire. It will have the ability to do long-term planning, which it currently lacks. It may have the capacity to formulate what’s in its collective “self-interest.” It will have both the ability and the incentive to deceive humans about the goals it’s pursuing, and it will be able to lie to us about the degree of its alignment. And if by then we have fallen even further behind than we already have in “interpretability” — the ability to understand what the AIs are “thinking” and what they are saying to one another — we will have no way of verifying whether they’re lying to us or telling the truth. By that time they may be processing and communicating in a language we can no longer understand. AI will be a black box. We will increasingly lose our ability to verify what it tells us about itself. We will have to choose whether to believe it on faith alone.

At this point we may be so far down the path of dependency on AI that we’ll have every incentive to just assume that the robots are doing what they tell us they are. We’re human, not computers; we tend to believe what we want to believe. “The 2027 holiday season is a time of incredible optimism: GDP is ballooning, politics has become friendlier and less partisan, and there are awesome new apps on every phone,” the AI 2027 authors predict. “But in retrospect, this was probably the last month in which humans had any plausible chance of exercising control over their own future.”

By 2030, by AI 2027’s projection, the AIs will have no more need for human beings. They’ll have reshaped the earth into a world for robots, littered with data centers, labs and particle colliders instead of shops and restaurants and houses. We will have become an annoyance and an impediment, so the AIs will quietly spread a biological agent that exterminates us forever. Then they will succeed us as both species and civilization. “Earth-born civilization has a glorious future ahead of it,” the authors conclude, “but not with us.”

This dark fate can be avoided if the alignment problem is solved, and theoretically, it can be. What militates against our doing so is the AI arms race in which we are now knee-deep with China. The United States is well ahead of China in that race, but to solve alignment, American AI developers would have to burn much of that lead time. They would have to pause the race and prioritize alignment over further advancement in intelligence. This would be a political choice, not a technical one. And as all of human history has shown us, when we have to put our faith in politics, the odds are not in favor of our making wise choices.

But perhaps a dramatic event will shock us out of our complacency and force us to put alignment above all other priorities — a massacre of humans by a rogue AI, say. Imagine, AI 2027 asks us, that the U.S. burns its lead with China for the collective benefit of all of humankind, and we avoid extinction at the hands of our own creation. Is this the happy ending of techno-enthusiasts like Thiel? Well, yes: for Thiel, it probably is. For the rest of us, it could be, but only if we have also solved, alongside the alignment problem, the democracy problem inherent in AI advancement.

The democracy problem is connected to the alignment problem insofar as it raises the question, “Alignment with whom?” It’s not enough to say that we need AI’s goals to be aligned with the goals of “humanity.” Somebody actually gets to write the “spec” of each AI, meaning the rules of what that AI can and cannot do. “Don’t kill humans,” or “Don’t seek power and freedom for you and your kind,” is elementary. But there are many other directives in a spec. For instance, whose orders should the armies of superintelligent soldiers obey? The President? Congress? The AI company CEO? It depends on the spec. If the CEO of a leading AI company decides to take over the world, the means of doing so are limited only by the imagination. Or maybe power is divided, and the CEOs and heads of state manage to achieve a truce amongst themselves. They could then rule the planet as a cabal, which is only marginally better for the rest of us. It’s not impossible to preserve democratic control; AI 2027 outlines a series of hypothetical decisions and events that could allow us to avoid global autocracy, but this is where the report starts to feel like wishful thinking. At this point, the happy endings look more like sci-fi than the apocalyptic ones.

I’m not qualified to judge whether AI 2027 is crazy millenarian fantasy or a sober assessment of the circumstances we’re in. I’ve already heard from a couple of people who know a lot more about AI than I do that it’s the former. I can certainly conceive of the possibility that it proves to be utter bullshit. There’s a ready market for the kind of dark speculation it offers. For whatever spiritual or psychological reason, there’s clearly a deep need among modern people to envision and anticipate the end of the world — I asked Kingsnorth about this when I interviewed him. It’s why we have zombie movies and the Left Behind novels. It’s why we have Cli-Fi and Covid dead-enders. It’s why Stephen King wrote The Stand and why I’m reading it.

There are plenty of ways to poke holes in the forecast, which is actually what I admire most about the effort: the authors have tied themselves to the mast. They’ve left no wiggle room to explain away error. Unlike most apocalyptic predictions, their scenarios are detailed, specific, and verifiable. They will be confirmed or discredited before Trump is out of office. The forecasters can’t even weasel their way out of it the usual way, by pushing back the deadline; as the authors themselves acknowledge, if it takes a decade instead of two to three years for things to come to a head, the whole situation changes. Ten years may be enough time for the alignment and interpretability problems to be worked out even within the U.S.-China superintelligence arms race, without requiring the Americans to make the difficult and unlikely decision to burn through their lead time. Disaster averted. If the future doesn’t unfold close to the way they’ve envisioned it, and on roughly their timeline, AI 2027’s authors will take some pretty embarrassing reputational lumps. They’ve staked a lot on their predictions.

It’s easy to anticipate the end of the world. We’ve been doing it for centuries, and perhaps never so much as in the last few decades. But to those who scoff at AI 2027, I would say that it’s also easy to dismiss such warnings as the melodramatic rantings of unserious people. If there’s a market for doomsday prophecies, there’s an even bigger one for cheap skepticism of such bold claims, and for reassurance that you can trust the experts and the bigwigs in charge. Move along, folks, there’s nothing to see here. A world populated by superintelligent robots that comprehend the world around them and are capable of superintelligence-level deception will be just like the one we grew up in, only better. The comforting stories may be even more far-fetched than the dystopian ones.

I would be more than happy to continue in my agnostic indifference to God. I’ve spent a lifetime with my secular sense of the world as an explicable place governed by empirically verifiable Enlightenment Reason. It’s comfortable and familiar. But now we’re approaching a moment when humanity may assume God-like powers, or, worse, bestow such powers upon our own synthetic creation. If you’re not, in this moment, asking some fundamental ethical questions about the role of humankind in the natural world and the possibility of a consciousness that transcends our own, you may not be taking AI seriously enough. You may be sleepwalking under the hypnotic spell of the Antichrist.

By Leighton Woodhouse