Social Studies

Doomsday



Someone in the AI safety space recently told me that she was pretty sure that one of two things was going to happen to her in the next few years: She would die, or she would start to live forever.

She said it with a smile — she was being ironic. But that didn’t mean she wasn’t serious about it. The future has never been so uncertain as it is today. Almost anything could happen. I personally don’t think humans will ever achieve immortality, or even extend our life expectancies appreciably, but it’s not an insane idea if you take the possibilities of Artificial General Intelligence — or “superintelligence,” or whatever we’re defining the Holy Grail of AI as this year — seriously. If we ever achieve that benchmark (and I hope we don’t), the experience of being an earthling will change radically, whether for better or for worse (or, most likely, some of each). The transhumanist fantasy of digitally uploaded human consciousnesses and the apocalyptic expectation of human extinction at the hands of intelligent robots may both sound like cringe sci-fi. But perhaps the least likely prospect of a world of machine superintelligence is one in which human existence remains more or less what it has always been, only in a sleeker, more technologically magical environment. Expecting this outcome doesn’t make you a realist. It just means your imagination has failed you.

There are really only two possibilities. Either the trillion-dollar bets the AI labs are making will fail, or we’re on the brink of sudden, unimaginable change. There are those, including me, who will point out that in every age, people have believed they were on the cusp of the end of the world as we know it, a cultural tendency that can be chalked up to our enduring human narcissism — the belief that those of us alive at this particular, arbitrary moment are somehow special. But there’s another kind of narcissism that cuts the other way: the assumption that the universe is a plaything we can bend to our will forever, without it ever pushing back against us. That we will always control what we conjure. Most of us may well live to see superintelligence. The idea that we could survive it is perhaps even more narcissistic than the idea that it will destroy us.

Two of my favorite cultural franchises got reboots this year: the series that began in 2003 with 28 Days Later, and the one that started in 1979 with Alien. In June, 28 Years Later was released in theaters, and this week, the first season of Alien: Earth started streaming on FX. Both of them feel, to me, weirdly plausible as visions of the next stage of human civilization.

I’ve only watched the first two episodes of Alien: Earth (the only two released to date), but the lore of the series is like a first language to me. I’ve probably watched the original a dozen times, and would watch it again tonight if given the chance. From the very first shot of the first episode, it’s clear that the creators of Alien: Earth took their 1979 source material extremely seriously. The world they build on is the one Ridley Scott first gave us decades ago — a world in which massive corporations rule like superpower nation-states, and in which humans’ unquestioned assumption of our mastery over nature steers us toward our peril.

The TV show adds some new elements, however, to the old franchise. One of them is transhumanism. The ingenious entrepreneur running one of the massive supercorporations competing for global sovereignty has just discovered how to upload human consciousness into synthetic bodies. The series takes place two years before the original film, so the creators of Alien: Earth were obliged to set it in the distant year of 2120. But given today’s rapid drive to superintelligence, one of the story’s less plausible details is that it took us that long to figure it out.

In the real world, nearly a century before the era of Alien: Earth, we already know the name of the man who will crack the code of transhumanism, if it is ever cracked at all. It will likely be one of four men: Sam Altman, Elon Musk, Dario Amodei, or Demis Hassabis — or, more precisely, it will be the digital superintelligence that one of them beats the other three to achieving. In a world of superintelligent AI, basically nothing that’s theoretically possible is too unrealistic to expect. A world of superintelligence is a world in which, according to the forecasts of serious, sane, intelligent adults who think about such things, AIs harness the power of stars to colonize galaxies. The very first task the tech oligarchs will assign to the machine gods will be to upload their brains and make them immortal. That is, if the machines don’t destroy us first and send us back to the Stone Age.

I didn’t actually love 28 Years Later, but that might only be because my expectations were too high. I’m somewhat obsessed with zombie movies and with post-apocalyptic stories in general, so I spent the months leading up to its release in a state of anticipation. The world of 28 Years Later is the inverse of Alien: Earth — one in which modern technology and the civilizations it spawned have been wiped off the face of the planet. This is the other prospective future of mankind in the age of superintelligence. As different as the two worlds are, though, the themes of the two franchises are, morally speaking, the same: humankind’s hubris and its Promethean tinkering with nature lead to its undoing.

The world-ending technology of 28 Years Later is an older and almost quainter one than AI: biological weaponry. Covid comparisons aside, the movie isn’t really a forecast of what’s to come; there’s no reason to believe that superintelligence will lead to the rise of the undead, and any AI-induced obliteration of human society will necessarily be accompanied by super-advanced technology in the hands of our newly invented oppressors. The broad strokes, however, fit nicely into a plausible mythology of the future. Ruined by our own creation, those of us who survive are forced back into our primitive animal condition. If we’re lucky, in our struggle to survive, we rediscover the communal folkways we sacrificed long ago to technological progress. It isn’t enough, but at least it’s something. At least in this one, narrow sense, the vision of 28 Years Later is a little less bleak than that of Alien: Earth. We humans, reduced to almost nothing, at least retain our humanity. We are defiant in the face of our extinction. That’s more than one can say for the transhumanist gambit, which is a willing, even ecstatic surrender to what Paul Kingsnorth calls “The Machine.”

Speaking of whom, Kingsnorth’s brilliant, beautiful novel Alexandria is a fusion of these two dystopian worlds. Set a millennium in the future, it imagines a humanity that long ago fused itself with its technological creation, uploading its collective consciousness into a sublime world of ones and zeroes. But a few were left behind, to be reclaimed by the natural world. Their tools, their social structures, even their language have descended back into the earth. They have become primeval once again, but also liberated from the enslavement of the Machine. It’s hard to tell if this is a happy ending or a tragic one. Possibly the difference is just one of perspective.

Kingsnorth’s vision of a post-human future is both darker and lovelier than Peter Thiel’s, which is snake oil. If the big AI labs with their mammoth data centers actually create the world they dream of, the children of today will grow into adulthood in a world unrecognizable to us. In some ways, at least for a time, it may be much better. But if our popular myths have any value beyond mere entertainment, we should prepare ourselves for a destiny quite different from the one we’re being promised.


Social Studies, by Leighton Woodhouse