Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Views on when AGI comes and on strategy to reduce existential risk, published by TsviBT on July 8, 2023 on LessWrong.
Summary: AGI isn't super likely to come super soon. People should be working on stuff that saves humanity in worlds where AGI comes in 20 or 50 years, in addition to stuff that saves humanity in worlds where AGI comes in the next 10 years.
Thanks to Alexander Gietelink Oldenziel, Abram Demski, Daniel Kokotajlo, Cleo Nardo, Alex Zhu, and Sam Eisenstat for related conversations.
My views on when AGI comes
AGI
By "AGI" I mean the thing that has very large effects on the world (e.g., it kills everyone) via the same sort of route that humanity has large effects on the world. The route is where you figure out how to figure stuff out, and you figure a lot of stuff out using your figure-outers, and then the stuff you figured out says how to make powerful artifacts that move many atoms into very specific arrangements.
This isn't the only thing to worry about. There could be transformative AI that isn't AGI in this sense. E.g. a fairly-narrow AI that just searches configurations of atoms and finds ways to do atomically precise manufacturing would also be an existential threat and a possibility for an existential win.
Conceptual capabilities progress
The "conceptual AGI" view:
The first way humanity makes AGI is by combining some set of significant ideas about intelligence. Significant ideas are things like (the ideas of) gradient descent, recombination, probability distributions, universal computation, search, world-optimization. Significant ideas are to a significant extent bottlenecked on great natural philosophers doing great natural philosophy about intelligence, with sequential bottlenecks between many insights.
The conceptual AGI view doesn't claim that humanity doesn't already have enough ideas to make AGI. I claim that, though not super strongly.
Timelines
Giving probabilities here doesn't feel great. For one thing, it seems to contribute to information cascades and to shallow coalition-forming. For another, it hides the useful models. For yet another thing: a probability bundles together a bunch of stuff I have models about with a bunch of stuff I don't have models about. For example, how many people will be doing original AGI-relevant research in 15 years? I have no idea, and it seems like a largely social question. The answer to that question does affect when AGI comes, though, so a probability about when AGI comes would have to depend on that answer.
But ok. Here are some butt-numbers:
3%-10% probability of AGI in the next 10-15ish years. This would be lower, but I'm putting a bit of model uncertainty here.
40%-45% probability of AGI in the subsequent 45ish years. This is denser than the above because, eyeballing the current state of the art, it seems like we currently lack some ideas we'd need, but I don't know how many insights would be needed, so the remainder could be only a couple decades around the corner. It also seems like people are distracted now.
Median 2075ish. IDK. This would be further out if an AI winter seemed more likely, but LLMs seem like they should already be able to make a lot of money.
A long tail. It's long because of stuff like civilizational collapse, and because AGI might be really really hard to make. There's also a sliver of a possibility of coordinating for a long time to not make AGI.
If I were trying to make a model with parts, I might try starting with a mixture of Erlang distributions of different shapes, and then stretching that according to some distribution about the number of people doing original AI research over time.
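To make the shape of that model concrete, here is a minimal Monte Carlo sketch. Everything in it is a placeholder assumption rather than something from the post: the candidate counts of remaining insights, their weights, the effort-adjusted years per insight, and the 3%-per-year research-effort growth curve are all made up for illustration, and would need tuning to reproduce the probabilities above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder knobs (not numbers from the post): candidate counts of remaining
# insights (Erlang shapes), their weights, and the mean effort-adjusted years
# per insight at today's level of research effort.
N_INSIGHTS = [2, 4, 8, 16]
SHAPE_WEIGHTS = [0.1, 0.25, 0.4, 0.25]
YEARS_PER_INSIGHT = 10.0

def effort(t):
    """Relative amount of original AGI-relevant research t calendar years
    from now (assumed curve; 1.0 = today's level, growing 3% per year)."""
    return 1.03 ** t

def sample_agi_wait(n_samples=100_000, horizon=300):
    # Cumulative effort-adjusted research-years as a function of calendar time.
    dt = 0.25
    ts = np.arange(0, horizon, dt)
    cum_effort = np.cumsum(effort(ts)) * dt

    # 1) Draw how many insights are still needed (the Erlang shape).
    shapes = rng.choice(N_INSIGHTS, size=n_samples, p=SHAPE_WEIGHTS)
    # 2) Draw the effort-adjusted waiting time: Erlang(k, scale) = Gamma(k, scale).
    eff_wait = rng.gamma(shape=shapes, scale=YEARS_PER_INSIGHT)
    # 3) "Stretch" into calendar time by inverting the cumulative-effort curve.
    return np.interp(eff_wait, cum_effort, ts, right=np.inf)

waits = sample_agi_wait()
print("P(AGI within 15 years):", np.mean(waits <= 15))
print("P(AGI within 60 years):", np.mean(waits <= 60))
print("Median calendar year:", 2023 + np.median(waits[np.isfinite(waits)]))
```

The Erlang shape encodes the "sequential bottlenecks between many insights", and step 3 is the "stretching" by research effort over time; a fuller version would also put a distribution over the effort curve itself rather than fixing one assumed growth rate.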
Again, this is all butt-numbers. I have almost no idea about how much more understanding is needed to make AGI, except that it doesn't seem like we're there yet.
Responses to some arguments for...