Transformative AGI by 2043 is <1% likely
by Ted Sanders, June 6, 2023 (LessWrong)
(Crossposted to the EA forum)
Abstract
The linked paper is our submission to the Open Philanthropy AI Worldviews Contest. In it, we estimate the likelihood of transformative artificial general intelligence (AGI) by 2043 and find it to be <1%.
Specifically, we argue:
The bar is high: AGI as defined by the contest (something like AI that can perform nearly all valuable tasks at human cost or less), which we will call transformative AGI, is a much higher bar than merely massive progress in AI, or even the unambiguous attainment of expensive superhuman AGI or cheap but uneven AGI.
Many steps are needed: The probability of transformative AGI by 2043 can be decomposed as the joint probability of a number of necessary steps, which we group into categories of software, hardware, and sociopolitical factors.
No step is guaranteed: For each step, we estimate a probability of success by 2043, conditional on prior steps being achieved. Many steps are quite constrained by the short timeline, and our estimates range from 16% to 95%.
Therefore, the odds are low: Multiplying the cascading conditional probabilities together, we estimate that transformative AGI by 2043 is 0.4% likely. Reaching >10% seems to require probabilities that feel unreasonably high, and even 3% seems unlikely.
Thoughtfully applying the cascading conditional probability approach to this question yields lower probability values than is often supposed. This framework helps enumerate the many future scenarios where humanity makes partial but incomplete progress toward transformative AGI.
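Spelled out (our notation, not the paper's), the decomposition is the chain rule of probability: writing the necessary steps as s1, …, s10,
P(transformative AGI by 2043) = P(s1) × P(s2 | s1) × … × P(s10 | s1, …, s9).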
Executive summary
For AGI to do most human work for <$25/hr by 2043, many things must happen.
We forecast cascading conditional probabilities for 10 necessary events, and find they multiply to an overall likelihood of 0.4%:
| Event | Forecast by 2043 or TAGI, conditional on prior steps |
| --- | --- |
| We invent algorithms for transformative AGI | 60% |
| We invent a way for AGIs to learn faster than humans | 40% |
| AGI inference costs drop below $25/hr (per human equivalent) | 16% |
| We invent and scale cheap, quality robots | 60% |
| We massively scale production of chips and power | 46% |
| We avoid derailment by human regulation | 70% |
| We avoid derailment by AI-caused delay | 90% |
| We avoid derailment from wars (e.g., China invades Taiwan) | 70% |
| We avoid derailment from pandemics | 90% |
| We avoid derailment from severe depressions | 95% |
| **Joint odds** | **0.4%** |
If you think our estimates are pessimistic, feel free to substitute your own here. You’ll find it difficult to arrive at odds above 10%.
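As a convenience for that exercise, here is a minimal Python sketch of the calculation (the step names and estimates come from the table above; the script itself is ours, not part of the paper):

```python
from math import prod

# Conditional probability of each step by 2043, given all prior steps
# (estimates taken from the table above).
steps = {
    "We invent algorithms for transformative AGI": 0.60,
    "We invent a way for AGIs to learn faster than humans": 0.40,
    "AGI inference costs drop below $25/hr (per human equivalent)": 0.16,
    "We invent and scale cheap, quality robots": 0.60,
    "We massively scale production of chips and power": 0.46,
    "We avoid derailment by human regulation": 0.70,
    "We avoid derailment by AI-caused delay": 0.90,
    "We avoid derailment from wars (e.g., China invades Taiwan)": 0.70,
    "We avoid derailment from pandemics": 0.90,
    "We avoid derailment from severe depressions": 0.95,
}

# The joint probability is the product of the cascading conditionals.
joint = prod(steps.values())
print(f"Joint probability of transformative AGI by 2043: {joint:.2%}")  # ~0.40%
```

Editing the values in `steps` and rerunning makes it easy to check how hard it is to push the product above 10%.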
Of course, the difficulty is by construction. Any framework that multiplies ten probabilities together is almost fated to produce low odds.
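For illustration: ten steps each judged 90% likely already multiply out to 0.9^10 ≈ 35%, and a single 16% step drags the product well below 10%.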
So a good skeptic must ask: Is our framework fair?
There are two possible errors to beware of:
Did we neglect possible parallel paths to transformative AGI?
Did we hew toward unconditional probabilities rather than fully conditional probabilities?
We believe we are innocent of both sins.
Regarding failing to model parallel disjunctive paths:
We have chosen generic steps that don’t make rigid assumptions about the particular algorithms, requirements, or timelines of AGI technology
One opinionated claim we do make is that transformative AGI by 2043 will almost certainly be run on semiconductor transistors powered by electricity and built in capital-intensive fabs, and we spend many pages justifying this belief
Regarding failing to really grapple with conditional probabilities:
Our conditional probabilities are, in some cases, quite different from our unconditional probabilities. In particular, we assume that a world on track to transformative AGI will:
Construct semiconductor fabs and power plants at a far faster pace than today (our unconditional probability is substantially lower)
Have invented very che...