Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Immortality or death by AGI, published by ImmortalityOrDeathByAGI on September 22, 2023 on LessWrong.
AKA My Most Likely Reason to Die Young is AI X-Risk
TL;DR: I made a model that takes into account AI timelines, the probability of AI going wrong, and the probabilities of dying from other causes. The main "end states" for my life come out as either dying from AGI due to a lack of AI safety (35%) or surviving AGI and living to see aging solved (43%).
Meta: I'm posting this under a pseudonym because many people I trust had a strong intuition that I shouldn't post under my real name, and I didn't feel like investing the energy to resolve the disagreement. I'd rather people didn't de-anonymize me.
The model & results
I made a simple probabilistic model of the future, which takes seriously the possibility of AGI being invented soon, its risks, and its effects on technological development (particularly in medicine):
Without AGI, people keep dying at historical rates (following US actuarial tables)
At some point, AGI is invented (following Metaculus timelines)
At the point AGI is invented, there are two scenarios (following my estimates of humanity's odds of survival given AGI at any point in time, which are relatively pessimistic):
We survive AGI.
We don't survive AGI.
If we survive AGI, there are two scenarios:
We never solve aging (maybe because aging is fundamentally unsolvable or we decide not to solve it).
AGI is used to solve aging.
If AGI is eventually used to solve aging, people keep dying at historical rates until that point.
I model the time between AGI and aging being solved as an exponential distribution with a mean time of 5 years.
Using this model, I ran Monte Carlo simulations to predict the probability of the main end states of my life (as someone born in 2001 who lives in the US):
I die before AGI: 10%
I die from AGI: 35%
I survive AGI but die because we never solve aging: 11%
I survive AGI but die before aging is solved: 1%
I survive AGI and live to witness aging being solved: 43%
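A minimal Monte Carlo sketch of this kind of model looks like the following. Every input here is a made-up stand-in, not the post's actual parameters: a toy Gompertz hazard stands in for the US actuarial table, a lognormal delay stands in for the Metaculus AGI timeline, and P(everyone dies | AGI) is flattened to a single constant rather than varying with the AGI year.

```python
import math
import random

random.seed(0)

BIRTH_YEAR = 2001
NOW = 2023

def gompertz_hazard(age):
    """Toy annual probability of death at a given age (illustrative, not the real table)."""
    return min(1.0, 0.0001 * math.exp(0.085 * age))

def sample_death_age(start_age):
    """Walk forward one year at a time until the death coin lands."""
    age = start_age
    while random.random() > gompertz_hazard(age):
        age += 1
    return age

def sample_agi_year():
    """Lognormal delay from now (toy parameters; median ~10 years out)."""
    return NOW + random.lognormvariate(math.log(10), 0.8)

def simulate_one():
    death_year = BIRTH_YEAR + sample_death_age(NOW - BIRTH_YEAR)
    agi_year = sample_agi_year()
    if death_year < agi_year:
        return "die before AGI"
    if random.random() < 0.40:             # toy flat P(doom | AGI)
        return "die from AGI"
    if random.random() < 0.20:             # toy P(aging never solved)
        return "die, aging never solved"
    cure_year = agi_year + random.expovariate(1 / 5)  # mean 5 years to cure
    if death_year < cure_year:
        return "die before aging solved"
    return "see aging solved"

N = 100_000
tally = {}
for _ in range(N):
    outcome = simulate_one()
    tally[outcome] = tally.get(outcome, 0) + 1

for outcome, count in sorted(tally.items(), key=lambda kv: -kv[1]):
    print(f"{outcome}: {count / N:.1%}")
```

With the stand-in parameters above the exact percentages won't match the post's; the point is the structure, where each simulated life falls into exactly one of the five end states.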
There is a Jupyter notebook where you can play around with the parameters and see what the probability distribution looks like for you (scroll to the last section).
Here's what my model implies for people based on their year of birth, conditioning on them being alive in 2023:
As expected, the earlier people are born, the likelier they are to die before AGI. The later someone is born, the likelier they are to either die from AGI or have the option to live for a very long time thanks to AGI-enabled advances in medicine.
Following my (relatively pessimistic) AI safety assumptions, for anyone born after ~1970, dying by AGI and having the option to live "forever" are the two most likely scenarios. Most people alive today have a solid chance at living to see aging cured. However, if we don't ensure that AI is safe, we will never be able to enter that future.
I also ran this model with more conventional estimates of timelines and P(everyone dies | AGI): timelines twice as long as the Metaculus timelines, and a P(everyone dies | AGI) of 15% in 2023, decaying exponentially at a rate that brings it to 1% in 2060.
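The decay rate implied by "15% in 2023, reaching 1% in 2060" follows directly from the exponential form P(t) = P₀·e^(−r·(t−t₀)); solving gives r = ln(0.15/0.01)/(2060−2023):

```python
import math

# P(everyone dies | AGI) assumed 15% in 2023, decaying exponentially to 1% by 2060
p0, p1 = 0.15, 0.01
t0, t1 = 2023, 2060
rate = math.log(p0 / p1) / (t1 - t0)   # ~0.073 per year

def p_doom(year):
    """P(everyone dies | AGI happens in this year) under the exponential decay."""
    return p0 * math.exp(-rate * (year - t0))

print(f"decay rate: {rate:.4f} / year")
print(f"P(everyone dies | AGI in 2040): {p_doom(2040):.1%}")
```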
For the more conventional timelines and P(everyone dies | AGI), the modal scenarios are dying before AGI, and living to witness aging being solved. Dying from AGI hovers around 1-4% for most people.
Assumptions
Without AGI, people keep dying at historical rates
I think this is probably roughly correct: we're likely to see advances in medicine before AGI, but nuclear and biorisk roughly counteract that (one could model how these interact, but I didn't want to add more complexity to the model). I use the US actuarial life table for men (which is very similar to the one for women) to determine the probability of dying at any particular age.
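An actuarial table of this kind is used by chaining annual survival probabilities: the chance of surviving a span of years is the product of (1 − q) over each year, where q is that age's annual death probability. A small sketch with illustrative q values (made up for the example, not the real SSA table):

```python
# Illustrative annual death probabilities q(age), in the style of a US
# actuarial life table (values are made up, not the real table):
q = {22: 0.0013, 23: 0.0013, 24: 0.0014, 25: 0.0014,
     26: 0.0015, 27: 0.0015, 28: 0.0016, 29: 0.0016}

survival = 1.0
for age in range(22, 30):
    survival *= 1 - q[age]   # must survive each year in turn

print(f"P(surviving age 22 through 29): {survival:.4f}")
```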