Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Microdooms averted by working on AI Safety, published by nikola on September 18, 2023 on LessWrong.
Disclaimer: the models presented are extremely naive and simple, and assume that existential risk from AI is higher than 20%. Play around with the models using this (mostly GPT-4 generated) Jupyter notebook.
1 microdoom = 1/1,000,000 probability of existential risk
Diminishing returns model
The model has the following assumptions:
Absolute Risk Reduction: There exists an absolute decrease in existential risk that could be achieved if the AI safety workforce were at an "ideal size." This absolute risk reduction is a parameter in the model.
Note that this is absolute reduction, not relative reduction. So, a 10% absolute reduction means going from 20% x-risk to 10% x-risk, or from 70% x-risk to 60% x-risk.
Current and Ideal Workforce Size: The model also takes into account the current size of the workforce and an "ideal" size: a larger workforce that would achieve a much greater decrease in existential risk than the current one. Both are parameters in the model.
Diminishing Returns: The model assumes diminishing returns on adding more people to the AI safety effort. Specifically, the returns are modeled to increase logarithmically with the size of the workforce.
The goal is to estimate the expected decrease in existential risk that would result from adding one more person to the current AI safety workforce. By inputting the current size of the workforce, the ideal size, and the potential absolute risk reduction, the model gives the expected decrease.
If we run this with:
Current size = 350
Ideal size = 100,000
Absolute decrease (between 0 and ideal size) = 20%
we get that one additional career averts 49 microdooms. Because of diminishing returns, the impact from an additional career is very sensitive to how big the workforce currently is.
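This can be sketched in a few lines of Python. The sketch assumes the risk reduction grows as A·ln(n)/ln(n_ideal), a logarithmic form that reproduces the ~49-microdoom figure above; the notebook may parametrize the diminishing returns differently:

```python
import math

def marginal_microdooms(current_size, ideal_size, absolute_reduction):
    """Microdooms averted by one additional person, assuming risk reduction
    grows logarithmically with workforce size and reaches
    `absolute_reduction` at `ideal_size` workers."""
    def reduction(n):
        return absolute_reduction * math.log(n) / math.log(ideal_size)
    return (reduction(current_size + 1) - reduction(current_size)) * 1_000_000

print(marginal_microdooms(350, 100_000, 0.20))  # ≈ 49.6
```

Because the marginal term is roughly A / (n·ln(n_ideal)), doubling the current workforce size halves the impact of the next person, which is why the result is so sensitive to the current size.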
Pareto distribution model
We assume that the impact of professionals in the field follows a Pareto distribution, where 10% of the people account for 90% of the impact.
Model Parameters
Workforce Size: The total number of people currently working in AI safety.
Total Risk Reduction: The absolute decrease in existential risk that the AI safety workforce is currently achieving.
If we run this with:
Current size = 350
Absolute risk reduction (from current size) = 10%
we get that, if you're a typical current AI safety professional (between the 10th and 90th percentile), you reduce somewhere between roughly 14 and 269 microdooms. Because the distribution is so skewed, the mean is 286 microdooms, which is higher than the 90th percentile.
A 10th percentile AI Safety professional reduces x-risk by 14 microdooms
A 20th percentile AI Safety professional reduces x-risk by 16 microdooms
A 30th percentile AI Safety professional reduces x-risk by 20 microdooms
A 40th percentile AI Safety professional reduces x-risk by 24 microdooms
A 50th percentile AI Safety professional reduces x-risk by 31 microdooms
A 60th percentile AI Safety professional reduces x-risk by 41 microdooms
A 70th percentile AI Safety professional reduces x-risk by 61 microdooms
An 80th percentile AI Safety professional reduces x-risk by 106 microdooms
A 90th percentile AI Safety professional reduces x-risk by 269 microdooms
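One way to implement this kind of model — a sketch using a continuous Pareto distribution whose shape parameter is fitted so that the top 10% of people account for 90% of the total impact. The notebook presumably uses a different (discrete or sampled) parametrization, so this sketch will not reproduce the exact decile figures above, though the mean comes out at the same 286 microdooms:

```python
import math

def pareto_microdooms(workforce, total_reduction, top_frac=0.10, top_share=0.90):
    """Per-person microdooms at each decile, assuming impact follows a
    Pareto distribution in which the top `top_frac` of people produce
    `top_share` of the total risk reduction."""
    # For Pareto(alpha), the top fraction p holds p**(1 - 1/alpha) of the total.
    alpha = 1.0 / (1.0 - math.log(top_share) / math.log(top_frac))
    mean = total_reduction * 1_000_000 / workforce   # microdooms per person
    x_min = mean * (alpha - 1.0) / alpha             # Pareto scale parameter
    # Quantile function of the Pareto distribution at each decile.
    return {q: x_min * (1.0 - q) ** (-1.0 / alpha)
            for q in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9)}

deciles = pareto_microdooms(350, 0.10)
```

Whatever the exact parametrization, the qualitative conclusion is the same: the distribution is so heavy-tailed that the mean sits above the 90th percentile.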
Linear growth model
If we just assume that going from 350 current people to 10,000 people would decrease x-risk by 10% linearly, we get that one additional career averts 10 microdooms.
One microdoom is A Lot Of Impact
Every model points to the conclusion that one additional AI safety professional decreases existential risk from AI by at least one microdoom.
Because there are 8 billion people alive today, averting one microdoom roughly corresponds to saving 8 thousand current human lives (especially under short timelines, where the...