Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nuclear war tail risk has been exaggerated?, published by Vasco Grilo on February 26, 2024 on The Effective Altruism Forum.
The views expressed here are my own, not those of Alliance to Feed the Earth in Disasters (ALLFED), for which I work as a contractor.
Summary
I calculated a nearterm annual risk of human extinction from nuclear war of 5.93*10^-12 (more).
I believe grantmakers and donors interested in decreasing extinction risk would do better to focus on artificial intelligence (AI) rather than nuclear war (more).
I would say the case for sometimes prioritising nuclear extinction risk over AI extinction risk is much weaker than the case for sometimes prioritising natural extinction risk over nuclear extinction risk (more).
I get the sense that the extinction risk from nuclear war was massively overestimated in The Existential Risk Persuasion Tournament (XPT) (more).
I have the impression Toby Ord greatly overestimated tail risk in The Precipice (more).
I believe interventions to decrease deaths from nuclear war should be assessed based on standard cost-benefit analysis (more).
I think increasing calorie production via new food sectors is less cost-effective at saving lives than measures targeting distribution (more).
Extinction risk from nuclear war
I calculated a nearterm annual risk of human extinction from nuclear war of 5.93*10^-12 (= (6.36*10^-14*5.53*10^-10)^0.5) as the geometric mean between[1]:
My prior of 6.36*10^-14 for the annual probability of a war causing human extinction.
My inside view estimate of 5.53*10^-10 for the nearterm annual probability of human extinction from nuclear war.
By nearterm annual risk, I mean the risk in a randomly selected year from 2025 to 2050. I computed my inside view estimate of 5.53*10^-10 (= 0.0131*0.0422*10^-6) by multiplying:
1.31 % annual probability of a nuclear weapon being detonated as an act of war.
4.22 % probability of insufficient calorie production given at least one nuclear detonation.
10^-6 probability of human extinction given insufficient calorie production.
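As a minimal sketch, the arithmetic above can be reproduced in Python; the variable names are mine, introduced for illustration, not from the original analysis:
```python
# Reproducing the headline estimate from the figures above.
p_detonation = 0.0131                # annual probability of a nuclear detonation as an act of war
p_shortfall = 0.0422                 # probability of insufficient calorie production given a detonation
p_extinction_given_shortfall = 1e-6  # probability of human extinction given insufficient calorie production

# Inside view: nearterm annual probability of human extinction from nuclear war.
inside_view = p_detonation * p_shortfall * p_extinction_given_shortfall
print(f"inside view: {inside_view:.3g}")  # ~5.53e-10

# Prior for the annual probability of a war causing human extinction.
prior = 6.36e-14

# Final estimate: geometric mean of the prior and the inside view.
risk = (prior * inside_view) ** 0.5
print(f"extinction risk: {risk:.3g}")  # ~5.93e-12
```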
I explain the rationale for the above estimates in the next sections. Note that nuclear war might have cascade effects which lead to civilisational collapse[2], which could increase longterm extinction risk while having a negligible impact on the nearterm risk I estimated. I do not explicitly assess this in the post, but I guess the nearterm annual risk of human extinction from nuclear war is a good proxy for the importance of decreasing nuclear risk from a longtermist perspective:
My prior implicitly accounts for the cascade effects of wars. I derived it from historical data on combatant deaths due not only to fighting, but also to disease and starvation, which are ever-present indirect effects of war.
Nuclear war might have cascade effects, but so do other catastrophes.
Global civilisational collapse due to nuclear war seems very unlikely to me. For instance, the maximum area destroyable by any country in a nuclear 1st strike was estimated to be 65.3 k km^2 in Suh 2023 (for a strike by Russia), which is just 70.8 % (= 65.3*10^3/(92.2*10^3)) of the area of Portugal, or 3.42 % (= 65.3*10^3/(1.91*10^6)) of the global urban area (see the arithmetic check after this list).
Even if nuclear war causes a global civilisational collapse which eventually leads to extinction, I guess full recovery would be extremely likely. In contrast, an extinction caused by advanced AI would arguably not allow for a full recovery.
I am open to the idea that nuclear war can have longterm implications even in the case of full recovery, but considerations along these lines would arguably be more pressing in the context of AI risk.
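As a rough check on the area comparison above, here is the same arithmetic in Python; all figures are taken from the text, with Suh 2023 as the cited source for the destroyable area:
```python
# Figures from the text above (areas in km^2).
max_destroyable_area = 65.3e3  # maximum destroyable area in a nuclear 1st strike (Suh 2023, Russia)
area_portugal = 92.2e3         # area of Portugal
global_urban_area = 1.91e6     # global urban area

print(f"{max_destroyable_area / area_portugal:.1%}")      # 70.8 % of the area of Portugal
print(f"{max_destroyable_area / global_urban_area:.2%}")  # 3.42 % of the global urban area
```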
For context, William MacAskill said the following on The 80,000 Hours Podcast: "It's quite plausible, actually, when we look to the very long-term future, that that's [whether artificial...