This is a link post.

Summary:
As we approach and pass through an AI takeoff period, the risk of nuclear war (or other all-out global conflict) will increase.
An AI takeoff would involve the automation of scientific and technological research. This would lead to much faster technological progress, including military technologies. In such a rapidly changing world, some of the circumstances which underpin the current peaceful equilibrium will dissolve or change. There are then two risks[1]:
- Fundamental instability. New circumstances could create a situation in which there is no peaceful equilibrium that it is in everyone's interest to maintain (a standard bargaining sketch of this follows the summary). For example:
  - If the nuclear calculus changes to make second-strike capabilities infeasible
  - If one party is racing ahead with technological progress and will soon trivially outmatch the rest of the world, with no way to credibly commit not to completely disempower everyone else once it has done so
- Failure [...]
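To make the "fundamental instability" idea concrete, here is the textbook rationalist-bargaining sketch this framing draws on, in the spirit of Fearon's "Rationalist Explanations for War"; the notation (p, c_A, c_B, x) is a standard construction used for illustration here, not anything specific to the linked post. Two states, A and B, dispute a prize of size 1. If they fight, A wins with probability p and the sides pay costs c_A, c_B > 0, so A's expected war payoff is p - c_A and B's is 1 - p - c_B. Any peaceful division giving A a share x with

\[ p - c_A \le x \le p + c_B \]

leaves both sides at least as well off as fighting, and since c_A + c_B > 0 this range is never empty: some deal always dominates war. Rational war therefore needs a mechanism that stops such deals from sticking, which is exactly what the rational causes discussed in the post supply: commitment issues (a rapid power shift raises p before the rising side can credibly promise to honor today's deal), private information (the sides disagree about p or the costs), and issue indivisibility (no intermediate x is available).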
---
Outline:
(02:41) Why do(n’t) people go to war?
(03:34) Rational reasons to go to war
(05:44) Irrational reasons to go to war
(06:21) Impacts of AI takeoff on reasons to go to war
(06:56) Impacts on rational reasons for war
(07:00) Commitment issues
(08:18) Private information
(09:05) Issue indivisibility
(09:59) Impacts on irrational reasons for war
(10:04) Irrational decision-making
(11:29) Misaligned decision-makers
(12:12) National pride
(13:01) Strategies for reducing risk of war
(13:06) Strategies for averting failure to navigate takeoff
(13:25) Research and dissemination
(14:02) Spreading “we’re all in this together” frames
(14:50) Agreements/treaties about sharing power of AI
(16:00) Differential technological development
(18:46) What about an AI pause?
(19:36) Closing thoughts
(19:40) What about non-nuclear warfare?
(19:56) How big a deal is this?
The original text contained 2 footnotes which were omitted from this narration.
---