
The recent book “If Anyone Builds It, Everyone Dies” (September 2025) by Eliezer Yudkowsky and Nate Soares argues that creating superintelligent AI in the near future would almost certainly cause human extinction:
If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die.
The goal of this post is to summarize and evaluate the book's key arguments and the main counterarguments critics have made against them.
Although several other reviews of the book have already been written, I found many of them unsatisfying: a lot are written by journalists aiming for an entertaining piece, who only lightly cover the core arguments or don’t seem to understand them properly, and who instead resort to weak tactics like straw-manning, ad hominem attacks, or criticizing the book’s style.
So my goal is to write a book review that has the following properties:
---
Outline:
(07:43) Background arguments to the key claim
(09:21) The key claim: ASI alignment is extremely difficult to solve
(12:52) 1. Human values are a very specific, fragile, and tiny space of all possible goals
(15:25) 2. Current methods used to train goals into AIs are imprecise and unreliable
(16:42) The inner alignment problem
(17:25) Inner alignment introduction
(19:03) Inner misalignment evolution analogy
(21:03) Real examples of inner misalignment
(22:23) Inner misalignment explanation
(25:05) ASI misalignment example
(27:40) 3. The ASI alignment problem is hard because it has the properties of hard engineering challenges
(28:10) Space probes
(29:09) Nuclear reactors
(30:18) Computer security
(30:35) Counterarguments to the book
(30:46) Arguments that the book's arguments are unfalsifiable
(33:19) Arguments against the evolution analogy
(37:38) Arguments against counting arguments
(40:16) Arguments based on the aligned behavior of modern LLMs
(43:16) Arguments against engineering analogies to AI alignment
(45:05) Three counterarguments to the book's three core arguments
(46:43) Conclusion
(49:23) Appendix
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By LessWrong
