
Where ‘it’ is superintelligence, an AI smarter and more capable than humans.
And where ‘everyone dies’ means that everyone dies.
No, seriously. They’re not kidding. They mean this very literally.
To be precise, they mean that ‘If anyone builds [superintelligence] [under anything like present conditions using anything close to current techniques] then everyone dies.’
My position on this is to add a ‘probably’ before ‘dies.’ Otherwise, I agree.
This book gives us the best longform explanation of why everyone would die, with the ‘final form’ of Yudkowsky-style explanations of these concepts for new audiences.
This review is me condensing that down much further, transposing the style a bit, and adding some of my own perspective.
Scott Alexander also offers his review at Astral Codex Ten, which I found very good. I will be stealing several of his lines in the future, and [...]
---
Outline:
(01:22) What Matters Is Superintelligence
(03:56) Rhetorical Innovation
(05:14) Welcome To The Torment Nexus
(06:50) Predictions Are Hard, Especially About the Future
(08:31) Humans That Are Not Concentrating Are Not General Intelligences
(11:18) Orthogonality
(12:08) Intelligence Lets You Do All The Things
(14:39) No Seriously We Mean All The Things
(16:59) How To Train Your LLM (In Brief)
(19:50) What Do We Want?
(21:51) You Don't Only Get What You Train For
(24:25) What Will AI Superintelligence Want?
(25:51) What Could A Superintelligence Do?
(27:43) One Extinction Scenario
(33:14) So You're Saying There's A Chance
(37:55) Oh Look It's The Alignment Plan
(40:50) The Proposal: Shut It Down
(48:26) Hope Is A Vital Part Of Any Strategy
(48:55) I'm Doing My Part
(53:34) Their Closing Words
---
Narrated by TYPE III AUDIO.