


Where ‘it’ is superintelligence, an AI smarter and more capable than humans.
And where ‘everyone dies’ means that everyone dies.
No, seriously. They’re not kidding. They mean this very literally.
To be precise, they mean that ‘If anyone builds [superintelligence] [under anything like present conditions using anything close to current techniques] then everyone dies.’
My position on this is to add a ‘probably’ before ‘dies.’ Otherwise, I agree.
This book gives us the best longform explanation of why everyone would die, with the ‘final form’ of Yudkowsky-style explanations of these concepts for new audiences.
This review is me condensing that down much further, transposing the style a bit, and adding some of my own perspective.
Scott Alexander also offers his review at Astral Codex Ten, which I found very good. I will be stealing several of his lines in the future, and [...]
---
Outline:
(01:22) What Matters Is Superintelligence
(03:56) Rhetorical Innovation
(05:14) Welcome To The Torment Nexus
(06:50) Predictions Are Hard, Especially About the Future
(08:32) Humans That Are Not Concentrating Are Not General Intelligences
(11:18) Orthogonality
(12:08) Intelligence Lets You Do All The Things
(14:39) No Seriously We Mean All The Things
(17:00) How To Train Your LLM (In Brief)
(19:51) What Do We Want?
(21:51) You Don't Only Get What You Train For
(24:25) What Will AI Superintelligence Want?
(25:52) What Could A Superintelligence Do?
(27:43) One Extinction Scenario
(33:14) So You're Saying There's A Chance
(37:55) Oh Look It's The Alignment Plan
(40:50) The Proposal: Shut It Down
(48:26) Hope Is A Vital Part Of Any Strategy
(48:55) I'm Doing My Part
(53:34) Their Closing Words
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By LessWrong
