I want to be transparent about how I’ve updated my mainline AI doom scenario in light of safe & useful LLMs. So here’s where I’m at…
00:00 Introduction
07:59 The Dangerous Threshold to Runaway Superintelligence
18:57 Superhuman Goal Optimization = Infinite Time Horizon
21:21 Goal-Completeness by Analogy to Turing-Completeness
26:53 Intellidynamics
29:13 Goal-Optimization Is Convergent
31:15 Early AIs Lose Control of Later AIs
34:46 The Superhuman Threshold Is Real
38:27 Expecting Rapid FOOM
40:20 Rocket Alignment
49:59 Stability of Values Under Self-Modification
53:13 The Way to Heaven Passes Right By Hell
57:32 My Mainline Doom Scenario
01:17:46 What Values Does The Goal Optimizer Have?
Show Notes
My recent episode with Jim Babcock on this same topic of mainline doom scenarios — https://www.youtube.com/watch?v=FaQjEABZ80g
The Rocket Alignment Problem by Eliezer Yudkowsky — https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem
Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
By Liron Shapira