What’s the most likely (“mainline”) AI doom scenario? How does the existence of LLMs update the original Yudkowskian version? I invited my friend Jim Babcock to help me answer these questions.
Jim is a member of the LessWrong engineering team and its parent organization, Lightcone Infrastructure. I’ve been a longtime fan of his thoughtful takes.
This turned out to be a VERY insightful and informative discussion, useful for clarifying my own predictions, and accessible to the show’s audience.
00:00 Introducing Jim Babcock
01:29 The Evolution of LessWrong Doom Scenarios
02:22 LessWrong’s Mission
05:49 The Rationalist Community and AI
09:37 What’s Your P(Doom)™
18:26 What Are Yudkowskians Surprised About?
26:48 Moral Philosophy vs. Goal Alignment
36:56 Sandboxing and AI Containment
42:51 Holding Yudkowskians Accountable
58:29 Understanding Next Word Prediction
01:00:02 Pre-Training vs. Post-Training
01:08:06 The Rocket Alignment Problem Analogy
01:30:09 FOOM vs. Gradual Disempowerment
01:45:19 Recapping the Mainline Doom Scenario
01:52:08 Liron’s Outro
Show Notes
Jim’s LessWrong — https://www.lesswrong.com/users/jimrandomh
Jim’s Twitter — https://x.com/jimrandomh
The Rocket Alignment Problem by Eliezer Yudkowsky — https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem
Optimality is the Tiger and Agents Are Its Teeth — https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality-is-the-tiger-and-agents-are-its-teeth
Doom Debates episode about the research paper discovering AI's utility function — https://lironshapira.substack.com/p/cais-researchers-discover-ais-preferences
Come to the LessOnline conference on May 30 - Jun 1, 2025: https://less.online
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates