


Let’s see where the attendees of Manifest 2025 get off the Doom Train, and whether I can convince them to stay on and ride with me to the end of the line!
00:00 Introduction to Doom Debates
03:21 What’s Your P(Doom)?™
05:03 🚂 “AGI Isn't Coming Soon”
08:37 🚂 “AI Can't Surpass Human Intelligence”
12:20 🚂 “AI Won't Be a Physical Threat”
13:39 🚂 “Intelligence Yields Moral Goodness”
17:21 🚂 “Safe AI Development Process”
17:38 🚂 “AI Capabilities Will Rise at a Manageable Pace”
20:12 🚂 “AI Won't Try to Conquer the Universe”
25:00 🚂 “Superalignment Is A Tractable Problem”
28:58 🚂 “Once We Solve Superalignment, We’ll Enjoy Peace”
31:51 🚂 “Unaligned ASI Will Spare Us”
36:40 🚂 “AI Doomerism Is Bad Epistemology”
40:11 Bonus 🚂: “Fine, P(Doom) is high… but that’s ok!”
42:45 Recapping the Debate
See also my previous episode explaining the Doom Train: https://lironshapira.substack.com/p/poking-holes-in-the-ai-doom-argument
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
By Liron Shapira
