Thanks to everyone who participated in the live Q&A on Friday!
The topics covered include advice for computer science students, working in AI trustworthiness, what good AI regulation looks like, the implications of the $500B Stargate project, the public's gradual understanding of AI risks, the impact of minor AI disasters, and the philosophy of consciousness.
00:00 Advice for Comp Sci Students
01:14 The $500B Stargate Project
02:36 Eliezer's Recent Podcast
03:07 AI Safety and Public Policy
04:28 AI Disruption and Politics
05:12 DeepSeek and AI Advancements
06:54 Human vs. AI Intelligence
14:00 Consciousness and AI
24:34 Dark Forest Theory and AI
35:31 Investing in Yourself
42:42 Probability of Aliens Saving Us from AI
43:31 Brain-Computer Interfaces and AI Safety
46:19 Debating AI Safety and Human Intelligence
48:50 Nefarious AI Activities and Satellite Surveillance
49:31 Pliny the Prompter Jailbreaking AI
50:20 Can’t vs. Won’t Destroy the World
51:15 How to Make AI Risk Feel Present
54:27 Keeping Doom Arguments On Track
57:04 Game Theory and AI Development Race
01:01:26 Mental Model of Average Non-Doomer
01:04:58 Is Liron a Strict Bayesian and Utilitarian?
01:09:48 Can We Rename “Doom Debates”
01:12:34 The Role of AI Trustworthiness
01:16:48 Minor AI Disasters
01:18:07 Most Likely Reason Things Go Well
01:21:00 Final Thoughts
Show Notes
Previous post where people submitted questions: https://lironshapira.substack.com/p/2500-subscribers-live-q-and-a-ask
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates
By Liron Shapira