

Former Machine Intelligence Research Institute (MIRI) researcher Tsvi Benson-Tilsen is championing an audacious path to prevent AI doom: engineering smarter humans to tackle AI alignment.
I consider this one of the few genuinely viable alignment solutions, and Tsvi is at the forefront of the effort. After seven years at MIRI, he co-founded the Berkeley Genomics Project to advance the human germline engineering approach.
In this episode, Tsvi lays out how to lower P(doom), arguing we must stop AGI development and stigmatize it like gain-of-function virus research. We cover his AGI timelines, the mechanics of genomic intelligence enhancement, and whether super-babies can arrive fast enough to save us.
I’ll be releasing my full interview with Tsvi in 3 parts. Stay tuned for part 2 next week!
Timestamps
0:00 Episode Preview & Introducing Tsvi Benson-Tilsen
1:56 What’s Your P(Doom)™
4:18 Tsvi’s AGI Timeline Prediction
6:16 What’s Missing from Current AI Systems
10:05 The State of AI Alignment Research: 0% Progress
11:29 The Case for PauseAI
15:16 Debate on Shaming AGI Developers
25:37 Why Human Germline Engineering
31:37 Enhancing Intelligence: Chromosome Vs. Sperm Vs. Egg Selection
37:58 Pushing the Limits: Head Size, Height, Etc.
40:05 What About Human Cloning?
43:24 The End-to-End Plan for Germline Engineering
45:45 Will Germline Engineering Be Fast Enough?
48:28 Outro: How to Support Tsvi’s Work
Show Notes
Tsvi’s organization, the Berkeley Genomics Project — https://berkeleygenomics.org
If you’d like to connect with Tsvi about germline engineering, you can reach out to him at [email protected].
---
Doom Debates’ mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏
By Liron Shapira
