

Dr. Mike Israetel is a well-known bodybuilder and fitness influencer with over 600,000 Instagram followers, and a surprisingly intelligent commentator on other subjects, including an entire recent episode on the AI alignment problem.
Mike brought up many points worth responding to, making for an engaging reaction episode. I also appreciate that he’s helping get the urgent topic of AI alignment in front of a mainstream audience.
Unfortunately, Mike doesn’t engage with the possibility that AI alignment is an intractable technical problem on a 5-20 year timeframe, which I think is more likely than not. That’s the crux of why he and I disagree, and why I see most of his episode as talking past the other intelligent positions people take on AI alignment. I hope he’ll keep engaging with the topic and rethink his position.
00:00 Introduction
03:08 AI Risks and Scenarios
06:42 Superintelligence Arms Race
12:39 The Importance of AI Alignment
18:10 Challenges in Defining Human Values
26:11 The Outer and Inner Alignment Problems
44:00 Transhumanism and AI's Potential
45:42 The Next Step In Evolution
47:54 AI Alignment and Potential Catastrophes
50:48 Scenarios of AI Development
54:03 The AI Alignment Problem
01:07:39 AI as a Helper System
01:08:53 Corporations and AI Development
01:10:19 The Risk of Unaligned AI
01:27:18 Building a Superintelligent AI
01:30:57 Conclusion
Follow Mike Israetel:
* instagram.com/drmikeisraetel
* youtube.com/@MikeIsraetelMakingProgress
Get the full Doom Debates experience:
* Subscribe to youtube.com/@DoomDebates
* Subscribe to this Substack: DoomDebates.com
* Search "Doom Debates" to subscribe in your podcast player
* Follow me at x.com/liron
By Liron Shapira