
My friend Gil Mark, who leads generative AI products at LinkedIn, thinks competition among superintelligent AIs will lead to a good outcome for humanity. In his view, the alignment problem becomes significantly easier if we build multiple AIs at the same time and let them compete.
I completely disagree, but I hope you’ll find this to be a thought-provoking episode that sheds light on why the alignment problem is so hard.
00:00 Introduction
02:36 Gil & Liron’s Early Doom Days
04:58 AIs : Humans :: Humans : Ants
08:02 The Convergence of AI Goals
15:19 What’s Your P(Doom)™
19:23 Multiple AIs and Human Welfare
24:42 Gil’s Alignment Claim
42:31 Cheaters and Frankensteins
55:55 Superintelligent Game Theory
01:01:16 Slower Takeoff via Resource Competition
01:07:57 Recapping the Disagreement
01:15:39 Post-Debate Banter
Show Notes
Gil’s LinkedIn: https://www.linkedin.com/in/gilmark/
Gil’s Twitter: https://x.com/gmfromgm
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
