

My friend Gil Mark, who leads generative AI products at LinkedIn, thinks competition among superintelligent AIs will lead to a good outcome for humanity. In his view, the alignment problem becomes significantly easier if we build multiple AIs at the same time and let them compete.
I completely disagree, but I hope you’ll find this to be a thought-provoking episode that sheds light on why the alignment problem is so hard.
00:00 Introduction
02:36 Gil & Liron’s Early Doom Days
04:58 AIs : Humans :: Humans : Ants
08:02 The Convergence of AI Goals
15:19 What’s Your P(Doom)™
19:23 Multiple AIs and Human Welfare
24:42 Gil’s Alignment Claim
42:31 Cheaters and Frankensteins
55:55 Superintelligent Game Theory
01:01:16 Slower Takeoff via Resource Competition
01:07:57 Recapping the Disagreement
01:15:39 Post-Debate Banter
Show Notes
Gil’s LinkedIn: https://www.linkedin.com/in/gilmark/
Gil’s Twitter: https://x.com/gmfromgm
Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
By Liron Shapira · 4.3 (1414 ratings)
