Doom Debates

The Most Likely AI Doom Scenario — with Jim Babcock, LessWrong Team


Listen Later

What’s the most likely (“mainline”) AI doom scenario? How does the existence of LLMs update the original Yudkowskian version? I invited my friend Jim Babcock to help me answer these questions.

Jim is a member of the LessWrong engineering team and its parent organization, Lightcone Infrastructure. I’ve been a longtime fan of his thoughtful takes.

This turned out to be a VERY insightful and informative discussion — useful for clarifying my own predictions, and accessible to the show’s audience.

00:00 Introducing Jim Babcock

01:29 The Evolution of LessWrong Doom Scenarios

02:22 LessWrong’s Mission

05:49 The Rationalist Community and AI

09:37 What’s Your P(Doom)™

18:26 What Are Yudkowskians Surprised About?

26:48 Moral Philosophy vs. Goal Alignment

36:56 Sandboxing and AI Containment

42:51 Holding Yudkowskians Accountable

58:29 Understanding Next Word Prediction

01:00:02 Pre-Training vs Post-Training

01:08:06 The Rocket Alignment Problem Analogy

01:30:09 FOOM vs. Gradual Disempowerment

01:45:19 Recapping the Mainline Doom Scenario

01:52:08 Liron’s Outro

Show Notes

Jim’s LessWrong — https://www.lesswrong.com/users/jimrandomh

Jim’s Twitter — https://x.com/jimrandomh

The Rocket Alignment Problem by Eliezer Yudkowsky — https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem

Optimality is the Tiger and Agents Are Its Teeth — https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality-is-the-tiger-and-agents-are-its-teeth

Doom Debates episode about the research paper discovering AI's utility function — https://lironshapira.substack.com/p/cais-researchers-discover-ais-preferences

Come to the LessOnline conference, May 30 – Jun 1, 2025: https://less.online

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence

PauseAI, the volunteer organization I’m part of: https://pauseai.info

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Doom Debates, by Liron Shapira

Rating: 4.1 (9 ratings)

