Doom Debates

Rob Miles, Top AI Safety Educator: Humanity Isn’t Ready for Superintelligence!



Rob Miles is the most popular AI safety educator on YouTube, with millions of views across his videos explaining AI alignment to general audiences. He dropped out of his PhD in 2011 to focus entirely on AI safety communication – a prescient career pivot that positioned him as one of the field's most trusted voices over a decade before ChatGPT made AI risk mainstream.

Rob sits firmly in the 10-90% P(Doom) range, though he admits his uncertainty is "hugely variable" and depends heavily on how humanity responds to the challenge. What makes Rob particularly compelling is the contrast between his characteristic British calm and his deeply serious assessment of our situation. He's the type of person who can explain existential risk with the measured tone of a nature documentarian while internally believing we're probably headed toward catastrophe.

Rob has identified several underappreciated problems, particularly around alignment stability under self-modification. He argues that even if we align current AI systems, there's no guarantee their successors will inherit those values – a discontinuity problem that most safety work ignores. He's also highlighted the "missing mood" in AI discourse, where people discuss potential human extinction with the emotional register of an academic conference rather than an emergency.

We explore Rob's mainline doom scenario involving recursive self-improvement, why he thinks there's enormous headroom above human intelligence, and his views on everything from warning shots to the Malthusian dynamics that might govern a post-AGI world. Rob makes a fascinating case that we may be the "least intelligent species capable of technological civilization" – which has profound implications for what smarter systems might achieve.

Our key disagreement centers on strategy: Rob thinks some safety-minded people should work inside AI companies to influence them from within, while I argue this enables "tractability washing" that makes the companies look responsible while they race toward potentially catastrophic capabilities. Rob sees it as necessary harm reduction; I see it as providing legitimacy to fundamentally reckless enterprises.

The conversation also tackles a meta-question about communication strategy. Rob acknowledges that his measured, analytical approach might be missing something crucial – that perhaps someone needs to be "running around screaming" to convey the appropriate emotional urgency. It's a revealing moment from someone who's spent over a decade trying to wake people up to humanity's most important challenge, only to watch the world continue treating it as an interesting intellectual puzzle rather than an existential emergency.

Timestamps

* 00:00:00 - Cold Open

* 00:00:28 - Introducing Rob Miles

* 00:01:42 - Rob's Background and Childhood

* 00:02:05 - Being Aspie

* 00:04:50 - Less Wrong Community and "Normies"

* 00:06:24 - Chesterton's Fence and Cassava Root

* 00:09:30 - Transition to AI Safety Research

* 00:11:52 - Discovering Communication Skills

* 00:15:36 - YouTube Success and Channel Growth

* 00:16:46 - Current Focus: Technical vs Political

* 00:18:50 - Nuclear Near-Misses and Y2K

* 00:21:55 - What’s Your P(Doom)™

* 00:27:31 - Uncertainty About Human Response

* 00:31:04 - Views on Yudkowsky and AI Risk Arguments

* 00:42:07 - Mainline Catastrophe Scenario

* 00:47:32 - Headroom Above Human Intelligence

* 00:54:58 - Detailed Doom Scenario

* 01:01:07 - Self-Modification and Alignment Stability

* 01:17:26 - Warning Shots Problem

* 01:20:28 - Moving the Overton Window

* 01:25:59 - Protests and Political Action

* 01:33:02 - The Missing Mood Problem

* 01:40:28 - Raising Society's Temperature

* 01:44:25 - "If Anyone Builds It, Everyone Dies"

* 01:51:05 - Technical Alignment Work

* 01:52:00 - Working Inside AI Companies

* 01:57:38 - Tractability Washing at AI Companies

* 02:05:44 - Closing Thoughts

* 02:08:21 - How to Support Doom Debates: Become a Mission Partner

Links

Rob’s YouTube channel — https://www.youtube.com/@RobertMilesAI

Rob’s Twitter — https://x.com/robertskmiles

Rational Animations (another great YouTube channel, narrated by Rob) — https://www.youtube.com/RationalAnimations

Become a Mission Partner!

Want to meaningfully help the show's mission (raise awareness of AI x-risk & raise the level of debate around this crucial subject) become reality? Donate $1,000+ to the show (no upper limit) and I'll invite you to the private Discord channel. Email me at [email protected] if you have questions or want to donate crypto.



Get full access to Doom Debates at lironshapira.substack.com/subscribe

