Doom Debates

His P(Doom) Doubles At The End — AI Safety Debate with Liam Robins, GWU Sophomore



Liam Robins is a math major at George Washington University who's diving deep into AI policy and rationalist thinking.

In Part 1, we explored how AI is transforming college life. Now in Part 2, we ride the Doom Train together to see if we can reconcile our P(Doom) estimates. 🚂

Liam starts with a P(Doom) of just 3%, but as we go through the stops on the Doom Train, something interesting happens: he actually updates his beliefs in real time!

We get into heated philosophical territory around moral realism, psychopaths, and whether intelligence naturally yields moral goodness.

By the end, Liam's P(Doom) jumps from 3% to 8%, one of the biggest belief updates I've ever witnessed on the show. We also explore his "Bayes factors" approach to forecasting, debate the reliability of superforecasters vs. AI insiders, and discuss why most AI policies should be Pareto optimal regardless of your P(Doom).
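To make the odds-ratio point concrete, here's a quick back-of-the-envelope sketch (my own arithmetic, not a quote from the episode): a P(Doom) of 3% is odds of 0.03/0.97 ≈ 1:32, while 8% is odds of 0.08/0.92 ≈ 1:11.5. Moving from 3% to 8% therefore multiplies the odds of doom by (0.08/0.92)/(0.03/0.97) ≈ 2.8, closer to a tripling than the five-percentage-point framing suggests.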

This is rationality in action: watching someone systematically examine their beliefs, engage with counterarguments, and update accordingly.

00:00 - Opening

00:42 - What’s Your P(Doom)™

01:18 - Stop 1: AGI timing (15% chance it's not coming soon)

01:29 - Stop 2: Intelligence limits (1% chance AI can't exceed humans)

01:38 - Stop 3: Physical threat assessment (1% chance AI won't be dangerous)

02:14 - Stop 4: Intelligence yields moral goodness - the big debate begins

04:42 - Moral realism vs. evolutionary explanations for morality

06:43 - The psychopath problem: smart but immoral humans exist

08:50 - Game theory and why psychopaths persist in populations

10:21 - Liam's first major update: 30% down to 15-20% on moral goodness

12:05 - Stop 5: Safe AI development process (20%)

14:28 - Stop 6: Manageable capability growth (20%)

15:38 - Stop 7: AI conquest intentions - breaking down into subcategories

17:03 - Alignment by default vs. deliberate alignment efforts

19:07 - Stop 8: Superalignment tractability (20%)

20:49 - Stop 9: Post-alignment peace (80% - surprisingly optimistic)

23:53 - Stop 10: Unaligned ASI mercy (1% - "just cope")

25:47 - Stop 11: Epistemological concerns about doom predictions

27:57 - Bayes factors analysis: Why Liam goes from 38% to 3% (arithmetic sketch after this chapter list)

30:21 - Bayes factor 1: Historical precedent of doom predictions failing

33:08 - Bayes factor 2: Superforecasters think we'll be fine

39:23 - Bayes factor 3: AI insiders and government officials seem unconcerned

45:49 - Challenging the insider knowledge argument with concrete examples

48:47 - The privileged-access epistemology debate

56:02 - Major update: Liam revises his Bayes factors, P(Doom) jumps to 8%

58:18 - Odds ratios vs. percentages: Why 3% to 8% is actually huge

59:14 - AI policy discussion: Pareto optimal solutions across all P(Doom) levels

1:01:59 - Why there's low-hanging fruit in AI policy regardless of your beliefs

1:04:06 - Liam's future career plans in AI policy

1:05:02 - Wrap-up and reflection on rationalist belief updating
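For the Bayes factors segment (27:57), here's a rough sketch of the arithmetic involved, using my own reconstruction rather than Liam's exact numbers: a prior P(Doom) of 38% is odds of 0.38/0.62 ≈ 0.61, and a posterior of 3% is odds of 0.03/0.97 ≈ 0.031, so his three Bayes factors must combine to about 0.031/0.61 ≈ 1/20. If they contributed equally, each would be around (1/20)^(1/3) ≈ 0.37, i.e. roughly 2.7:1 evidence against doom per factor.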

Show Notes

* Liam Robins on Substack -

* Liam’s Doom Train post -

* Liam’s Twitter - @liamhrobins

* Anthropic's "Alignment Faking in Large Language Models" - The paper that updated Liam's beliefs on alignment by default

---

Doom Debates’ mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates



Get full access to Doom Debates at lironshapira.substack.com/subscribe

