Doom Debates

His P(Doom) Doubles At The End — AI Safety Debate with Liam Robins, GWU Sophomore


Liam Robins is a math major at George Washington University who's diving deep into AI policy and rationalist thinking.

In Part 1, we explored how AI is transforming college life. Now in Part 2, we ride the Doom Train together to see if we can reconcile our P(Doom) estimates. 🚂

Liam starts with a P(Doom) of just 3%, but as we go through the stops on the Doom Train, something interesting happens: he actually updates his beliefs in real time!

We get into heated philosophical territory around moral realism, psychopaths, and whether intelligence naturally yields moral goodness.

By the end, Liam's P(Doom) jumps from 3% to 8% - one of the biggest belief updates I've ever witnessed on the show. We also explore his "Bayes factors" approach to forecasting, debate the reliability of superforecasters vs. AI insiders, and discuss why most AI policies should be Pareto optimal regardless of your P(Doom).
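
A quick worked version of the odds-ratio point: converting the two endpoints to odds (this is just the arithmetic implied by the 3% and 8% figures, not numbers quoted on air),

$$\text{odds}(p)=\frac{p}{1-p},\qquad \text{odds}(0.03)=\frac{0.03}{0.97}\approx 0.031,\qquad \text{odds}(0.08)=\frac{0.08}{0.92}\approx 0.087,$$

so the jump multiplies the odds of doom by roughly $0.087/0.031\approx 2.8$. That is why a 5-percentage-point move at the low end of the scale counts as a large belief update.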

This is rationality in action: watching someone systematically examine their beliefs, engage with counterarguments, and update accordingly.

0:00 - Opening

0:42 - What’s Your P(Doom)™

1:18 - Stop 1: AGI timing (15% chance it's not coming soon)

1:29 - Stop 2: Intelligence limits (1% chance AI can't exceed humans)

1:38 - Stop 3: Physical threat assessment (1% chance AI won't be dangerous)

2:14 - Stop 4: Intelligence yields moral goodness - the big debate begins

4:42 - Moral realism vs. evolutionary explanations for morality

6:43 - The psychopath problem: smart but immoral humans exist

8:50 - Game theory and why psychopaths persist in populations

10:21 - Liam's first major update: 30% down to 15-20% on moral goodness

12:05 - Stop 5: Safe AI development process (20%)

14:28 - Stop 6: Manageable capability growth (20%)

15:38 - Stop 7: AI conquest intentions - breaking down into subcategories

17:03 - Alignment by default vs. deliberate alignment efforts

19:07 - Stop 8: Superalignment tractability (20%)

20:49 - Stop 9: Post-alignment peace (80% - surprisingly optimistic)

23:53 - Stop 10: Unaligned ASI mercy (1% - "just cope")

25:47 - Stop 11: Epistemological concerns about doom predictions

27:57 - Bayes factors analysis: Why Liam goes from 38% to 3%

30:21 - Bayes factor 1: Historical precedent of doom predictions failing

33:08 - Bayes factor 2: Superforecasters think we'll be fine

39:23 - Bayes factor 3: AI insiders and government officials seem unconcerned

45:49 - Challenging the insider knowledge argument with concrete examples

48:47 - The privileged-access epistemology debate

56:02 - Major update: Liam revises his Bayes factors, P(Doom) jumps to 8%

58:18 - Odds ratios vs. percentages: Why 3% to 8% is actually huge

59:14 - AI policy discussion: Pareto optimal solutions across all P(Doom) levels

1:01:59 - Why there's low-hanging fruit in AI policy regardless of your beliefs

1:04:06 - Liam's future career plans in AI policy

1:05:02 - Wrap-up and reflection on rationalist belief updating

Show Notes

* Liam Robins on Substack -

* Liam’s Doom Train post -

* Liam’s Twitter - @liamhrobins

* Anthropic's "Alignment Faking in Large Language Models" - the paper that updated Liam's beliefs on alignment by default

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com