80,000 Hours Podcast

Why 'Aligned AI' Would Still Kill Democracy | David Duvenaud, ex-Anthropic team lead

Democracy might be a brief historical blip. That’s the unsettling thesis of a recent paper, which argues that AI capable of doing all the work a human can do leads inevitably to the “gradual disempowerment” of humanity.

For most of history, ordinary people had almost no control over their governments. Liberal democracy emerged only recently, and probably not coincidentally around the Industrial Revolution.

Today's guest, David Duvenaud, formerly led the 'alignment evals' team at Anthropic, is a professor of computer science at the University of Toronto, and recently co-authored 'Gradual Disempowerment.'

Links to learn more, video, and full transcript: https://80k.info/dd

He argues democracy wasn’t the result of moral enlightenment — it was competitive pressure. Nations that educated their citizens and gave them political power built better armies and more productive economies. But what happens when AI can do all the producing — and all the fighting?

“The reason that states have been treating us so well in the West, at least for the last 200 or 300 years, is because they’ve needed us,” David explains. “Life can only get so bad when you’re needed. That’s the key thing that’s going to change.”

In David’s telling, once AI can do everything humans can do but cheaper, citizens become a national liability rather than an asset. With no way to make an economic contribution, their only lever becomes activism — demanding a larger share of redistribution from AI production. Faced with millions of unemployed citizens turned full-time activists, democratic governments trying to retain some “legacy” human rights may find they’re at a disadvantage compared to governments that strategically restrict civil liberties.

But democracy is just one front. The paper argues humans will lose control through economic obsolescence, political marginalisation, and a culture increasingly shaped by machine-to-machine communication, even if every AI does exactly what it’s told.

This episode was recorded on August 21, 2025.

Chapters:

  • Cold open (00:00:00)
  • Who’s David Duvenaud? (00:00:50)
  • Alignment isn’t enough: we still lose control (00:01:30)
  • Smart AI advice can still lead to terrible outcomes (00:14:14)
  • How gradual disempowerment would occur (00:19:02)
  • Economic disempowerment: Humans become "meddlesome parasites" (00:22:05)
  • Humans become a "criminally decadent" waste of energy (00:29:29)
  • Is humans losing control actually bad, ethically? (00:40:36)
  • Political disempowerment: Governments stop needing people (00:57:26)
  • Can human culture survive in an AI-dominated world? (01:10:23)
  • Will the future be determined by competitive forces? (01:26:51)
  • Can we find a single good post-AGI equilibrium for humans? (01:34:29)
  • Do we know anything useful to do about this? (01:44:43)
  • How important is this problem compared to other AGI issues? (01:56:03)
  • Improving global coordination may be our best bet (02:04:56)
  • The 'Gradual Disempowerment Index' (02:07:26)
  • The government will fight to write AI constitutions (02:10:33)
  • “The intelligence curse” and Workshop Labs (02:16:58)
  • Mapping out disempowerment in a world of aligned AGIs (02:22:48)
  • What do David’s CompSci colleagues think of all this? (02:29:19)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Camera operator: Jake Morris
Coordination, transcriptions, and web: Katy Moore

80,000 Hours Podcast, by Rob, Luisa, and the 80000 Hours team

4.7 (304 ratings)
