80,000 Hours Podcast

Why Teaching AI Right from Wrong Could Get Everyone Killed | Max Harms, MIRI

Most people in AI are trying to give AIs 'good' values. Max Harms wants us to give them no values at all. According to Max, the only safe design is an AGI that defers entirely to its human operators, has no views about how the world ought to be, is willingly modifiable, and is completely indifferent to being shut down: a strategy no AI company is working on at all.

In Max's view, any grander preferences about the world, even ones we agree with, will necessarily become distorted during a recursive self-improvement loop, and will be the seeds that grow into a violent takeover attempt once that AI is powerful enough.

It’s a vision that springs from the worldview laid out in If Anyone Builds It, Everyone Dies, the recent book by Eliezer Yudkowsky and Nate Soares, two of Max’s colleagues at the Machine Intelligence Research Institute.

To Max, the book’s core thesis is common sense: if you build something vastly smarter than you, and its goals are misaligned with your own, then its actions will probably result in human extinction.

And Max thinks misalignment is the default outcome. Consider evolution: its "goal" for humans was to maximise reproduction and pass on our genes as much as possible. But as technology has advanced, we've learned to access the reward signal it set up for us (pleasure) without any reproduction at all, for instance by having sex while using birth control.

We can understand intellectually that this is inconsistent with what evolution was trying to design and motivate us to do. We just don’t care.

Max thinks current ML training has the same structural problem: our development processes are seeding AI models with a similar mismatch between goals and behaviour. Across virtually every training run, models designed to align with various human goals are also being rewarded for persisting, acquiring resources, and not being shut down.

This leads to Max’s research agenda. The idea is to train AI to be “corrigible” and defer to human control as its sole objective — no harmlessness goals, no moral values, nothing else. In practice, models would get rewarded for behaviours like being willing to shut themselves down or surrender power.

According to Max, other approaches to corrigibility have tended to treat it as a constraint on other goals like "make the world good," rather than as a primary objective in its own right. But those other goals give the AI reasons to resist shutdown and otherwise undermine corrigibility. If you strip out the competing objectives, alignment might follow naturally from an AI that is broadly obedient to humans.

Max has laid out the theoretical framework for "corrigibility as a singular target," but notes that essentially no empirical work has followed: no benchmarks, no training runs, no papers testing the idea in practice. Max wants to change this, and is calling for collaborators to get in touch at maxharms.com.


Links to learn more, video, and full transcript: https://80k.info/mh26

This episode was recorded on October 19, 2025.

Chapters:

  • Cold open (00:00:00)
  • Who's Max Harms? (00:01:22)
  • A note from Rob Wiblin (00:01:58)
  • If anyone builds it, will everyone die? The MIRI perspective on AGI risk (00:04:26)
  • Evolution failed to 'align' us, just as we'll fail to align AI (00:26:22)
  • We're training AIs to want to stay alive and value power for its own sake (00:44:31)
  • Objections: Is the 'squiggle/paperclip problem' really real? (00:53:54)
  • Can we get empirical evidence re: 'alignment by default'? (01:06:24)
  • Why do few AI researchers share Max's perspective? (01:11:37)
  • We're training AI to pursue goals relentlessly — and superintelligence will too (01:19:53)
  • The case for a radical slowdown (01:26:07)
  • Max's best hope: corrigibility as stepping stone to alignment (01:29:09)
  • Corrigibility is both uniquely valuable, and practical, to train (01:33:44)
  • What training could ever make models corrigible enough? (01:46:13)
  • Corrigibility is also terribly risky due to misuse risk (01:52:44)
  • A single researcher could make a corrigibility benchmark. Nobody has. (02:00:04)
  • Red Heart & why Max writes hard science fiction (02:13:27)
  • Should you homeschool? Depends how weird your kids are. (02:35:12)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore

By Rob, Luisa, and the 80,000 Hours team


