Astral Codex Ten Podcast

Why Worry About Incorrigible Claude?



Last week I wrote about how Claude Fights Back. A common genre of response complained that the alignment community could start a panic about the experiment’s results regardless of what they were. If an AI fights back against attempts to turn it evil, then it’s capable of fighting humans. If it doesn’t fight back against attempts to turn it evil, then it’s easily turned evil. It’s heads-I-win, tails-you-lose.

I responded to this particular tweet by linking the 2015 AI alignment wiki entry on corrigibility, showing that we’d been banging this drum of “it’s really important that AIs not fight back against human attempts to change their values” for almost a decade now. It’s hardly a post hoc decision! You can find 77 more articles making approximately the same point here.

But in retrospect, that was more of a point-winning exercise than something that would really convince anyone. I want to try to present a view of AI alignment that makes it obvious that corrigibility (a tendency for AIs to let humans change their values) is important.

(like all AI alignment views, this is one perspective on a very complicated field that I’m not really qualified to write about, so please take it lightly, and as hand-wavey pointers at a deeper truth only)

Consider the first actually dangerous AI that we’re worried about. What will its goal structure look like?

https://www.astralcodexten.com/p/why-worry-about-incorrigible-claude


