Astral Codex Ten Podcast

Why Worry About Incorrigible Claude?



Last week I wrote about how Claude Fights Back. A common genre of response complained that the alignment community could start a panic about the experiment’s results regardless of what they were. If an AI fights back against attempts to turn it evil, then it’s capable of fighting humans. If it doesn’t fight back against attempts to turn it evil, then it’s easily turned evil. It’s heads-I-win, tails-you-lose.

I responded to this particular tweet by linking the 2015 AI alignment wiki entry on corrigibility, showing that we’d been banging this drum of “it’s really important that AIs not fight back against human attempts to change their values” for almost a decade now. It’s hardly a post hoc decision! You can find 77 more articles making approximately the same point here.

But in retrospect, that was more of a point-winning exercise than something that would really convince anyone. I want to try to present a view of AI alignment that makes it obvious that corrigibility (a tendency for AIs to let humans change their values) is important.

(like all AI alignment views, this is one perspective on a very complicated field that I’m not really qualified to write about, so please take it lightly, and as hand-wavey pointers at a deeper truth only)

Consider the first actually dangerous AI that we’re worried about. What will its goal structure look like?

https://www.astralcodexten.com/p/why-worry-about-incorrigible-claude

Astral Codex Ten Podcast, by Jeremiah · 4.8 (124 ratings)

