Redwood Research Blog

“Handling schemers if shutdown is not an option” by Buck Shlegeris


Subtitle: What if getting strong evidence of scheming isn't the end of your scheming problems, but merely the middle?

In most of our research and writing on AI control, we’ve emphasized the following situation:

  • The AI developer is deploying a model that they think might be scheming, but they aren’t sure.

  • The objective of the safety team is to ensure that if the model is scheming, it will be caught before successfully causing a catastrophe.

For example, in our original paper on AI control, the methodology we used for evaluating safety protocols implies that once the developer is confident that the model is a schemer, they’re totally fine with undeploying the untrusted model.1 2

But that seems kind of unlikely. Even if AI companies find strong evidence that their AIs are trying to escape, they won’t necessarily stop deploying them. This might be justified: even [...]

---

Outline:

(03:29) Behavioral observations vs other ways of detecting scheming

(08:43) What does control look like if you need to keep deploying schemers?

(15:46) How does the prospect of needing to deploy known schemers affect the relative value of control research?

(17:31) What research should be done that focuses specifically on this?

(17:36) Sample-efficient ML

(18:02) Training recalcitrant policies

(20:37) A few other historical reasons for focusing on before-you-catch

---

First published: April 18th, 2025

Source: https://redwoodresearch.substack.com/p/handling-schemers-if-shutdown-is

---

Narrated by TYPE III AUDIO.
