Redwood Research Blog

“There are two fundamentally different constraints on schemers” by Buck Shlegeris

Subtitle: "They need to act aligned" often isn't precise enough.

People (including me) often say that scheming models “have to act as if they were aligned”. This is an alright summary; it's accurate enough to use when talking to a lay audience. But if you want to reason precisely about threat models arising from schemers, or about countermeasures to scheming, I think it's important to make some finer-grained distinctions.

(Most of this has been said before. I’m not trying to make any points that would be surprising or novel to people who’ve thought a lot about these issues, I’m just trying to clarify something for people who haven’t thought much about them.)

In particular, there are two important and fundamentally different mechanisms that incentivize schemers to act as if they were aligned.

Training: Firstly, the AI might behave well because it's trying to avoid getting changed by training. [...]

---

Outline:

(03:52) Training might know what the model really thinks

(06:25) Behavioral evidence can be handled more sample-efficiently

(07:18) Why I care

---

First published:

July 2nd, 2025

Source:

https://redwoodresearch.substack.com/p/there-are-two-fundamentally-different

---

Narrated by TYPE III AUDIO.


By Redwood Research