TLDR: I think that AI developers will and should attempt to reduce scheming by explicitly training the model to scheme less. However, I think “training against scheming” is a harder problem than other alignment training (e.g. harmlessness training). I list four concrete failure modes of “training against scheming”: a) the training against scheming distribution is too narrow, b) overwhelming pressure from other goals, c) the AI exploits an imperfect reward model, and d) deceptive alignment.
I’d like to thank Alex Meinke for detailed feedback and discussion.
Motivation
There have been multiple recent results showing early evidence of scheming-related behavior (as of June 2025). For example, Greenblatt et al., 2024 demonstrates alignment faking, Meinke et al., 2024 shows in-context scheming, and Baker et al., 2025 investigates situationally aware reward hacking. I’m using a broad definition of scheming, i.e. “covertly pursue misaligned goals, hiding their true capabilities and objectives”, as defined [...]
---
Outline:
(00:47) Motivation
(02:03) Scheming is a result of goal conflicts and smart AIs
(04:56) Stock trader analogy
(05:52) Concrete failures for training against scheming
(06:22) The training against scheming distribution is too narrow
(07:08) Stock trader analogy
(07:24) Recommendations
(08:16) Overwhelming pressure from other goals
(10:17) Stock trader analogy
(10:51) Recommendations
(11:47) The AI exploits an imperfect reward model
(15:30) Stock trader analogy
(16:14) Recommendations
(17:12) Deceptive Alignment
(20:02) Stock trader analogy
(21:11) Recommendations
(22:52) Conclusion
---
First published:
Source:
---
Narrated by TYPE III AUDIO.