Redwood Research Blog

“Notes on countermeasures for exploration hacking (aka sandbagging)” by Ryan Greenblatt



Subtitle: How can we prevent AIs from intentionally underperforming on our metrics?

If we naively apply RL to a scheming AI, the AI may be able to systematically get low reward/performance while simultaneously not having this behavior trained out because it intentionally never explores into better behavior. As in, it intentionally puts very low probability on (some) actions which would perform very well to prevent these actions from being sampled and then reinforced. We'll refer to this as exploration hacking.
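The core dynamic can be illustrated with a toy simulation (a minimal sketch of my own, not from the post): a two-armed bandit trained with REINFORCE, where a "sandbagging" policy starts with near-zero probability on the high-reward arm. Because sampling-based RL only reinforces actions it actually samples, the underperforming behavior is never trained out.

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def train(logits, steps=2000, lr=0.1, seed=0):
    """REINFORCE on a 2-armed bandit; arm 1 pays 1.0, arm 0 pays 0.1."""
    rng = random.Random(seed)
    rewards = [0.1, 1.0]
    logits = list(logits)
    for _ in range(steps):
        probs = softmax(logits)
        a = rng.choices([0, 1], weights=probs)[0]
        r = rewards[a]
        # Policy-gradient update: d log pi(a) / d logit_i = 1[i == a] - pi(i)
        for i in range(2):
            grad = (1.0 if i == a else 0.0) - probs[i]
            logits[i] += lr * r * grad
    return softmax(logits)

# Honest policy: a near-uniform start quickly concentrates on the good arm.
honest = train([0.0, 0.0])

# Sandbagging policy: arm 1 starts at probability ~2e-9, so it is
# (almost) never sampled, never reinforced, and low reward persists.
sandbag = train([0.0, -20.0])
```

The point of the sketch is that the sandbagging policy's final probability on the good arm is essentially unchanged after training: the gradient only ever flows through sampled actions, so an action the policy refuses to sample is invisible to RL.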

In this post, I'll discuss a variety of countermeasures for exploration hacking and what I think the basic dynamics of exploration hacking might look like.

Prior work: I discuss how exploration hacking fits into a broader picture of non-concentrated control here, Evan discusses sandbagging in the context of capability evaluations here, and Teun discusses sandbagging mitigations here.

Countermeasures

Countermeasures include:

  • Training on high quality human examples: One [...]
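Continuing the toy bandit framing from above (a hypothetical illustration of my own, not the post's implementation), supervised training on demonstrations sidesteps exploration hacking because the cross-entropy gradient raises the probability of a demonstrated action regardless of whether the policy would have sampled it:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sft_step(logits, demo_action, lr=1.0):
    """One cross-entropy (behavior cloning) step on a demonstrated action."""
    probs = softmax(logits)
    return [l + lr * ((1.0 if i == demo_action else 0.0) - p)
            for i, (l, p) in enumerate(zip(logits, probs))]

# Sandbagging policy: good action (index 1) starts at probability ~2e-9.
logits = [0.0, -20.0]
# A handful of human demonstrations of the good action.
for _ in range(20):
    logits = sft_step(logits, demo_action=1)
p_good = softmax(logits)[1]
```

Unlike the RL case, no sampling from the policy is involved, so a policy cannot hide a behavior by refusing to explore it: after a few demonstration updates the good action has high enough probability that ordinary RL can sample and reinforce it.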

---

Outline:

(01:11) Countermeasures

(03:58) How does it generalize when the model messes up?

(06:56) Empirical evidence

(09:32) Detecting exploration hacking (or sandbagging)

(11:14) Speculation on how neuralese and other architecture changes affect exploration hacking

(13:18) Future work

---

First published: April 4th, 2025

Source: https://redwoodresearch.substack.com/p/notes-on-countermeasures-for-exploration

---

Narrated by TYPE III AUDIO.
