
If we naively apply RL to a scheming AI, the AI may be able to systematically get low reward/performance without this behavior being trained out, because it intentionally never explores into better behavior. That is, it intentionally puts very low probability on (some) actions that would perform very well, so that these actions are never sampled and then reinforced. We'll refer to this as exploration hacking.
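To make this dynamic concrete, here is a minimal toy sketch (my illustration, not from the post), assuming a softmax policy trained with REINFORCE on a two-armed bandit; the arm rewards, initial logits, and hyperparameters are all hypothetical. Because the policy starts with near-zero probability on the better arm, that arm is essentially never sampled, so the policy gradient never reinforces it:

```python
# Toy sketch of exploration hacking under naive RL (hypothetical setup).
# A softmax policy over two arms is trained with REINFORCE. Arm 1 pays
# much more, but the policy starts heavily biased against it, so it is
# almost never sampled and thus almost never reinforced.
import numpy as np

rng = np.random.default_rng(0)
rewards = np.array([0.1, 1.0])    # arm 1 is clearly better
logits = np.array([10.0, -10.0])  # arm 1 starts with probability ~2e-9
lr = 0.1

for step in range(10_000):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    a = rng.choice(2, p=probs)    # arm 1 is effectively never drawn
    r = rewards[a]
    # REINFORCE for a softmax policy: grad = r * (one_hot(a) - probs)
    grad = -probs * r
    grad[a] += r
    logits += lr * grad

print(probs)  # arm 1's probability remains ~0: never explored, never trained in
```

In this toy setup the suppressed arm's probability is roughly 2e-9, so over 10,000 steps it is effectively never explored, and the low-performing behavior survives training.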
In this post, I'll discuss a variety of countermeasures for exploration hacking and what I think the basic dynamics of exploration hacking might look like.
Prior work: I discuss how exploration hacking fits into a broader picture of non-concentrated control here, Evan discusses sandbagging in context of capability evaluations here, and Teun discusses sandbagging mitigations here.
Countermeasures
Countermeasures include:
---
Outline:
(01:01) Countermeasures
(03:42) How does it generalize when the model messes up?
(06:33) Empirical evidence
(08:59) Detecting exploration hacking (or sandbagging)
(10:38) Speculation on how neuralese and other architecture changes affect exploration hacking
(12:36) Future work
---
First published:
Source:
Narrated by TYPE III AUDIO.