

If we naively apply RL to a scheming AI, the AI may be able to systematically get low reward/performance without this behavior being trained out, because it intentionally never explores into better behavior. That is, it deliberately puts very low probability on (some) actions that would perform very well, so that these actions are never sampled and therefore never reinforced. We'll refer to this as exploration hacking.
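To see why naive on-policy RL can't correct this, note that policy-gradient methods only update the probabilities of actions that actually get sampled. Here is a minimal toy sketch of that dynamic (my illustration, not from the original post): REINFORCE on a two-armed bandit, where a hypothetical "sandbagging" initialization puts near-zero probability on the high-reward arm. All the names and values (`rewards`, `logits`, the learning rate) are made up for the example.

```python
# Toy sketch (hypothetical, not from the original post): a 2-armed bandit
# trained with REINFORCE. The policy starts with ~2e-9 probability on the
# high-reward arm, so that arm is essentially never sampled, its logit
# receives essentially no gradient signal, and the low-reward behavior
# is never trained out.
import numpy as np

rng = np.random.default_rng(0)
rewards = np.array([0.1, 1.0])     # arm 1 pays 10x more than arm 0
logits = np.array([10.0, -10.0])   # "sandbagging" init: P(arm 1) ~ 2e-9
lr = 0.1

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for _ in range(10_000):
    probs = softmax(logits)
    a = rng.choice(2, p=probs)      # only sampled actions get credit
    grad_log_pi = -probs            # d/dlogits log pi(a) = one_hot(a) - probs
    grad_log_pi[a] += 1.0
    logits += lr * rewards[a] * grad_log_pi   # REINFORCE update

print(softmax(logits))  # P(arm 1) stays ~0: never sampled, never reinforced
```

Running this prints a distribution still overwhelmingly concentrated on the low-reward arm: nothing in the update rule ever pushes probability toward an action the policy refuses to sample, and each update to the sampled arm pushes the avoided arm even lower.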
In this post, I'll discuss a variety of countermeasures for exploration hacking and what I think its basic dynamics might look like.
Prior work: I discuss how exploration hacking fits into a broader picture of non-concentrated control here, Evan discusses sandbagging in the context of capability evaluations here, and Teun discusses sandbagging mitigations here.
Countermeasures
Countermeasures include:
---
Outline:
(01:01) Countermeasures
(03:42) How does it generalize when the model messes up?
(06:33) Empirical evidence
(08:59) Detecting exploration hacking (or sandbagging)
(10:38) Speculation on how neuralese and other architecture changes affect exploration hacking
(12:36) Future work
---
By LessWrong
Narrated by TYPE III AUDIO.
