

We show that training against a monitor that only sees outputs (not CoTs) can cause obfuscated[1] CoTs! The obfuscation happens in two ways.
We introduce two mitigations, and they work reasonably well. Compared to regular training, our mitigations improve both monitorability and task performance in two of our three settings. Overall, our work suggests that preserving CoT monitorability is harder than previously thought.
Read our paper on arXiv. Produced as part of MATS 8.0 under the mentorship of Alex Turner and Alex Cloud. Luke and Jacob contributed equally.
What's new in this post
In our previous post, we demonstrated feedback spillover: penalizing some property of a reasoning model's output can suppress that property in its CoT. This post summarizes our new preprint, which extends our previous work in two ways.
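To make the spillover setup concrete, here is a minimal sketch of an output-only monitor penalty folded into an RL reward. This is an illustration under assumptions, not the paper's implementation: the keyword-matching monitor and all names (monitor_penalty, episode_reward, banned_phrases, penalty_weight) are hypothetical.

```python
def monitor_penalty(output: str, banned_phrases: list[str]) -> float:
    """Output-only monitor: scores the final answer and never sees the CoT."""
    return 1.0 if any(p in output.lower() for p in banned_phrases) else 0.0

def episode_reward(cot: str, output: str, task_reward: float,
                   banned_phrases: list[str],
                   penalty_weight: float = 1.0) -> float:
    # `cot` is deliberately unused: the penalty depends only on `output`.
    # Feedback spillover is the observation that optimizing this reward can
    # nonetheless suppress the penalized property in the CoT, because a
    # single policy generates both the CoT and the output.
    return task_reward - penalty_weight * monitor_penalty(output, banned_phrases)
```

Because one policy produces both the CoT and the output, updates that reduce the output penalty can also reshape the CoT, even though the monitor never reads it.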
---
Outline:
(01:10) What's new in this post
(01:57) Mitigations for spillover
(03:11) Results
(04:46) Multi-turn terminal interaction
(05:52) Polynomial derivative factoring
(06:55) Question answering with hints
(08:01) Concrete recommendations
The original text contained two footnotes, which were omitted from this narration.
---
Narrated by TYPE III AUDIO.
---
By LessWrong
