
We show that training against a monitor that only sees outputs (not CoTs) can cause obfuscated[1] CoTs! The obfuscation happens in two ways:
We introduce two mitigations, and they work reasonably well: compared to regular training, our mitigations improve both monitorability and task performance in two of our three settings. Overall, our work suggests that preserving CoT monitorability is harder than previously thought.
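As a rough illustration of the training setup described above, here is a minimal Python sketch. This is not the paper's actual code: `task_score` and `monitor_flags` are hypothetical toy stand-ins for a task grader and an output-only monitor. The point is that the monitor's penalty depends only on the output channel, yet under policy-gradient RL it still shapes the CoT.

```python
# Minimal toy sketch of an output-only monitor penalty (not the paper's code).
# `task_score` and `monitor_flags` are hypothetical stand-ins.

def task_score(output: str) -> float:
    """Hypothetical task grader: full reward iff the answer is correct."""
    return 1.0 if "42" in output else 0.0

def monitor_flags(output: str) -> bool:
    """Hypothetical monitor for a penalized property. Crucially, it is
    shown only the final output, never the chain of thought."""
    return "hint" in output.lower()

def reward(cot: str, output: str) -> float:
    """Scalar reward for one sampled (CoT, output) episode.
    Note that the monitor term depends only on `output`."""
    return task_score(output) - (1.0 if monitor_flags(output) else 0.0)

# In a policy-gradient update, this single scalar weights the log-probabilities
# of every sampled token, CoT tokens included. So even an output-only penalty
# exerts optimization pressure on the CoT: reasoning that tends to produce
# flagged outputs is discouraged, which can surface as obfuscated CoTs rather
# than genuinely changed behavior.
print(reward(cot="The hint says the answer is 42.", output="The answer is 42."))
# -> 1.0 (correct, unflagged output, even though the CoT relied on the hint)
```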
Read our paper on arXiv. Produced as part of MATS 8.0 under the mentorship of Alex Turner and Alex Cloud. Luke and Jacob contributed equally.
What's new in this post
In our previous post, we demonstrated feedback spillover: penalizing some property of a reasoning model's output can suppress that property in its CoT. This post summarizes our new preprint, which extends our previous work in two ways:
---
Outline:
(01:10) What's new in this post
(01:57) Mitigations for spillover
(03:11) Results
(04:46) Multi-turn terminal interaction
(05:52) Polynomial derivative factoring
(06:55) Question answering with hints
(08:01) Concrete recommendations
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
