


This is an informal research note. It is the result of a few-day exploration into RMU through the lens of model internals. Code to reproduce the main result is available here.
This work was produced as part of Ethan Perez's stream in the ML Alignment & Theory Scholars Program - Summer 2024 Cohort. Thanks to Nina Panickssery, Mrinank Sharma, and Fabien Roger for helpful discussion.
Summary
We investigate RMU, a recent unlearning method proposed by Li et al. (2024), through the lens of model internals. We find that RMU works mostly by flooding the residual stream with "junk" in hazardous contexts, resulting in incoherence. We then propose a simple intervention to "clear the junk" from the residual stream. This intervention mostly restores the model's coherence in hazardous contexts, and recovers a significant proportion (but not all) of its original hazardous knowledge. This suggests that the effectiveness [...]
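As a rough illustration of the intervention described above, here is a minimal PyTorch sketch of directional ablation implemented as a forward hook. The function name, the junk_dir variable, and the way the direction is estimated are illustrative assumptions, not the authors' exact implementation (their reproduction code is linked above).

import torch

def directional_ablation_hook(direction: torch.Tensor):
    # Build a forward hook that projects a single "junk" direction out of the
    # residual-stream activations at one layer. `direction` has shape (d_model,).
    direction = direction / direction.norm()

    def hook(module, inputs, output):
        # HuggingFace decoder layers typically return a tuple whose first
        # element is the hidden states.
        hidden = output[0] if isinstance(output, tuple) else output
        # h <- h - (h . d) d : remove the component along the junk direction.
        coeff = (hidden @ direction).unsqueeze(-1)
        ablated = hidden - coeff * direction
        if isinstance(output, tuple):
            return (ablated,) + output[1:]
        return ablated

    return hook

# Hypothetical usage: `model` is the RMU'd model and `junk_dir` is a unit
# vector estimated, e.g., as the mean difference between the RMU model's and
# the base model's residual-stream activations on hazardous prompts (this
# estimation choice is an assumption, not necessarily the authors' procedure).
# for layer in model.model.layers:
#     layer.register_forward_hook(directional_ablation_hook(junk_dir))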
---
Outline:
(00:33) Summary
(01:34) What is RMU?
(03:53) Examining an RMU model
(04:33) Prompting with hazardous instructions
(04:49) Looking at activations
(06:18) Trying to undo RMU via directional ablation
(07:31) Directional ablation mostly restores coherence
(08:05) Directional ablation mostly restores activations to baseline
(08:39) Does directional ablation recover unlearned knowledge?
(09:41) Evaluation on WMDP benchmark
(11:46) Author contributions statement
The original text contained 4 footnotes which were omitted from this narration.
The original text contained 1 image which was described by AI.
---
Narrated by TYPE III AUDIO.
