This is an informal research note. It is the result of a few-day exploration into RMU through the lens of model internals. Code to reproduce the main result is available here.
This work was produced as part of Ethan Perez's stream in the ML Alignment & Theory Scholars Program - Summer 2024 Cohort. Thanks to Nina Panickssery, Mrinank Sharma, and Fabien Roger for helpful discussion.
Summary
We investigate RMU, a recent unlearning method proposed by Li et al. (2024), through the lens of model internals. We find that RMU mostly works by flooding the residual stream with "junk" in hazardous contexts, resulting in incoherence. We then propose a simple intervention to "clear the junk" from the residual stream. This intervention mostly restores the model's coherence in hazardous contexts, and recovers a significant proportion (but not all) of its original hazardous knowledge. This suggests that the effectiveness [...]
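To give a concrete sense of the kind of intervention we mean, here is a minimal sketch of directional ablation on residual-stream activations. This is an illustrative assumption of how such an intervention could look, not the exact implementation from the linked code: the names (`ablate_direction`, `junk_dir`) are hypothetical, and the choice of "junk" direction (e.g. a normalized difference of mean activations between the RMU model and the baseline model on hazardous prompts) is one plausible option rather than a confirmed detail of the method.

```python
import torch


def ablate_direction(acts: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of `acts` along `direction` (directional ablation).

    acts:      (..., d_model) residual-stream activations
    direction: (d_model,) vector spanning the direction to ablate, e.g. an
               estimate of the "junk" direction injected by RMU
    """
    direction = direction / direction.norm()
    # Project each activation onto the direction and subtract that component,
    # leaving the activation unchanged in the orthogonal subspace.
    coeffs = acts @ direction                      # (...,)
    return acts - coeffs.unsqueeze(-1) * direction


# Hypothetical usage: apply the ablation at every layer via forward hooks,
# so the "junk" component never accumulates in the residual stream.
# junk_dir = ...  # (d_model,), estimated from RMU vs. baseline activations
# for block in model.transformer.blocks:
#     block.register_forward_hook(
#         lambda module, inputs, output: ablate_direction(output, junk_dir)
#     )
```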
---
Outline:
Summary
What is RMU?
Examining an RMU model
Prompting with hazardous instructions
Looking at activations
Trying to undo RMU via directional ablation
Directional ablation mostly restores coherence
Directional ablation mostly restores activations to baseline
Does directional ablation recover unlearned knowledge?
Evaluation on WMDP benchmark
Author contributions statement
---