Produced as part of the ML Alignment Theory Scholars Program - Winter 2023-24 Cohort, under the advisement of Lee Sharkey.
Sparse autoencoders (SAEs) are a method of resolving superposition by recovering linearly encoded "features" inside activations. Unfortunately, despite the recent success of SAEs at extracting human-interpretable features, they fail to perfectly reconstruct the activations. For instance, Cunningham et al. (2023) note that replacing the residual stream of layer 2 of Pythia-70m with the reconstructed output of an SAE increased the model's perplexity on the Pile from 25 to 40. It is important for interpretability that the features we extract accurately represent what the model is doing.
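To make the setup concrete, here is a minimal sketch of the kind of SAE this post refers to: an overcomplete linear encoder/decoder trained with an MSE reconstruction term plus an L1 sparsity penalty on the feature activations. The class name, dimensions, and `l1_coeff` value below are illustrative assumptions, not the exact architecture or hyperparameters used in the post.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Minimal SAE: reconstructs model activations from a sparse, overcomplete code."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        features = torch.relu(self.encoder(x))    # sparse feature activations
        reconstruction = self.decoder(features)   # attempt to reproduce the input
        return reconstruction, features

def sae_loss(x, reconstruction, features, l1_coeff=1e-3):
    # The reconstruction term pulls the output toward the original activations;
    # the L1 penalty on feature activations encourages sparsity, which is the
    # pressure that can push feature activations below their true values.
    mse = (reconstruction - x).pow(2).mean()
    l1 = features.abs().sum(dim=-1).mean()
    return mse + l1_coeff * l1

# Example usage with hypothetical shapes: x stands in for residual-stream activations.
x = torch.randn(8, 512)
sae = SparseAutoencoder(d_model=512, d_hidden=4096)
recon, feats = sae(x)
loss = sae_loss(x, recon, feats)
```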
In this post, I show how and why SAEs have a reconstruction gap due to 'feature suppression'. Then, I look at a few ways to fix this while maintaining SAEs' interpretability. By modifying and fine-tuning a pre-trained SAE, we [...]
---
Outline:
Feature Suppression
An illustrative example of feature suppression
Feature suppression is a significant problem in current SAEs
How can we fix feature suppression in trained SAEs?
Fine-tuning Reduces Feature Suppression
Activation Strength Causes Feature Suppression
A theoretical example predicts frequency isn't a factor
Fine-tuning does not fix regression dilution
Experimental measurements agree for activation strength, disagree for frequency
Conclusion
Appendix
Related Work
Extended Data
Summary Statistics for SAEs
Low strength/high frequency features also rotate more
---