Sparse Autoencoder (SAE) errors are empirically pathological: when a reconstructed activation vector lies at distance _epsilon_ from the original activation vector, substituting a randomly chosen point at the same distance changes the next-token prediction probabilities significantly less than substituting the SAE reconstruction[1] (measured by both KL divergence and loss). This holds across all layers of the model (~2x to ~4.5x increase in KL divergence and loss over baseline) and is not caused by feature suppression/shrinkage. Assuming these results replicate, they suggest the proxy reconstruction objective is behaving pathologically. I am not sure why these errors occur, but I expect that understanding this gap will give us deeper insight into SAEs while also providing an additional metric to guide methodological progress.
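To make the epsilon-ball comparison concrete, here is a minimal sketch in PyTorch. The names are hypothetical: `run_with_patch` stands in for whatever activation-patching hook your framework provides, and the KL divergence is taken between the model's original next-token distribution and its distribution after substituting the patched activation.

```python
# Minimal sketch of the intervention comparison, assuming a model object
# with a hypothetical `run_with_patch(tokens, layer, new_act)` method that
# returns logits with the given layer's activations replaced.
import torch
import torch.nn.functional as F

def random_point_at_distance(act: torch.Tensor, eps: torch.Tensor) -> torch.Tensor:
    """Sample a point uniformly on the sphere of radius eps around act."""
    direction = torch.randn_like(act)
    direction = direction / direction.norm(dim=-1, keepdim=True)
    return act + eps * direction

def kl_from_substitution(model, tokens, layer, new_act) -> torch.Tensor:
    """KL(original next-token dist || dist after patching the layer activation)."""
    base_logp = F.log_softmax(model(tokens), dim=-1)
    patched_logp = F.log_softmax(model.run_with_patch(tokens, layer, new_act), dim=-1)
    # log_target=True: both arguments are log-probabilities.
    return F.kl_div(patched_logp, base_logp, log_target=True, reduction="batchmean")

# Given an activation `act` at some layer and its SAE reconstruction `recon`:
#   eps       = (act - recon).norm(dim=-1, keepdim=True)
#   kl_sae    = kl_from_substitution(model, tokens, layer, recon)
#   kl_random = kl_from_substitution(model, tokens, layer,
#                                    random_point_at_distance(act, eps))
# The pathology claimed above is kl_sae being ~2x to ~4.5x larger than kl_random.
```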
Introduction. As the interpretability community allocates more resources to and increases its reliance on SAEs, it is important to understand the limitations and potential flaws of this method.
SAEs are designed [...]
---
Outline:
(03:59) Intuition: how big a deal is this (KL) difference?
(04:30) Experiments and Results
(04:56) Intervention Types
(07:35) Layerwise Intervention Results in More Detail
(08:56) Single Token Intervention Results
(10:10) How pathological are the errors?
(11:09) When do these errors happen?
(12:26) Replication with Attention SAEs
(13:18) Concluding Thoughts
(13:22) Why is this happening?
(16:21) Takeaways
(17:15) Future work
(17:43) Acknowledgements
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.