Epistemic status - exploratory case studies, not a systematic investigation.
In this post, we interpret a small sample of Sparse Autoencoder features that appear to reveal meaningful computational structure in the model, structure that seems largely researcher-independent and relevant to AI alignment.
Motivation
Recent excitement about Sparse Autoencoders (SAEs) has been tempered by a persistent question: do SAE features reflect properties of the model itself, or do they merely capture correlational structure in the underlying data distribution?
While a full answer to this question is important and will take deliberate investigation, we note that researchers who have spent substantial time interacting with feature dashboards find it more likely that SAE features capture highly non-trivial information about the underlying models.
As further evidence that SAE features track model-internal structure, we show how initially uninterpretable features often become quite interpretable with further investigation and tweaking of the dashboards. In each case, we [...]
---
Outline:
(00:22) Motivation
(01:38) Case Studies in SAE Features
(01:42) Scripture Feature
(02:07) Perseverance Feature
(02:41) Teamwork Feature
(02:59) Deciphering Feature Activations with Quantization can be highly informative
(03:58) Lesson - visualize activation on full prompts to better understand features!
(04:49) Predictive Feature
(05:29) Neel Nanda Feature
(06:14) Effective Altruism Features
(06:39) “Criticism of Effective Altruism” Feature
(07:03) “Criticism of Criticism of Effective Altruism” Feature
(07:26) “Criticism of Criticism of Criticism of Effective Altruism” Feature
(07:56) Conclusion
The original text contained 2 footnotes which were omitted from this narration.
---