Epistemic status - self-evident.
In this post, we interpret a small sample of Sparse Autoencoder features which reveal meaningful computational structure in the model that is clearly highly researcher-independent and of significant relevance to AI alignment.
Motivation
Recent excitement about Sparse Autoencoders (SAEs) has been mired in the following question: Do SAE features reflect properties of the model, or do they merely capture correlational structure in the underlying data distribution?
While a full answer to this question is important and will take deliberate investigation, we note that researchers who have spent large amounts of time interacting with feature dashboards think it more likely that SAE features capture highly non-trivial information about the underlying models.
Evidently, SAEs are the one true answer to ontology identification and as evidence of this, we show how initially uninterpretable features are often quite interpretable with further investigation / tweaking of dashboards. In each case, we [...]
---
Outline:
(00:22) Motivation
(01:38) Case Studies in SAE Features
(01:42) Scripture Feature
(02:07) Perseverance Feature
(02:41) Teamwork Feature
(02:59) Deciphering Feature Activations with Quantization can be highly informative
(03:58) Lesson - visualize activation on full prompts to better understand features!
(04:49) Predictive Feature
(05:29) Neel Nanda Feature
(06:14) Effective Altruism Features
(06:39) “Criticism of Effective Altruism” Feature
(07:03) “Criticism of Criticism of Effective Altruism” Feature
(07:26) “Criticism of Criticism of Criticism of Effective Altruism” Feature
(07:56) Conclusion
The original text contained 2 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.