Self-explaining SAE features, published by Dmitrii Kharlapenko on August 5, 2024 on The AI Alignment Forum.
TL;DR
We apply the method of SelfIE/Patchscopes to explain SAE features - we give the model a prompt like "What does X mean?", replace the residual stream on X with the decoder direction times some scale, and have it generate an explanation. We call this self-explanation.
The natural alternative is auto-interp, using a larger LLM to spot patterns in max activating examples. We show that our method is effective, and comparable with Neuronpedia's auto-interp labels (with the caveat that Neuronpedia's auto-interp used the comparatively weak GPT-3.5 so this is not a fully fair comparison).
We aren't confident you should use our method over auto-interp, but we think it has advantages in some situations: no max activating dataset examples are needed, and it's cheaper, as you only run the model being studied (e.g. Gemma 2B) rather than a larger model like GPT-4.
Further, it makes different errors from auto-interp, so finding and reading both may be valuable for researchers in practice.
We provide advice for using self-explanation in practice, in particular for the challenge of automatically choosing the right scale, which significantly affects explanation quality.
We also release a tool for you to work with self-explanation.
We hope the technique is useful to the community as is, but expect there are many optimizations and improvements to be made on top of what is in this post.
Introduction
This work was produced as part of the ML Alignment & Theory Scholars Program - Summer 24 Cohort, under mentorship from Neel Nanda and Arthur Conmy.
SAE features promise a flexible and extensive framework for interpreting LLM internals. Recent work (like Scaling Monosemanticity) has shown that they are capable of capturing even high-level abstract concepts inside the model. Compared to MLP neurons, they can capture many more interesting concepts.
Unfortunately, in order to learn things with SAE features and interpret what the SAE tells us, one first needs to interpret these features on their own. The current mainstream method for their interpretation requires storing the feature's activations on millions of tokens, filtering for the prompts that activate it most strongly, and looking for a pattern connecting them. This is typically done by a human, or sometimes partially automated with larger LLMs like ChatGPT, aka auto-interp. Auto-interp is a useful and somewhat effective method, but it requires an extensive amount of data and expensive closed-source language model API calls (for researchers outside scaling labs).
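To make this concrete, here is a rough sketch of such an auto-interp pipeline (illustrative only, not Neuronpedia's actual code): record the feature's activations over a corpus of prompts, keep the top-activating ones, and ask a larger LLM to describe the pattern. The model, sae, and explainer_llm objects, the hook name, and the prompt wording below are all assumptions.

```python
# Rough sketch of an auto-interp pipeline (illustrative, not Neuronpedia's actual code).
# Assumes a TransformerLens `model`, an `sae` object with an .encode() method, and an
# `explainer_llm` callable wrapping a larger LLM API (e.g. GPT-3.5 / GPT-4).
import heapq

def top_activating_examples(model, sae, feature_idx, prompts, hook_name, k=20):
    """Keep the k prompts on which the feature activates most strongly."""
    best = []  # min-heap of (max_activation, prompt)
    for prompt in prompts:
        _, cache = model.run_with_cache(prompt, names_filter=hook_name)
        feature_acts = sae.encode(cache[hook_name])[0, :, feature_idx]
        heapq.heappush(best, (feature_acts.max().item(), prompt))
        if len(best) > k:
            heapq.heappop(best)  # drop the weakest example
    return sorted(best, reverse=True)

def auto_interp_label(explainer_llm, examples):
    """Ask a larger LLM to describe the pattern shared by the top examples."""
    listing = "\n".join(f"{act:.2f}: {prompt}" for act, prompt in examples)
    query = (
        "The following snippets all strongly activate the same feature of a "
        f"language model (activation: snippet):\n{listing}\n"
        "Describe in one sentence what this feature represents."
    )
    return explainer_llm(query)
```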
Recent papers like SelfIE or Patchscopes have proposed a mechanistic method of directly using the model in question to explain its own internal activations in natural language. The approach replaces an activation during the forward pass (e.g. some of the token embeddings in the prompt) with a new activation and then has the model generate an explanation from this modified prompt.
It's a variant of activation patching, with the notable differences that it generates a many-token output (rather than a single token), and that the patched-in activation need not be the same type as the activation it overrides (it is just an arbitrary vector of the same dimension). We study how this approach can be applied to SAE feature interpretation, since it:
Is potentially cheaper and does not require inference on a large closed model
Can be viewed as more faithful to the source, since it uses the SAE feature vectors directly to generate explanations instead of looking at the max activating examples
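As a rough illustration of this setup (a sketch under stated assumptions, not the exact implementation used in this work): with TransformerLens, one can hook the residual stream at the position of a placeholder token and overwrite it with the SAE decoder direction times a chosen scale before generating. The model name, prompt wording, layer, and scale below are illustrative.

```python
# Minimal sketch of self-explanation with TransformerLens (illustrative, not the
# authors' exact code). The prompt wording, layer, and scale are assumptions.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gemma-2b")

def self_explain(direction: torch.Tensor, scale: float, layer: int = 2,
                 max_new_tokens: int = 50) -> str:
    """Generate an explanation with the residual stream at X replaced by scale * direction."""
    prompt = "What does X mean? It means"
    tokens = model.to_tokens(prompt)
    str_tokens = model.to_str_tokens(prompt)
    x_pos = next(i for i, t in enumerate(str_tokens) if t.strip() == "X")

    def patch_resid(resid, hook):
        # With KV caching, only the first forward pass still contains the prompt
        # positions, so this guard ensures we patch X exactly once.
        if resid.shape[1] > x_pos:
            resid[:, x_pos, :] = scale * direction.to(resid.device, resid.dtype)
        return resid

    hook_name = f"blocks.{layer}.hook_resid_post"
    with model.hooks(fwd_hooks=[(hook_name, patch_resid)]):
        out = model.generate(tokens, max_new_tokens=max_new_tokens, do_sample=False)
    return model.to_string(out[0, tokens.shape[1]:])

# Example: explain feature 0 of an SAE whose decoder matrix is W_dec (d_sae, d_model).
# explanation = self_explain(W_dec[0], scale=10.0)
```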
How to use
Basic method
We ask the model to explain the meaning of a residual stream direction as if it were literally a word or phrase:
Prompt 1 (/ replaced according to model inp...