Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
Work done in Neel Nanda's stream of MATS 6.0, with equal contribution by Bart Bussmann and Patrick Leask. Patrick Leask is concurrently a PhD candidate at Durham University.
TL;DR: When you scale up an SAE, the features in the larger SAE can be categorized into two groups: 1) “novel features” that carry information not present in the small SAE, and 2) “reconstruction features” that sparsify information that already exists in the small SAE. You can stitch SAEs together by adding the novel features to the smaller SAE.
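To make the idea concrete, here is a minimal PyTorch sketch of one way such stitching could be implemented. The `SAE` class, the `stitch` function, and all dimensions below are illustrative assumptions rather than the post's actual code, and how to decide which features count as "novel" is left open here (the rest of the post discusses this).

```python
import torch
import torch.nn as nn


class SAE(nn.Module):
    """A standard ReLU sparse autoencoder (illustrative, not the post's exact setup)."""

    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, d_sae) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.randn(d_sae, d_model) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        # Feature activations: subtract decoder bias, project, apply ReLU.
        return torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)

    def decode(self, f: torch.Tensor) -> torch.Tensor:
        return f @ self.W_dec + self.b_dec

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decode(self.encode(x))


def stitch(small: SAE, large: SAE, novel_idx: torch.Tensor) -> SAE:
    """Append the large SAE's 'novel' features (indices `novel_idx`) to the small SAE
    by concatenating along the feature dimension. Keeping the small SAE's decoder bias
    is a simplifying assumption."""
    d_model, d_small = small.W_enc.shape
    stitched = SAE(d_model, d_small + len(novel_idx))
    with torch.no_grad():
        stitched.W_enc.copy_(torch.cat([small.W_enc, large.W_enc[:, novel_idx]], dim=1))
        stitched.b_enc.copy_(torch.cat([small.b_enc, large.b_enc[novel_idx]]))
        stitched.W_dec.copy_(torch.cat([small.W_dec, large.W_dec[novel_idx]], dim=0))
        stitched.b_dec.copy_(small.b_dec)
    return stitched


# Hypothetical usage: add 128 "novel" features from a 4096-wide SAE to a 1024-wide SAE
# trained on the same d_model=768 activations.
small, large = SAE(768, 1024), SAE(768, 4096)
novel_idx = torch.arange(128)  # placeholder: indices of features judged novel
stitched = stitch(small, large, novel_idx)
```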
Introduction
Sparse autoencoders (SAEs) have been shown to recover sparse, monosemantic features from language models. However, there has been limited research into how those features vary with dictionary size: that is, when you take the same activations from the same model and train a wider dictionary on them, what changes? And how [...]
---
Outline:
(00:45) Introduction
(02:14) Larger SAEs learn both similar and entirely novel features
(02:21) Set-up
(03:48) How similar are features in SAEs of different widths?
(09:48) Can we add features from one SAE to another?
(14:43) Can we swap features between SAEs?
(17:45) Frankenstein's SAE
(22:31) Discussion and Limitations
The original text contained 1 footnote which was omitted from this narration.
The original text contained 4 images which were described by AI.
---