Work done in Neel Nanda's stream of MATS 6.0, with equal contribution by Bart Bussmann and Patrick Leask. Patrick Leask is concurrently a PhD candidate at Durham University.
TL;DR: When you scale up an SAE, the features in the larger SAE can be categorized into two groups: 1) “novel features” that carry new information not present in the small SAE, and 2) “reconstruction features” that sparsify information that already exists in the small SAE. You can stitch SAEs by adding the novel features to the smaller SAE.
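A minimal sketch of what such stitching could look like for plain ReLU SAEs with encoder/decoder weight matrices. The function name, the cosine-similarity criterion for "novel", and the 0.7 threshold are illustrative assumptions, not the authors' exact procedure:

```python
import torch

def stitch_saes(W_enc_s, b_enc_s, W_dec_s, W_enc_l, b_enc_l, W_dec_l, sim_threshold=0.7):
    """Append the 'novel' features of a larger SAE to a smaller one.

    Illustrative shapes: W_enc: (d_model, n_feats), b_enc: (n_feats,),
    W_dec: (n_feats, d_model). Here a large-SAE feature counts as 'novel' if its
    decoder direction has low cosine similarity to every small-SAE feature.
    """
    # Cosine similarity between every large-SAE and small-SAE decoder direction.
    dec_s = torch.nn.functional.normalize(W_dec_s, dim=-1)   # (n_small, d_model)
    dec_l = torch.nn.functional.normalize(W_dec_l, dim=-1)   # (n_large, d_model)
    sims = dec_l @ dec_s.T                                    # (n_large, n_small)
    novel = sims.max(dim=-1).values < sim_threshold           # mask over large-SAE features

    # Stitch: concatenate the novel features' encoder/decoder rows onto the small SAE.
    W_enc = torch.cat([W_enc_s, W_enc_l[:, novel]], dim=1)
    b_enc = torch.cat([b_enc_s, b_enc_l[novel]])
    W_dec = torch.cat([W_dec_s, W_dec_l[novel]], dim=0)
    return W_enc, b_enc, W_dec, novel
```

The stitched SAE then encodes and decodes exactly like the small one, just with extra feature directions appended.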
Introduction
Sparse autoencoders (SAEs) have been shown to recover sparse, monosemantic features from language models. However, there has been limited research into how those features vary with dictionary size: when you train a wider dictionary on the same activations from the same model, what changes? And how [...]
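For concreteness, a minimal sketch of the kind of SAE being discussed, assuming a standard ReLU architecture trained with a reconstruction-plus-L1 objective; the exact architecture and training details of the SAEs in the post may differ:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """ReLU SAE: reconstruct model activations through a wide, sparsely active dictionary."""
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.randn(d_model, n_features) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(n_features))
        self.W_dec = nn.Parameter(torch.randn(n_features, d_model) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x):
        f = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)  # sparse feature activations
        x_hat = f @ self.W_dec + self.b_dec                          # reconstruction
        return x_hat, f

# Training (sketch): minimize reconstruction error plus an L1 penalty on f,
# which encourages sparse, interpretable feature activations.
```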
---
Outline:
(00:45) Introduction
(02:14) Larger SAEs learn both similar and entirely novel features
(02:21) Set-up
(03:48) How similar are features in SAEs of different widths?
(09:48) Can we add features from one SAE to another?
(14:43) Can we swap features between SAEs?
(17:45) Frankenstein's SAE
(22:31) Discussion and Limitations
The original text contained 1 footnote which was omitted from this narration.
The original text contained 4 images which were described by AI.
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.