
View trees here
Search through latents with a token-regex language
View individual latents here
See code here (github.com/noanabeshima/matryoshka-saes)
Continually updated version of this document
Abstract
Sparse autoencoders (SAEs)[1][2] break down neural network internals into components called latents. Latents of smaller SAEs seem to correspond to more abstract concepts, while latents of larger SAEs seem to represent finer, more specific concepts.
While increasing SAE size allows for finer-grained representations, it also introduces two key problems. The first is feature absorption, introduced in Chanin et al.[3], where latents develop unintuitive "holes" as other latents in the SAE take over specific cases. The second is what I term fragmentation, where meaningful abstract concepts in the small SAE (e.g. 'female names' or 'words in quotes') shatter, via feature splitting[1:1], into many specific latents, hiding real structure in the model.
This paper introduces Matryoshka SAEs, a training approach that addresses these challenges. Inspired by prior work[4][5], Matryoshka SAEs are trained [...]
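The narration cuts the abstract off before the training details, but the name and the cited Matryoshka representation learning work[4][5] suggest the core idea: a single SAE whose nested prefixes of latents are each trained to reconstruct the input on their own, so a small abstract dictionary coexists with larger fine-grained ones. The sketch below is a hypothetical minimal implementation of that nested-prefix loss, not the post's actual code (see the linked repository for that); the prefix sizes, ReLU encoder, and L1 sparsity penalty are illustrative assumptions.

```python
# Hypothetical sketch of a Matryoshka SAE loss (illustrative only; the
# post's real implementation is at github.com/noanabeshima/matryoshka-saes).
import torch
import torch.nn.functional as F


class MatryoshkaSAE(torch.nn.Module):
    def __init__(self, d_model: int, n_latents: int, prefix_sizes: list[int]):
        super().__init__()
        self.enc = torch.nn.Linear(d_model, n_latents)
        self.dec = torch.nn.Linear(n_latents, d_model)
        # Nested dictionary sizes, smallest first; values are assumptions.
        self.prefix_sizes = prefix_sizes

    def loss(self, x: torch.Tensor, l1_coeff: float = 1e-3) -> torch.Tensor:
        acts = F.relu(self.enc(x))  # sparse latent activations
        total = l1_coeff * acts.abs().sum(-1).mean()  # sparsity penalty
        # Each prefix of latents must reconstruct x by itself, so the first
        # k latents behave like a standalone small SAE nested in the big one.
        for k in self.prefix_sizes:
            masked = torch.zeros_like(acts)
            masked[..., :k] = acts[..., :k]
            total = total + F.mse_loss(self.dec(masked), x)
        return total


sae = MatryoshkaSAE(d_model=512, n_latents=4096,
                    prefix_sizes=[64, 256, 1024, 4096])
x = torch.randn(32, 512)  # stand-in batch of model activations
sae.loss(x).backward()
```

Because the smaller prefixes are penalized for reconstruction error on their own, abstract concepts are pushed into the early latents rather than being absorbed into, or fragmented across, the fine-grained tail.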
---
Outline:
(00:18) Abstract
(01:40) Introduction
(04:08) Problem
(04:11) Terminology
(04:34) Reference SAEs
(05:27) Feature Absorption Example
(08:26) Method
(11:07) Results
(11:10) Toy Model
(15:58) Reconstruction Quality
(17:14) Limitations and Future Work
(20:30) Acknowledgements
The original text contained 30 footnotes which were omitted from this narration.
The original text contained 8 images which were described by AI.
---
Narrated by TYPE III AUDIO.