
Introduction
Soon after we released Not All Language Model Features Are One-Dimensionally Linear, I started working with @Logan Riggs and @Jannik Brinkmann on a natural follow-up to the paper: could we build a variant of SAEs that finds multi-dimensional features directly, instead of needing to cluster SAE latents post-hoc as we did in the paper?
We worked on this for a few months last summer and tried a bunch of things. Unfortunately, none of our results were that compelling, and eventually our interest in the project died down and we didn't publish our (mostly negative) results. Recently, multiple people (@Noa Nabeshima, @chanind, Goncalo Paulo) said they were interested in working on SAEs that could find multi-dimensional features, so I decided to write up what we tried.
At this point the results are almost a year old, but I think the overall narrative should still [...]
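The outline below mentions "Group SAEs" but the excerpt does not define them. As a rough, hypothetical sketch of the general idea — grouping SAE latents and enforcing sparsity at the group level rather than per-latent, so that whole low-dimensional subspaces activate together — here is a minimal forward pass. This is an illustration of group sparsity in general, not the authors' actual architecture; all sizes, weight initializations, and names (`group_sae_forward`, `k`, etc.) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 groups of 2 latents each, over a 16-dim residual stream.
d_model, n_groups, group_dim = 16, 8, 2
W_enc = rng.standard_normal((d_model, n_groups * group_dim)) / np.sqrt(d_model)
W_dec = rng.standard_normal((n_groups * group_dim, d_model)) / np.sqrt(n_groups * group_dim)

def group_sae_forward(x, k=2):
    """Encode, keep only the k groups with largest L2 norm, decode.

    Sparsity is applied per *group* of latents, not per latent, so an
    active group can represent a multi-dimensional (e.g. circular) feature.
    """
    z = x @ W_enc                           # all latent pre-activations
    groups = z.reshape(n_groups, group_dim)
    norms = np.linalg.norm(groups, axis=1)  # one L2 norm per group
    keep = np.argsort(norms)[-k:]           # indices of the top-k groups
    mask = np.zeros(n_groups, dtype=bool)
    mask[keep] = True
    groups = groups * mask[:, None]         # zero out inactive groups
    return groups.reshape(-1) @ W_dec, mask

x = rng.standard_normal(d_model)
x_hat, active = group_sae_forward(x)
print(active.sum())  # prints 2: exactly k groups remain active
```

A trained version would add encoder/decoder biases and a reconstruction loss; the key difference from a standard top-k SAE is only that the top-k selection operates on group norms instead of individual latent activations.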
---
Outline:
(00:10) Introduction
(02:32) Group SAEs
(03:23) Synthetic Circles Experiments
(07:15) Training Group SAEs on GPT-2
(07:27) High level metrics
(09:28) Do the Group SAEs Capture Known Circular Subspaces
(11:46) Other Things We Tried
(12:03) Experimenting with learned groups
(12:08) Motivation and Ideas
(15:43) Learned Group Space
(18:13) Conclusion
---
---
Narrated by TYPE III AUDIO.
---
Image from the article: colored points from 1-12.
