This is a follow-up to a previous post on finding interpretable and steerable features in CLIP.
Introduction
CLIP is a neural network commonly used to guide image diffusion. A sparse autoencoder was trained on the dense image embeddings CLIP produces, transforming them into sparse representations of active features. These features seem to represent individual units of meaning. They can also be manipulated in groups (combinations of multiple active features) that represent intuitive concepts. These groups can be understood entirely visually, and they often encode surprisingly rich and interesting conceptual detail.
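The encode/decode step described above can be sketched in a few lines. This is a minimal illustration, not the trained model from the post: the weights are random stand-ins, and the dimensions (a 768-d embedding expanded into an 8192-feature dictionary) are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: a 768-d CLIP image embedding expanded
# into a larger dictionary of 8192 sparse features.
d_embed, d_features = 768, 8192

# Randomly initialized weights stand in for a trained sparse autoencoder.
W_enc = rng.normal(0, 0.02, (d_embed, d_features))
b_enc = np.zeros(d_features)
W_dec = rng.normal(0, 0.02, (d_features, d_embed))
b_dec = np.zeros(d_embed)

def encode(x):
    # ReLU produces a non-negative feature vector; with a trained SAE
    # and a sparsity penalty, most entries would be exactly zero.
    return np.maximum(x @ W_enc + b_enc, 0.0)

def decode(f):
    # Reconstruct the dense embedding from the feature activations.
    return f @ W_dec + b_dec

x = rng.normal(0, 1, d_embed)  # stand-in for a CLIP image embedding
f = encode(x)                  # feature activations
x_hat = decode(f)              # reconstructed embedding
```

Training minimizes reconstruction error plus a sparsity penalty on `f`, which is what pushes each active feature toward representing a single unit of meaning.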
By directly manipulating these groups as single units, one can edit and guide image generation without prompting or language input. Concepts that were difficult to specify or edit by text prompting become easy and intuitive to [...]
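Editing a group as a single unit amounts to rescaling its member features in the sparse code before decoding. A minimal sketch, again with random stand-in weights and hypothetical feature indices:

```python
import numpy as np

rng = np.random.default_rng(1)
d_embed, d_features = 768, 8192  # assumed dimensions
W_dec = rng.normal(0, 0.02, (d_features, d_embed))  # stand-in decoder

def steer(f, group, scale):
    # Rescale the activations of one feature group together,
    # leaving every other feature untouched.
    f_new = f.copy()
    f_new[group] *= scale
    return f_new

# Pretend sparse code and a hypothetical group of features
# that jointly represent one visual concept.
f = np.maximum(rng.normal(0, 1, d_features), 0.0)
group = [12, 340, 2077]

f_edit = steer(f, group, 3.0)  # amplify the concept
x_edit = f_edit @ W_dec        # decoded embedding can then guide diffusion
```

Setting `scale` to 0 would suppress the concept instead of amplifying it; either way the edit is specified visually, with no text prompt involved.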
---
Outline:
(00:23) Introduction
(01:24) Summary of Results
(02:35) Training Sparse Autoencoders on CLIP
(04:03) Training Performance
(05:05) Weights
(05:12) Inspecting Images by Feature Activations
(06:18) Performing Iterated Grouping
(09:07) Feature Visualization
(10:56) Applications
(11:13) Limitations
(12:06) Related Work
(12:27) Conclusion
The original text contained 6 footnotes which were omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By LessWrong