

The paper proposes a method for identifying and interpreting directions in the activation space of neural networks, addressing the problem of polysemanticity. The method trains sparse autoencoders to reconstruct a model's internal activations and yields features that are more interpretable and monosemantic than those found by alternative approaches. This can enable precise model editing and improve model transparency and steerability.
https://arxiv.org/abs/2309.08600
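For listeners who want a concrete picture, here is a minimal, hypothetical sketch of the general idea (not the paper's exact training setup): train an overcomplete autoencoder with an L1 sparsity penalty to reconstruct activations collected from a model, then read candidate feature directions off the decoder weights. The shapes, hyperparameters, and names below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Overcomplete autoencoder trained to reconstruct model activations.

    The decoder weights act as a dictionary of candidate feature directions;
    the L1 penalty on the hidden code pushes each activation to be explained
    by only a few of those directions (more monosemantic features).
    """
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model, bias=False)

    def forward(self, x):
        code = torch.relu(self.encoder(x))   # sparse, non-negative feature code
        recon = self.decoder(code)           # reconstruction of the activation
        return recon, code

def sae_loss(recon, x, code, l1_coeff=1e-3):
    # reconstruction error plus sparsity penalty on the feature code
    return ((recon - x) ** 2).mean() + l1_coeff * code.abs().mean()

# Illustrative training step on a batch of cached activations
d_model, n_features = 512, 4096              # dictionary ~8x wider than the activation
sae = SparseAutoencoder(d_model, n_features)
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)

acts = torch.randn(1024, d_model)            # stand-in for activations gathered from a model
recon, code = sae(acts)
loss = sae_loss(recon, acts, code)
loss.backward()
opt.step()
```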
YouTube: https://www.youtube.com/@ArxivPapers
PODCASTS:
Apple Podcasts: https://podcasts.apple.com/us/podcast/arxiv-papers/id1692476016
Spotify: https://podcasters.spotify.com/pod/show/arxiv-papers
By Igor Melnyk
33 ratings
