This post assumes basic familiarity with Sparse Autoencoders. For those unfamiliar with this technique, we highly recommend the introductory sections of these papers.
TL;DR
Neuronpedia is a platform for mechanistic interpretability research. It was previously focused on crowdsourcing explanations of neurons, but we’ve pivoted to accelerating researchers working with Sparse Autoencoders (SAEs) by hosting models, feature dashboards, data visualizations, tooling, and more.
Important Links
- Explore: The SAE-research-focused Neuronpedia (a sketch of programmatic access follows this list). Current SAEs for GPT2-Small:
- RES-JB: Residuals - Joseph Bloom (294k feats)
- ATT-KK: Attention Out - Connor Kissane + Robert Krzyzanowski (344k feats)
- Upload: Get your SAEs hosted by Neuronpedia: fill out this <10 minute application
- Participate: Join #neuronpedia on the Open Source Mech Interp Slack
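For readers who want to pull feature data into their own analysis rather than browse the site, here is a minimal sketch of fetching one feature’s dashboard data. The endpoint shape, SAE identifier, and response fields below are assumptions inferred from the public feature-page URLs (neuronpedia.org/&lt;model&gt;/&lt;sae-id&gt;/&lt;feature-index&gt;), not a documented schema — check the live site for the current API before relying on it.

```python
# Minimal sketch: fetch one SAE feature's data from Neuronpedia.
# ASSUMPTIONS: the /api/feature/... endpoint shape, the "0-res-jb" SAE ID
# (layer 0 of the RES-JB residual-stream SAEs), and the response fields
# are all illustrative guesses, not a documented contract.
import requests

MODEL = "gpt2-small"
SAE_ID = "0-res-jb"     # assumed identifier for a RES-JB layer
FEATURE_INDEX = 14057   # arbitrary feature index, for illustration

url = f"https://www.neuronpedia.org/api/feature/{MODEL}/{SAE_ID}/{FEATURE_INDEX}"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
feature = resp.json()

# Illustrative field names -- inspect the actual payload to see what exists.
print(feature.get("description"))
print(feature.get("activations", [])[:3])  # top activating examples, if present
```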
Neuronpedia has received 1 year of funding. Johnny Lin is full-time on engineering, design, and product, while Joseph Bloom is supporting with high-level direction and product management. We’d love to talk to you [...]
---
Outline:
- TL;DR
- Important Links
- Why Neuronpedia?
- What It's Already Useful For
- Making It More Useful
- Strategy: Iterate Very Quickly
- Current Neuronpedia Functionality
  - Hosting SAE Feature Dashboards
  - Feature Testing
  - Automatic Explanations and UMAP for Exploration
  - Live Feature Inference
  - Enabling Collaboration
- Future Work
  - Circuit Analysis
  - Understanding + Red-Teaming Features
  - Quality Control and Benchmarking
- FAQ
  - Who's involved with the project?
  - I’d like to upload my SAE weights to Neuronpedia. How do I do this?
  - This sounds cool, how can I help?
  - You seem to be super into SAEs, what if SAEs suck?
  - Is Neuronpedia an AI safety org?
  - Are there dual-use risks associated with SAEs?
---