The Nonlinear Library

AF - 'Fundamental' vs 'applied' mechanistic interpretability research by Lee Sharkey



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 'Fundamental' vs 'applied' mechanistic interpretability research, published by Lee Sharkey on May 23, 2023 on The AI Alignment Forum.
When justifying my mechanistic interpretability research interests to others, I've occasionally found it useful to borrow a distinction from physics and distinguish between 'fundamental' and 'applied' interpretability research.
Fundamental interpretability research is the kind that investigates better ways to think about the structure of the function learned by neural networks. It lets us form new categories of hypotheses about neural networks. In the ideal case, it suggests novel interpretability methods based on new insights, but it is not the methods themselves.
Examples include:
A Mathematical Framework for Transformer Circuits (Elhage et al., 2021)
Toy Models of Superposition (Elhage et al., 2022)
Polysemanticity and Capacity in Neural Networks (Scherlis et al., 2022)
Interpreting Neural Networks through the Polytope Lens (Black et al., 2022)
Causal Abstraction for Faithful Model Interpretation (Geiger et al., 2023)
Research agenda: Formalizing abstractions of computations (Jenner, 2023)
Work that looks for ways to identify modules in neural networks (see LessWrong 'Modularity' tag).
Applied interpretability research is the kind that uses existing methods to find the representations or circuits that particular neural networks have learned. It generally involves finding facts or testing hypotheses about a given network (or set of networks) based on assumptions provided by theory.
Examples include:
Steering GPT-2-XL by adding an activation vector (Turner et al., 2023)
Discovering Latent Knowledge in Language Models (Burns et al., 2022)
The Singular Value Decompositions of Transformer Weight Matrices are Highly Interpretable (Millidge et al., 2022)
In-context Learning and Induction Heads (Olsson et al., 2022)
We Found An Neuron in GPT-2 (Miller et al., 2023)
Language models can explain neurons in language models (Bills et al., 2023)
Acquisition of Chess Knowledge in AlphaZero (McGrath et al., 2021)
Although I've found the distinction between fundamental and applied interpretability useful, it's not always clear cut:
Sometimes articles are part fundamental, part applied (e.g. arguably 'A Mathematical Framework for Transformer Circuits' is mostly theoretical, but also studies particular language models using new theory).
Sometimes articles take generally accepted 'fundamental' -- but underutilized -- assumptions and develop methods based on them (e.g. Causal Scrubbing, where the key underutilized fundamental assumption was that the structure of neural networks can be well studied using causal interventions).
Other times the distinction is unclear because applied interpretability feeds back into fundamental interpretability, leading to fundamental insights about the structure of computation in networks (e.g. the Logit Lens lends weight to the theory that transformer language models do iterative inference).
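As a concrete illustration of how an applied tool like the Logit Lens can feed back into fundamental claims, here is a minimal sketch of the logit-lens idea: project the residual stream at each layer through the model's own unembedding and watch how the 'best guess' next token evolves with depth. This is not code from the post; the model, prompt, and library (Hugging Face transformers) are illustrative assumptions.

```python
# A minimal logit-lens-style readout, assuming the Hugging Face `transformers`
# and `torch` packages. GPT-2 small and the prompt are illustrative choices.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

prompt = "The Eiffel Tower is located in the city of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states holds the residual stream after the embeddings and after
# each block; the final entry already has the final layer norm applied.
ln_f = model.transformer.ln_f   # final layer norm
unembed = model.lm_head         # unembedding (projection to the vocabulary)

for layer, resid in enumerate(out.hidden_states[:-1]):
    # Decode the last token position as if the network stopped computing here.
    logits = unembed(ln_f(resid[:, -1, :]))
    print(f"after layer {layer:2d}: {tokenizer.decode(logits.argmax(dim=-1).item())!r}")

# The model's actual final prediction, for comparison.
print("final prediction:", repr(tokenizer.decode(out.logits[0, -1].argmax().item())))
```

If the decoded guesses refine gradually across layers rather than appearing only at the end, that is the kind of applied observation that lends weight to the fundamental picture of iterative inference mentioned above.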
Why I currently prioritize fundamental interpretability
Clearly both fundamental and applied interpretability research are essential. We need both in order to progress scientifically and to ensure future models are safe.
But given our current position on the tech tree, I find that I care more about fundamental interpretability.
The reason is that current interpretability methods are unsuitable for comprehensively interpreting networks on a mechanistic level. So far, our methods only seem to be able to identify particular representations that we look for or describe how particular behaviors are carried out. But they don't let us identify all representations or circuits in a network or summarize the full computational graph of a neural network (whatever that might mean). Let's call the ability to do these things 'comprehensive interpretability'.