The Nonlinear Library

AF - [MLSN #8] Mechanistic interpretability, using law to inform AI alignment, scaling laws for proxy gaming by Dan H


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [MLSN #8] Mechanistic interpretability, using law to inform AI alignment, scaling laws for proxy gaming, published by Dan H on February 20, 2023 on The AI Alignment Forum.
As part of a larger community building effort, CAIS is writing a safety newsletter designed to cover empirical safety research and be palatable to the broader machine learning research community. You can subscribe here or follow the newsletter on Twitter here.
Welcome to the 8th issue of the ML Safety Newsletter! In this edition, we cover:
Isolating the specific mechanism that GPT-2 uses to identify the indirect object in a sentence
When maximum softmax probability is optimal
How law can inform specification for AI systems
Using language models to find a group consensus
Scaling laws for proxy gaming
An adversarial attack on adaptive models
How systems safety can be applied to ML
And much more...
Monitoring
A Circuit for Indirect Object Identification in GPT-2 small
One subset of interpretability is mechanistic interpretability: understanding how models perform functions down to the level of particular parameters. Those working on this agenda believe that by learning how small parts of a network function, they may eventually be able to rigorously understand how the network implements high-level computations.
This paper tries to identify how GPT-2 small solves indirect object identification: the task of completing a sentence with the correct indirect object. Using a number of interpretability techniques, the authors seek to isolate the particular parts of the network responsible for this behavior.
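To make the task concrete, here is a minimal Python sketch of an IOI-style prompt. The template below is illustrative of the kind of sentence used in this line of work; the paper's exact evaluation setup may differ.

```python
# Indirect object identification (IOI): given a sentence mentioning two
# names, the model should complete it with the indirect object -- the
# name that is NOT the subject of the final clause.
TEMPLATE = "When {A} and {B} went to the store, {B} gave a drink to"

def ioi_example(name_a, name_b):
    """Build an IOI prompt and its expected completion."""
    prompt = TEMPLATE.format(A=name_a, B=name_b)
    answer = name_a  # the correct completion is the *other* name
    return prompt, answer

prompt, answer = ioi_example("Mary", "John")
# prompt: "When Mary and John went to the store, John gave a drink to"
# answer: "Mary"
```

A model that has learned the task assigns higher probability to " Mary" than to " John" at the end of this prompt; the paper asks which attention heads and MLPs implement that behavior.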
[Link]
Learning to Reject Meets OOD Detection
Both learning to reject (also called error detection: deciding whether a sample is likely to be misclassified) and out-of-distribution detection share the same baseline: maximum softmax probability (MSP). MSP has been outperformed by other methods in OOD detection, but never in learning to reject, where it is provably optimal. This paper shows that MSP is not optimal for OOD detection and identifies specific circumstances in which it can be outperformed. This theoretical result is a good confirmation of the existing empirical evidence.
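For readers unfamiliar with the baseline, MSP is simply the probability the model assigns to its top class; low values flag inputs for rejection or as possibly out-of-distribution. A minimal pure-Python sketch (the threshold is an arbitrary choice for illustration):

```python
import math

def msp_score(logits):
    """Maximum softmax probability: the model's top-class confidence."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    return max(exps) / sum(exps)

# A peaked logit vector gives high confidence...
confident = msp_score([8.0, 1.0, 0.5])
# ...while a nearly flat one gives low confidence (close to 1/num_classes).
uncertain = msp_score([1.0, 1.1, 0.9])

# Reject (or flag as possibly OOD) inputs whose confidence is low.
THRESHOLD = 0.7  # illustrative; chosen per application in practice
accept_confident = confident > THRESHOLD   # True
accept_uncertain = uncertain > THRESHOLD   # False
```

The paper's point is that this single score serves double duty as a baseline, but its optimality only holds for the rejection problem, not for OOD detection.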
[Link]
Other Monitoring News
[Link] The first paper that successfully applies feature visualization techniques to Vision Transformers.
[Link] This method uses the reconstruction loss of diffusion models to create a new SOTA method for out-of-distribution detection in images.
[Link] A new Trojan attack on code generation models works by inserting poisoned code into docstrings rather than the code itself, evading some vulnerability-removal techniques.
[Link] This paper shows that fine-tuning language models for particular tasks relies on changing only a very small subset of parameters. The authors show that as few as 0.01% of parameters can be “grafted” onto the original network while nearly matching the fine-tuned model's performance.
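The grafting idea in the last item can be sketched in a few lines. This is an illustrative toy version operating on flat parameter lists; the selection criterion here (copy the parameters that changed most during fine-tuning) is an assumption for illustration and may differ from the paper's method.

```python
def graft(base_params, finetuned_params, fraction=0.0001):
    """Copy only the `fraction` of parameters that changed most during
    fine-tuning onto the base model; leave the rest untouched.

    Toy sketch: real models would do this over tensors, and the paper's
    selection criterion may differ from "largest absolute change".
    """
    n = len(base_params)
    diffs = [abs(f - b) for b, f in zip(base_params, finetuned_params)]
    k = max(1, int(n * fraction))  # e.g. 0.01% of parameters
    # indices of the k parameters that moved the most
    top = sorted(range(n), key=lambda i: diffs[i], reverse=True)[:k]
    grafted = list(base_params)
    for i in top:
        grafted[i] = finetuned_params[i]
    return grafted

# With fraction=0.25 over 4 parameters, only the single largest change
# (index 1) is grafted onto the base model.
base = [0.0, 0.0, 0.0, 0.0]
fine = [0.1, 2.0, 0.1, 0.1]
result = graft(base, fine, fraction=0.25)
```

The surprising empirical finding is that such a sparse graft retains most of the fine-tuned model's task performance.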
Alignment
Applying Law to AI Alignment
One problem in alignment is specification: though we may give AI systems instructions, we cannot possibly specify what they should do in all circumstances. Thus, we have to consider how our specifications will generalize in fuzzy, or out-of-distribution contexts.
The author of this paper argues that law has many desirable properties that may make it useful in informing specification. For example, the law often uses “standards”: relatively vague instructions (e.g. “act with reasonable caution at railroad crossings”; in contrast to rules like “do not exceed 30 miles per hour”) whose specifics have been developed through years of precedent. In the law, it is often necessary to consider the “spirit” behind these standards, which is exactly what we want AI systems to be able to do. This paper argues that AI system...