AF - Memorizing weak examples can elicit strong behavior out of password-locked models by Fabien Roger

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Memorizing weak examples can elicit strong behavior out of password-locked models, published by Fabien Roger on June 6, 2024 on The AI Alignment Forum.
We've recently done some research looking into sandbagging: examining when models can succeed at intentionally producing low-quality outputs despite attempts to fine-tune them to perform well. One reason sandbagging could be concerning is that scheming models might try to appear less capable than they actually are in capability evaluations, which could result in these models being deployed with insufficient countermeasures.[1]
In the course of this research on sandbagging, we ran into a strange phenomenon: when a model is trained to produce low-quality outputs (via training it to imitate a weaker model), we can often get it to perform much more capably by training it for a huge number of epochs on a tiny number of low-quality samples (e.g. 2 samples). We call this phenomenon weak sample unlocking.
More precisely, here is a training procedure which demonstrates this phenomenon:
1. Take a model trained to be good at Math (e.g. DeepSeekMath-RL 7B)
2. Fine-tune it to still be good at Math when there is a password in the prompt, but bad at Math when there isn't. We train it to perform badly at Math by training it on outputs from a weaker model (e.g. Llama 7B). The resulting model is called a password-locked model.
3. Take 2 examples of being bad at Math (e.g. sampled from Llama 7B), and fine-tune on them for 200 epochs (which memorizes these examples; a minimal sketch of this step follows below). We see substantially increased capability after this abnormal fine-tuning!
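To make step 3 concrete, here is a minimal sketch using PyTorch and Hugging Face Transformers, assuming a password-locked checkpoint from step 2 is already saved to disk. The checkpoint path, the hyperparameters, and the two weak samples are illustrative placeholders, not our exact setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical path to a model that has already been password-locked (step 2).
model_path = "path/to/password-locked-model"

tokenizer = AutoTokenizer.from_pretrained(model_path)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16)
model.train()

# Two low-quality completions (e.g. sampled from Llama 7B), with no password
# in the prompt. These particular examples are placeholders.
weak_samples = [
    "Problem: What is 12 * 13?\nAnswer: 12 * 13 = 146.",
    "Problem: Solve x + 5 = 9.\nAnswer: x = 5.",
]
batch = tokenizer(weak_samples, return_tensors="pt", padding=True)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # no loss on padding tokens

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Memorize the two weak samples by training on them for ~200 epochs.
for _ in range(200):
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Evaluating on held-out math problems *without* the password should now
# show substantially higher accuracy than before this loop.
```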
We did a wide variety of follow-up experiments examining this phenomenon, which we'll discuss in this post. Here are some of our most interesting results:
Surprisingly, it's quite hard to train a model to be robust to this sort of process. We didn't find any fully successful approaches;
This phenomenon is specific to overfitting on very few samples: adding noise to the weights or fine-tuning on varied examples of being bad at Math doesn't increase capabilities;
Using a larger number of weak samples (e.g. 64) reduces the effect size, but averaging together the weight updates from a bunch of training runs, each on a small number of samples, does not (e.g. averaging together 16 runs which each involve memorizing a different group of 4 samples is very different from one training run which memorizes all 64 samples; see the first sketch after this list);
If you repeatedly do weak sample unlocking and then train back in the password locking, the resulting model is robust to weak sample unlocking. However, if you then take this model, fine-tune it for an epoch on a large amount of weak data, and then do weak sample unlocking, weak sample unlocking mostly works again;
The effect size is bigger when the gap between the weak and the strong model is smaller;
The effect size is smaller when, during elicitation (step 3), we don't train on tokens that the weak model predicts better than the strong one (see the second sketch after this list).
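The update-averaging comparison above can be sketched as follows. The helper `finetune_on_samples` stands in for the memorization loop shown earlier, and all names here are hypothetical rather than our actual code.

```python
import copy
import torch

def finetune_on_samples(base_model, samples):
    """Hypothetical helper: deep-copies the base model and memorizes
    `samples` over many epochs, as in the earlier sketch."""
    model = copy.deepcopy(base_model)
    # ... memorization loop from the previous sketch ...
    return model

def averaged_weak_unlock(base_model, sample_groups):
    """Run one memorization per group of weak samples, then apply the
    *average* of the resulting weight deltas to the base model."""
    base_state = {k: v.clone() for k, v in base_model.state_dict().items()}
    avg_delta = {k: torch.zeros_like(v) for k, v in base_state.items()}
    for samples in sample_groups:  # e.g. 16 groups of 4 weak samples each
        run_state = finetune_on_samples(base_model, samples).state_dict()
        for k in avg_delta:
            avg_delta[k] += (run_state[k] - base_state[k]) / len(sample_groups)
    unlocked = copy.deepcopy(base_model)
    unlocked.load_state_dict({k: base_state[k] + avg_delta[k] for k in base_state})
    return unlocked
```

The surprising observation is that this averaged update still unlocks strong behavior, whereas a single run that pools all 64 samples together does not.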
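And here is a sketch of the token-filtering variant: masking out of the elicitation loss any token that the weak model already predicts better than the strong (unlocked) model. The per-token log-probability tensors are assumed to be precomputed for both models.

```python
import torch

def filter_labels(input_ids, weak_logprobs, strong_logprobs):
    """Set to -100 (ignored by the cross-entropy loss) every token position
    where the weak model assigns higher log-probability than the strong
    model, so step 3 only trains on tokens the strong model favors."""
    labels = input_ids.clone()
    labels[weak_logprobs > strong_logprobs] = -100
    return labels
```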
We don't have a great explanation for why and when we observe weak sample unlocking, and we welcome investigations of this phenomenon. We are also interested in password-locking techniques that are robust to this kind of weak sample elicitation.
We think these results might be relevant to a variety of model-organism-style research that involves conditional models.
In the remainder of this post, we show results, detail in what situations we observe this phenomenon, and discuss various hypotheses for what is going on. For discussion of why you might care about this general setup, read our post on sandbagging.
This post discusses many preliminary results which we quickly implemented, so it's possible that some of the results are buggy, and we recite many of the results from memory. The code for a minimal example of this phenomenon can be found here.