LessWrong (30+ Karma)

“Mechanistically Eliciting Latent Behaviors in Language Models” by Andrew Mack



Produced as part of the MATS Winter 2024 program, under the mentorship of Alex Turner (TurnTrout).

TL;DR: I introduce a method for eliciting latent behaviors in language models by learning unsupervised perturbations of an early layer of an LLM. These perturbations are trained to maximize changes in downstream activations. The method discovers diverse and meaningful behaviors with just one prompt, including perturbations that override safety training, elicit backdoored behaviors, and uncover latent capabilities.

Summary

In the simplest case, the unsupervised perturbations I learn are given by unsupervised steering vectors: vectors added to the residual stream as a bias term in the MLP outputs of a given layer. I also report preliminary results on unsupervised steering adapters: LoRA adapters of the MLP output weights of a given layer, trained with the same unsupervised objective.

I apply the method to several alignment-relevant toy examples, and find that the [...]
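The core idea described above — learn a fixed-norm vector, add it as a bias to an early layer's output, and train it to maximize the change in downstream activations — can be illustrated with a toy sketch. The sketch below is not the post's implementation: a tiny random tanh network stands in for the LLM, finite differences stand in for backpropagation, and the radius and step size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy residual-stream width (assumption; real models are much larger)
W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
x = rng.normal(size=d)  # stands in for a single prompt's activations

def downstream(theta):
    # "Early layer": add the steering vector theta as a bias to its output.
    h = np.tanh(W1 @ x) + theta
    # "Downstream" activations at a later layer.
    return np.tanh(W2 @ h)

base = downstream(np.zeros(d))  # unperturbed downstream activations

def objective(theta):
    # Unsupervised objective: magnitude of the change in downstream activations.
    return np.linalg.norm(downstream(theta) - base)

def project(theta, radius=1.0):
    # Constrain theta to a sphere of fixed radius so the perturbation
    # stays comparable in scale across training.
    return radius * theta / np.linalg.norm(theta)

theta = project(rng.normal(size=d))
start = objective(theta)
best_theta, best = theta, start
lr, eps = 0.1, 1e-5
for _ in range(200):
    # Central finite-difference gradient ascent (a stand-in for autograd).
    grad = np.array([
        (objective(theta + eps * e) - objective(theta - eps * e)) / (2 * eps)
        for e in np.eye(d)
    ])
    theta = project(theta + lr * grad)
    val = objective(theta)
    if val > best:
        best_theta, best = theta, val

print(f"objective: {start:.3f} -> {best:.3f}")
```

Different random initializations of theta climb toward different local maxima of this objective, which is what lets the method surface a diversity of behaviors from a single prompt.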

---

Outline:

(09:37) Related Work

(14:15) The Method: Unsupervised Steering of Language Models

(15:52) Unsupervised Steering Vectors

(18:30) Unsupervised Steering Adapters

(19:24) Why does it work?

(22:44) Red-Teaming

(24:17) Setup

(24:36) Results

(24:55) Fantasy bomb-making instructions

(27:32) Real-life instructions

(30:24) Conversations with Qwen-14B-Chat steered by real-world vectors

(46:01) Vector arithmetic: subtracting vectors 9 and 22 leads to refusal on innocuous requests

(50:16) Generalization outside the context of refusal

(53:28) Detecting Backdoors

(55:29) Backdoor details

(57:46) Results

(58:25) Other Vectors - Hybrid-Reasoning Vectors

(01:03:07) Capability Discovery

(01:03:11) Chain-of-Thought Vector

(01:07:08) Portuguese Math Reasoning Adapter

(01:13:29) Negative Results

(01:14:40) Future Work

(01:15:12) Improving generalization of unsupervised steering vectors/adapters

(01:16:56) Feedback cycles with circuits-level mechanistic interpretability

(01:18:30) Conclusion

The original text contained 15 footnotes which were omitted from this narration.

---

First published:

April 30th, 2024

Source:

https://www.lesswrong.com/posts/ioPnHKFyy4Cw2Gr2x/mechanistically-eliciting-latent-behaviors-in-language-1

---

Narrated by TYPE III AUDIO.
