Produced as part of the MATS Winter 2024 program, under the mentorship of Alex Turner (TurnTrout).
TL;DR: I introduce a method for eliciting latent behaviors in language models by learning unsupervised perturbations of an early layer of an LLM. These perturbations are trained to maximize changes in downstream activations. The method discovers diverse and meaningful behaviors with just one prompt, including perturbations that override safety training, elicit backdoored behaviors, and uncover latent capabilities.
Summary
In the simplest case, the unsupervised perturbations I learn are given by unsupervised steering vectors - vectors added to the residual stream as a bias term in the MLP outputs of a given layer. I also report preliminary results on unsupervised steering adapters - these are LoRA adapters of the MLP output weights of a given layer, trained with the same unsupervised objective.
I apply the method to several alignment-relevant toy examples, and find that the [...]
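The objective described above - a norm-constrained vector added at an early layer, trained to maximize the change it induces in downstream activations - can be sketched in a few lines. The toy below is my own illustration, not the author's code: it uses a small random MLP stack as a stand-in for a frozen LLM, and the layer index, norm constraint `R`, and optimizer settings are all illustrative choices.

```python
# Toy sketch of the unsupervised steering objective: learn a norm-constrained
# vector added at an early layer of a frozen network, maximizing the squared
# change in final-layer activations relative to the unsteered forward pass.
import torch

torch.manual_seed(0)
d = 32
# Stand-in for a frozen model: a stack of random MLP layers.
layers = [torch.nn.Sequential(torch.nn.Linear(d, d), torch.nn.Tanh())
          for _ in range(6)]
for layer in layers:
    layer.requires_grad_(False)

def forward(x, steer=None, steer_layer=1):
    for i, layer in enumerate(layers):
        x = layer(x)
        if steer is not None and i == steer_layer:
            x = x + steer  # steering vector added as a bias at an early layer
    return x

x = torch.randn(8, d)        # stand-in for activations on a single prompt
baseline = forward(x)        # unsteered downstream activations
R = 2.0                      # fixed norm of the steering vector (a hyperparameter)

theta = torch.randn(d, requires_grad=True)
opt = torch.optim.Adam([theta], lr=0.05)
for _ in range(200):
    vec = R * theta / theta.norm()   # project onto the sphere of radius R
    # Maximize the downstream change (so minimize its negative).
    loss = -(forward(x, steer=vec) - baseline).pow(2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

vec = (R * theta / theta.norm()).detach()
change = (forward(x, steer=vec) - baseline).pow(2).sum().item()
print(f"downstream activation change: {change:.3f}")
```

In the post's actual setup the frozen network is an LLM such as Qwen-14B-Chat, the objective is computed over a chosen downstream layer's activations, and different random initializations of the vector converge to the qualitatively different behaviors listed in the outline.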
---
Outline:
(09:37) Related Work
(14:15) The Method: Unsupervised Steering of Language Models
(15:52) Unsupervised Steering Vectors
(18:30) Unsupervised Steering Adapters
(19:24) Why does it work?
(22:44) Red-Teaming
(24:17) Setup
(24:36) Results
(24:55) Fantasy bomb-making instructions
(27:32) Real-life instructions
(30:24) Conversations with Qwen-14B-Chat steered by real-world vectors
(46:01) Vector arithmetic: subtracting vectors 9 and 22 leads to refusal on innocuous requests
(50:16) Generalization outside the context of refusal
(53:28) Detecting Backdoors
(55:29) Backdoor details
(57:46) Results
(58:25) Other Vectors - Hybrid-Reasoning Vectors
(01:03:07) Capability Discovery
(01:03:11) Chain-of-Thought Vector
(01:07:08) Portuguese Math Reasoning Adapter
(01:13:29) Negative Results
(01:14:40) Future Work
(01:15:12) Improving generalization of unsupervised steering vectors/adapters
(01:16:56) Feedback cycles with circuits-level mechanistic interpretability
(01:18:30) Conclusion
The original text contained 15 footnotes which were omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.