LessWrong (30+ Karma)

“Decomposing the QK circuit with Bilinear Sparse Dictionary Learning” by keith_wynroe, Lee Sharkey



This work was produced as part of Lee Sharkey's stream in the ML Alignment & Theory Scholars Program - Winter 2023-24 Cohort

Intro and Motivation

Sparse dictionary learning (SDL) has attracted a lot of attention recently as a method for interpreting transformer activations. SDL methods demonstrate that model activations can often be explained using a sparsely activating, overcomplete set of human-interpretable directions.
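
To make this concrete, here is a minimal sketch of the general SDL idea on toy data: activations are approximated as sparse combinations drawn from an overcomplete dictionary of directions. All dimensions and the crude top-k encoder below are illustrative assumptions, not the authors' setup (real SDL methods, such as sparse autoencoders, learn the dictionary and codes jointly with a sparsity penalty).

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_feats, n_samples = 16, 64, 256   # overcomplete: n_feats > d_model
X = rng.normal(size=(n_samples, d_model))   # stand-in for model activations

# Dictionary D: each row is a candidate feature direction in activation space.
D = rng.normal(size=(n_feats, d_model))
D /= np.linalg.norm(D, axis=1, keepdims=True)

def sparse_codes(X, D, k=4):
    """Crude sparse encoding: keep only the top-k dictionary matches per sample.
    This is purely illustrative of the sparse, overcomplete idea."""
    scores = X @ D.T                               # (n_samples, n_feats)
    thresh = np.sort(np.abs(scores), axis=1)[:, [-k]]  # k-th largest per row
    return np.where(np.abs(scores) >= thresh, scores, 0.0)

A = sparse_codes(X, D)
X_hat = A @ D                                  # reconstruct from sparse codes
print("active features per sample:", (A != 0).sum(axis=1).mean())
print("relative reconstruction error:",
      np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```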

However, despite its success at explaining many components, SDL is a relatively nascent interpretability method and has yet to be applied to some model activations. In particular, the intermediate activations of attention blocks have yet to be studied and pose challenges for standard SDL methods.

The first challenge is bilinearity: SDL is usually applied to individual vector spaces at individual layers, so we can simply identify features as directions in activation space. But the QK circuits of transformer attention layers are different: they involve a bilinear [...]
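
The bilinearity point can be made concrete: a head's attention scores are a bilinear form in the query-side and key-side residual-stream activations, so a QK-circuit "feature" naturally involves a pair of directions rather than a single one. A minimal numpy illustration with toy dimensions (an assumption-laden sketch, not the post's code):

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, d_head, seq = 16, 4, 5

W_Q = rng.normal(size=(d_model, d_head))
W_K = rng.normal(size=(d_model, d_head))
X = rng.normal(size=(seq, d_model))      # toy residual-stream activations

# Standard view: project to queries/keys, then take dot products.
scores = (X @ W_Q) @ (X @ W_K).T / np.sqrt(d_head)

# Equivalent bilinear view: one low-rank form x_i^T (W_Q W_K^T) x_j,
# so the score depends jointly on a query-side and a key-side direction.
W_QK = W_Q @ W_K.T                       # (d_model, d_model), rank <= d_head
scores_bilinear = X @ W_QK @ X.T / np.sqrt(d_head)

assert np.allclose(scores, scores_bilinear)
```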

---

Outline:

(00:16) Intro and Motivation

(02:09) Training Setup

(02:27) Step 1: Reconstructing the attention pattern with key- and query-transcoders

(02:36) Architecture

(03:25) Loss functions

(05:10) Step 2: Reducing to Sparse Feature-Pairs with Masking

(09:31) Results

(09:34) Both features and feature pairs are highly sparse

(10:17) Reconstructed attention patterns are highly accurate

(13:35) Feature Analysis

(13:55) Our unsupervised method identifies Name-Attention features in Name-Mover and Negative Name-Mover Heads

(17:18) Discovering Novel Feature-Pairs

(17:51) Example 1: Pushy Social Media (Layer 10)

(19:06) Example 2: Date Completion (Layer 10) - Attending from months to numbers which may be the day

(20:08) Feature Sparsity

(21:35) Key- and query-features activate densely

(22:45) A dense ‘Attend to BOS’ feature

(24:41) Discussion

(27:25) Future Work

The original text contained 5 footnotes which were omitted from this narration.

The original text contained 19 images which were described by AI.

---

First published:

July 2nd, 2024

Source:

https://www.lesswrong.com/posts/2ep6FGjTQoGDRnhrq/decomposing-the-qk-circuit-with-bilinear-sparse-dictionary

---

Narrated by TYPE III AUDIO.
