LessWrong (30+ Karma)

“Circuits in Superposition 2: Now with Less Wrong Math” by Linda Linsefors, Lucius Bushnaq



Audio note: this article contains 323 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.

Summary & Motivation

This post is a continuation and clarification of Circuits in Superposition: Compressing many small neural networks into one. That post presented a sketch of a general mathematical framework for compressing different circuits into a network in superposition. On closer inspection, however, some of it turned out to be wrong: the error propagation calculations for networks with multiple layers were incorrect, and with the framework used in that post, the errors blow up too much over multiple layers.

This post presents a slightly changed construction that fixes those problems, and improves on the original construction in some other ways as well.[1]

By computation in superposition we mean that a network represents features in superposition and [...]
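To make "features in superposition" concrete, here is a minimal illustrative sketch (this is generic, not the post's construction, and all names in it are hypothetical): T feature directions are embedded into d < T dimensions using random, nearly orthogonal vectors, and a sparse feature vector can be read back out with only small interference noise.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 1000, 100  # 1000 features compressed into 100 dimensions

# Random embedding directions, each with norm roughly 1.
# Random high-dimensional vectors are nearly orthogonal.
E = rng.standard_normal((T, d)) / np.sqrt(d)

# Sparse feature vector: only a handful of features active at once.
f = np.zeros(T)
active = rng.choice(T, size=5, replace=False)
f[active] = 1.0

x = f @ E        # superposed d-dimensional representation
readout = E @ x  # noisy recovery of all T feature values

# Active features read back near 1; inactive ones pick up small
# interference from overlaps between the random directions.
print(readout[active])
print(np.abs(np.delete(readout, active)).mean())
```

The interference on each inactive feature is a sum of small random overlaps, so it shrinks as d grows relative to the number of simultaneously active features; this is the basic reason sparsity is what makes superposition workable.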

---

Outline:

(00:25) Summary & Motivation

(01:43) Takeaways

(02:32) The number of circuits we can fit in scales linearly with the number of network parameters

(04:02) Each circuit will only use a small subset of neurons in the larger network

(04:37) Implications for experiments on computation in superposition

(05:15) Reality really does have a surprising amount of detail

(06:25) Construction

(07:25) Assumptions

(08:44) Embedding the circuits into the network

(10:40) Layer 0

(11:49) Constructing the Embedding and Unembedding matrices

(12:38) Requirements

(14:30) Step 1

(15:08) Step 2

(17:02) Step 3

(17:23) Step 4

(17:50) Step 5

(18:01) Real python code

(18:14) Properties of E and U

(18:53) Error calculation

(19:23) Defining the error terms

(22:08) \mathring{\epsilon}_t^l - The embedding overlap error

(23:36) \tilde{\epsilon}_t^l - The propagation error

(24:38) Calculation:

(27:29) \ddot{\epsilon}_t^l - The ReLU activation error

(27:45) Calculations:

(29:34) \epsilon_t^l - Adding up all the errors

(29:43) Layer 0

(29:55) Layer 1

(30:10) Layer 2

(30:45) Layer 3

(31:03) Worst-case errors vs mean square errors

(32:24) Summary:

(33:12) Discussion

(33:15) Noise correction/suppression is necessary

(34:30) However, we do not in general predict sparse ReLU activations for networks implementing computation in superposition

(36:03) But we do tentatively predict that circuits only use small subsets of network neurons

(37:11) Acknowledgements

The original text contained 24 footnotes which were omitted from this narration.

---

First published:

June 30th, 2025

Source:

https://www.lesswrong.com/posts/FWkZYQceEzL84tNej/circuits-in-superposition-2-now-with-less-wrong-math

---

Narrated by TYPE III AUDIO.
