LessWrong (30+ Karma)

“Sparsify: A mechanistic interpretability research agenda” by Lee Sharkey


Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

Over the last couple of years, mechanistic interpretability has seen substantial progress. Part of this progress has been enabled by the identification of superposition as a key barrier to understanding neural networks (Elhage et al., 2022) and the identification of sparse autoencoders as a solution to superposition (Sharkey et al., 2022; Cunningham et al., 2023; Bricken et al., 2023).
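To make the reference concrete, here is a minimal sketch (my own illustration, not code from the post) of the kind of sparse autoencoder (SAE) used in this line of work: an overcomplete autoencoder trained on a model's internal activations with an L1 sparsity penalty, so that each activation vector is reconstructed from a small number of learned feature directions. The dimensions and penalty coefficient below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        # d_hidden is typically several times d_model (an overcomplete dictionary of features).
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x: torch.Tensor):
        features = torch.relu(self.encoder(x))   # non-negative, hopefully sparse, feature activations
        reconstruction = self.decoder(features)
        return reconstruction, features

def sae_loss(x, reconstruction, features, l1_coeff: float = 1e-3):
    # Reconstruction error plus an L1 penalty that pushes most feature activations to zero.
    mse = torch.mean((reconstruction - x) ** 2)
    sparsity = l1_coeff * features.abs().sum(dim=-1).mean()
    return mse + sparsity

# Usage sketch: in practice the SAE is trained on activations collected from one layer of the
# network being interpreted; random tensors stand in for those activations here.
sae = SparseAutoencoder(d_model=512, d_hidden=4096)
activations = torch.randn(64, 512)
recon, feats = sae(activations)
loss = sae_loss(activations, recon, feats)
loss.backward()
```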

From our current vantage point, I think there's a relatively clear roadmap toward a world where mechanistic interpretability is useful for safety. This post outlines my views on what progress in mechanistic interpretability looks like and what I think is achievable by the field in the next 2+ years. It represents a rough outline of what I plan to work on in the near future.

My thinking and work are, of course, very heavily inspired by the [...]

---

Outline:

(01:33) Key frameworks for understanding the agenda

(01:38) Framework 1: The three steps of mechanistic interpretability

(03:57) Framework 2: The description accuracy vs. description length tradeoff

(07:54) The unreasonable effectiveness of SAEs for mechanistic interpretability

(10:38) Framework 3: Big data-driven science vs. Hypothesis-driven science

(15:14) Sparsify: The Agenda

(17:33) Objective 1: Improving SAEs

(17:57) Benchmarking SAEs

(18:19) Fixing SAE pathologies

(20:46) Applying SAEs to attention

(22:40) Better hyperparameter selection methods

(23:21) Computationally efficient sparse coding

(24:39) Objective 2: Decompiled networks

(27:28) Policy goals for network decompilation

(29:17) Objective 3: Abstraction above raw decompilations

(31:41) Objective 4: Deep Description

(35:23) A sketch of an automated process for deep description: The Iterative-Forward-Backwards procedure

(38:30) Objective 5: Mechanistic interpretability-based evals and other applications of mechanistic interpretability

The original text contained 4 footnotes which were omitted from this narration.

---

First published:

April 3rd, 2024

Source:

https://www.lesswrong.com/posts/64MizJXzyvrYpeKqm/sparsify-a-mechanistic-interpretability-research-agenda

---

Narrated by TYPE III AUDIO.
