LessWrong (30+ Karma)

“Selective modularity: a research agenda” by cloud, Jacob G-W


Overview: By training neural networks to be selectively modular, gradient routing enables new approaches to core problems in AI safety. This agenda identifies related research directions that might enable safer development of transformative AI.

Introduction

Soon, the world may see rapid increases in AI capabilities resulting from AI research automation, and no one knows how to ensure this happens safely (Soares, 2016; Aschenbrenner, 2023; Anwar et al., 2024; Greenblatt, 2025). The current ML paradigm may not be well-suited to this task, as it produces inscrutable, generalist models without guarantees on their out-of-distribution performance. These models may reflect unintentional quirks of their training objectives (Pan et al., 2022; Skalse et al., 2022; Krakovna et al., 2020).

Gradient routing (Cloud et al., 2024) is a general training method intended to meet the need for economically competitive ways of producing safe AI systems. The main idea of gradient routing is to configure which [...]
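The full post explains the mechanism in detail. As a rough illustration (not taken from the post), the core primitive can be sketched as a data-dependent mask applied to gradients on the backward pass while the forward pass is left unchanged, so that a given batch only updates designated parts of the network. The snippet below is a minimal PyTorch sketch under that assumption; the `route` helper and the toy two-region weight matrix are illustrative, not the authors' code.

```python
import torch

def route(x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Identity on the forward pass; on the backward pass, scales the
    gradient flowing through this point by `mask` (1 = pass, 0 = block)."""
    return mask * x + (1.0 - mask) * x.detach()

# Toy example: the two rows of `w` act as two parameter "regions".
# We route this batch's gradient into region 0 only.
w = torch.randn(2, 4, requires_grad=True)
x = torch.randn(4)
mask = torch.tensor([1.0, 0.0])      # hypothetical per-region routing mask

hidden = route(w @ x, mask)          # forward value is just w @ x
loss = hidden.pow(2).sum()
loss.backward()

print(w.grad[0].abs().sum())         # nonzero: region 0 learned from this batch
print(w.grad[1].abs().sum())         # zero: region 1 was shielded from it
```

In the full method, the routing masks would vary by data point and by parameter region, so that different kinds of training data shape different, designated parts of the network.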

---

Outline:

(00:28) Introduction

(04:56) Directions we think are most promising

(06:17) Recurring ideas

(09:37) Gradient routing methods and applications

(09:42) Improvements to basic gradient routing methodology

(09:53) Existing improvements

(11:26) Choosing what to route where

(12:42) Abstract and contextual localization

(15:06) Gating

(16:48) Improved regularization

(17:20) Incorporating existing ideas

(18:10) Gradient routing beyond pretraining

(20:21) Applications

(20:25) Semi-supervised reinforcement learning

(22:43) Semi-supervised robust unlearning

(24:55) Interpretability

(27:21) Conceptual work on gradient routing

(27:25) The science of absorption

(29:47) Modeling the effects of combined estimands

(30:54) Influencing generalization

(32:38) Identifying sufficient conditions for scalable oversight

(33:57) Related conceptual work

(34:02) Understanding entanglement

(36:51) Finetunability as a proxy for generalization

(39:50) Understanding when to expose limited supervision to the model via the behavioral objective

(41:57) Clarifying capabilities vs. dispositions

(43:05) Implications for AI safety

(43:10) AI governance

(45:02) Access control

(47:00) Implications of robust unlearning

(48:34) Safety cases

(49:27) Getting involved

(50:27) Acknowledgements

The original text contained 6 footnotes which were omitted from this narration.

---

First published:

March 24th, 2025

Source:

https://www.lesswrong.com/posts/tAnHM3L25LwuASdpF/selective-modularity-a-research-agenda

---

Narrated by TYPE III AUDIO.
