LessWrong (30+ Karma)

“Thoughts on the conservative assumptions in AI control” by Buck



Work that I’ve done on techniques for mitigating risk from misaligned AI often makes a number of conservative assumptions about the capabilities of the AIs we’re trying to control (e.g. the original AI control paper, Adaptive Deployment of Untrusted LLMs Reduces Distributed Threats, and How to prevent collusion when using untrusted models to monitor each other). For example:

  • The AIs are consistently trying to subvert safety measures.
  • They’re very good at strategizing about how to subvert them, and can do this strategizing totally illegibly (e.g. in a single forward pass).
  • They can follow a strategy that's coordinated between all their instances.
  • They know exactly what safety measures are in use.
  • They know a lot about the distribution of inputs.
  • They’re able to pick Schelling points perfectly.
  • They’re able to follow stochastic policies with excellent calibration (e.g. they’re able to attack with probability 0.01%).
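To give a sense of what that last assumption means operationally, here is a minimal sketch (mine, not from the post; the task count and attack rate are hypothetical) of a perfectly calibrated stochastic attack policy: an independent low-probability decision on every task, with the realized attack count tracking the target rate over a long deployment.

```python
import random

# Toy sketch (not from the post; every number here is hypothetical).
# "Following a stochastic policy with excellent calibration" cashes out to
# something like this: on each task, independently decide to attack with a
# very small target probability, and actually hit that rate overall.

def simulate_attack_policy(n_tasks=1_000_000, p_attack=0.0001, seed=0):
    """Count attacks when attacking each task independently with p_attack."""
    rng = random.Random(seed)
    return sum(rng.random() < p_attack for _ in range(n_tasks))

if __name__ == "__main__":
    n_tasks, p_attack = 1_000_000, 0.0001  # 0.01% per task
    attacks = simulate_attack_policy(n_tasks, p_attack)
    print(f"expected attacks: {n_tasks * p_attack:.0f}, realized: {attacks}")
    # The conservative assumption is that a real model could implement this
    # sampling internally (with no external source of randomness) and stay
    # this well calibrated.
```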

But I don’t think situations where [...]

---

Outline:

(02:40) The conservative assumption I don't make

(05:36) Hard worlds are plausible

(11:39) Those assumptions are hard to evaluate

(14:57) Why it's nice to rely on these assumptions

(15:08) Making those assumptions makes it methodologically clean to evaluate safety

(16:06) Relying only on easy-to-evaluate assumptions would allow more clearly-specified and hard-to-game regulatory standards

(17:22) Kerckhoffs's principle in computer security as an analogy

(18:41) The main downsides of these assumptions

(18:46) You can't completely escape from methodological messiness

(19:50) Bad technique prioritization

(21:02) Insane game theory requirements to evaluate techniques

(21:55) Conclusion

---

First published: January 17th, 2025

Source: https://www.lesswrong.com/posts/rHyPtvfnvWeMv7Lkb/thoughts-on-the-conservative-assumptions-in-ai-control

---

Narrated by TYPE III AUDIO.
