The AI Control Agenda, in its own words:
… we argue that AI labs should ensure that powerful AIs are controlled. That is, labs should make sure that the safety measures they apply to their powerful models prevent unacceptably bad outcomes, even if the AIs are misaligned and intentionally try to subvert those safety measures. We think no fundamental research breakthroughs are required for labs to implement safety measures that meet our standard for AI control for early transformatively useful AIs; we think that meeting our standard would substantially reduce the risks posed by intentional subversion.
There's more than one definition of "AI control research", but I'll emphasize two features, both of which match the summary above and (I think) hold for approximately 100% of control research in practice:
- Control research exclusively cares about intentional deception/scheming; it does not aim to solve any other failure mode.
- Control research exclusively cares [...]
---
Outline:

(01:34) The Model and The Problem
(03:57) The Median Doom-Path: Slop, not Scheming
(08:22) Failure To Generalize
(10:59) A Less-Simplified Model
(11:54) Recap
The original text contained 1 footnote which was omitted from this narration. The original text contained 2 images which were described by AI.

---
First published: January 21st, 2025
Source: https://www.lesswrong.com/posts/8wBN8cdNAv3c7vt6p/the-case-against-ai-control-research
---
Narrated by TYPE III AUDIO.