LessWrong (30+ Karma)

“Secret Collusion: Will We Know When to Unplug AI?” by schroederdewitt, srm, MikhailB, Lewis Hammond, chansmi, sofmonk


Audio note: this article contains 108 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.

TL;DR: We introduce the first comprehensive theoretical framework for understanding and mitigating secret collusion among advanced AI agents, along with CASE, a novel model evaluation framework. CASE assesses the cryptographic and steganographic capabilities of agents, while exploring the emergence of secret collusion in real-world-like multi-agent settings. While current AI models aren't yet proficient in advanced steganography, our findings show rapid improvements in individual and collective model capabilities, posing unprecedented safety and security risks. These results highlight urgent challenges for AI governance and policy, urging institutions such as the EU AI Office and AI safety bodies in the UK and US to prioritize cryptographic and steganographic evaluations of frontier models. Our research also opens up critical new pathways for [...]

---

Outline:

(03:00) Multi-Agent Safety and Security in the Age of Autonomous Internet Agents

(11:00) DecAI: Decentralized Systems of AI Agents

(16:18) Steganography

(18:21) How detectable is undetectable?

(19:31) Provably Secure Steganography (aka Complexity-theoretic security)

(19:57) Information-Theoretic Security and Perfectly Secure Steganography

(21:33) Perfectly Secure Steganography via Minimum Entropy Coupling

(23:35) Collusion and Secret Collusion

(24:14) The Incentives Required for (Secret) Collusion

(26:15) On the Emergence of Secret Collusion: Instrumental vs. Intentional

(26:35) Case 1: Agents with pre-existing steganographic capabilities

(28:47) Case 2: Emergence through pre-deployment optimisation pressure

(29:29) Case 3: Emergence during deployment

(31:35) Putting it all together - understanding and evaluating existing models

(32:48) Disappearing Act: What capabilities do LLMs need to collude secretly?

(33:40) Benchmarking Frontier and Baseline Models

(36:01) Instrumental Emergence

(39:46) Mitigating Secret Collusion

(40:02) Monitoring and Penalisation

(42:50) Limiting Capabilities

(46:38) Paraphrasing and the Performance/Mitigation Trade-off

(51:58) On Intrinsic Steganographic and Cryptographic Frontier Model Capabilities

(57:29) Steganography and AI Control

(01:00:00) Policy Considerations and Impact

(01:02:20) Conclusions: Looking Ahead

(01:03:57) Open research questions

(01:06:17) A Call to Action

The original text contained 5 footnotes which were omitted from this narration.

The original text contained 2 images which were described by AI.

---

First published: September 16th, 2024

Source: https://www.lesswrong.com/posts/smMdYezaC8vuiLjCf/secret-collusion-will-we-know-when-to-unplug-ai

---

Narrated by TYPE III AUDIO.
