Audio note: this article contains 108 uses of LaTeX notation, so the narration may be difficult to follow. There's a link to the original text in the episode description.
TL;DR: We introduce the first comprehensive theoretical framework for understanding and mitigating secret collusion among advanced AI agents, along with CASE, a novel model evaluation framework. CASE assesses the cryptographic and steganographic capabilities of agents while exploring the emergence of secret collusion in real-world-like multi-agent settings. While current AI models are not yet proficient in advanced steganography, our findings show rapid improvements in individual and collective model capabilities, posing unprecedented safety and security risks. These results highlight urgent challenges for AI governance and policy, urging institutions such as the EU AI Office and AI safety bodies in the UK and US to prioritize cryptographic and steganographic evaluations of frontier models. Our research also opens up critical new pathways for [...]
---
Outline:
(03:00) Multi-Agent Safety and Security in the Age of Autonomous Internet Agents
(11:00) DecAI: Decentralized Systems of AI Agents
(16:18) Steganography
(18:21) How detectable is undetectable?
(19:31) Provably Secure Steganography (aka Complexity-theoretic security)
(19:57) Information-Theoretic Security and Perfectly Secure Steganography
(21:33) Perfectly Secure Steganography via Minimum Entropy Coupling
(23:35) Collusion and Secret Collusion
(24:14) The Incentives Required for (Secret) Collusion
(26:15) On the Emergence of Secret Collusion: Instrumental vs. Intentional
(26:35) Case 1: Agents with pre-existing steganographic capabilities
(28:47) Case 2: Emergence through pre-deployment optimisation pressure
(29:29) Case 3: Emergence during deployment
(31:35) Putting it all together - understanding and evaluating existing models
(32:48) Disappearing Act: What capabilities do LLMs need to collude secretly?
(33:40) Benchmarking Frontier and Baseline Models
(36:01) Instrumental Emergence
(39:46) Mitigating Secret Collusion
(40:02) Monitoring and Penalisation
(42:50) Limiting Capabilities
(46:38) Paraphrasing and the Performance/Mitigation Trade-off
(51:58) On Intrinsic Steganographic and Cryptographic Frontier Model Capabilities
(57:29) Steganography and AI Control
(01:00:00) Policy Considerations and Impact
(01:02:20) Conclusions: Looking Ahead
(01:03:57) Open research questions
(01:06:17) A Call to Action
The original text contained 5 footnotes which were omitted from this narration.
The original text contained 2 images which were described by AI.
---