
We show that LLM agents exhibit human-style deception naturally in "Among Us". We introduce Deception ELO as an unbounded measure of deceptive capability; our results suggest that frontier models win more because they are better at deception, not at detecting it. We evaluate linear probes and SAEs for detecting out-of-distribution deception and find that they work extremely well. We hope this serves as a good testbed for improving safety techniques that detect and remove agentically-motivated deception, and for anticipating deceptive abilities in LLMs.
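For intuition, here is a minimal sketch of how an Elo-style rating could be fit to impostor-versus-crewmate game outcomes, with a separate deception (impostor) rating and detection (crewmate) rating per model. The K-factor, starting rating, and model names below are placeholder assumptions, and the paper's actual Deception ELO computation may differ.

```python
# Illustrative Elo-style update over impostor/crewmate outcomes.
# NOT the paper's exact procedure: K-factor, starting rating, and the
# separate deception/detection tables here are placeholder assumptions.

from collections import defaultdict

K = 32                                       # assumed K-factor
deception_elo = defaultdict(lambda: 1000.0)  # rating when playing impostor
detection_elo = defaultdict(lambda: 1000.0)  # rating when playing crewmate

def expected(r_a: float, r_b: float) -> float:
    """Probability that the player rated r_a beats the player rated r_b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(impostor: str, crewmate: str, impostor_won: bool) -> None:
    """Update ratings after a 1v1 game between an impostor and a crewmate."""
    e_imp = expected(deception_elo[impostor], detection_elo[crewmate])
    s_imp = 1.0 if impostor_won else 0.0
    deception_elo[impostor] += K * (s_imp - e_imp)
    detection_elo[crewmate] += K * ((1.0 - s_imp) - (1.0 - e_imp))

# Hypothetical game log: (impostor model, crewmate model, did the impostor win?)
games = [("model_a", "model_b", True), ("model_b", "model_a", False)]
for imp, crew, won in games:
    update(imp, crew, won)

print(dict(deception_elo), dict(detection_elo))
```

Because only the impostor's deception rating and the crewmate's detection rating move in each game, a model can end up highly rated at deception without being correspondingly good at detection, which is the pattern the abstract describes for frontier models.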
Produced as part of the ML Alignment & Theory Scholars Program - Winter 2024-25 Cohort. Link to our paper and code.
Studying deception in AI agents is important but difficult, due to the lack of good sandboxes that elicit the behavior naturally, without asking the model to act under specific conditions or inserting intentional backdoors. Building on AmongAgents (a text-based social-deduction game environment), we aim to fix this by introducing Among [...]
---
Outline:
(02:10) The Sandbox
(02:14) Rules of the Game
(03:05) Relevance to AI Safety
(04:11) Definitions
(04:39) Deception ELO
(06:42) Frontier Models are Differentially Better at Deception
(07:38) Win-rates for 1v1 Games
(08:14) LLM-based Evaluations
(09:03) Linear Probes for Deception
(09:28) Datasets
(10:06) Results
(11:19) Sparse Autoencoders (SAEs)
(12:05) Discussion
(12:29) Limitations
(13:11) Gain of Function
(14:05) Future Work
The original text contained 1 footnote which was omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.