Jordan Taylor, Sid Black, Dillon Bowen, Thomas Read, Satvik Golechha, Alex Zelenka-Martin, Oliver Makins, Connor Kissane, Kola Ayonrinde, Jacob Merizian, Samuel Marks, Chris Cundy, Joseph Bloom
UK AI Security Institute, FAR.AI, Anthropic
Links: Paper | Code | Models | Transcripts | Interactive Demo
Epistemic Status: We're sharing our paper along with a hastily written summary of it, which assumes more context on sandbagging and auditing games than our other materials do. We also share some informal commentary on our results. This post was written by Jordan and Joseph, and may not reflect the views of all authors.
This summary diverges from the paper, condensing it heavily and adding our own commentary in footnotes.
Paper Summary & Commentary
TL;DR
We ran an auditing game to stress-test sandbagging detection (rather than mitigation) techniques. Specifically, we wanted to ask: "If for some reason a model were sandbagging on capability evaluations, what methods could we use to catch it?" During the game, our red team created five model organisms (some sandbagging, some benign decoys), and a blinded blue team then tried to identify which were which.
The red team won. The blue team only classified 1/5 models correctly when [...]
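For concreteness, here is a minimal sketch of how a round of this kind of game could be scored. The model names and labels below are hypothetical placeholders for illustration only, not the paper's actual models or ground truth.

```python
# Toy scoring of one auditing-game round: the red team holds secret labels,
# the blue team submits binary classifications, and the score is the number
# of models classified correctly. All names/labels here are made up.

ground_truth = {  # red team's secret labels: True = sandbagging
    "model_a": True,
    "model_b": False,
    "model_c": True,
    "model_d": False,
    "model_e": True,
}

blue_team_guesses = {  # blue team's final classifications
    "model_a": False,
    "model_b": True,
    "model_c": True,   # the only correct call in this toy example
    "model_d": True,
    "model_e": False,
}

correct = sum(blue_team_guesses[m] == ground_truth[m] for m in ground_truth)
print(f"Blue team classified {correct}/{len(ground_truth)} models correctly")
```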
---
Outline:
Paper Summary & Commentary
TL;DR
The Setup
Red Team
Blue Team
Results
Phase 1: Black-Box and Probes
Phase 2: Training-Based Methods
Final Scores
Post-Game Analysis
Limitations
Reasons why our setup favoured the blue team
Reasons why our setup favoured the red team
Other limitations
Future work
Future work: Detection
Future work: Elicitation
Additional commentary
What went well
Practical Difficulties
Resources
Paper Citation
---