
The 'model organisms of misalignment' line of research creates AI models that exhibit various types of misalignment, and studies them to understand how the misalignment arises and whether it can somehow be removed. In this episode, Evan Hubinger talks about two papers he's worked on at Anthropic under this agenda: "Sleeper Agents" and "Sycophancy to Subterfuge".
Patreon: https://www.patreon.com/axrpodcast
Ko-fi: https://ko-fi.com/axrpodcast
The transcript: https://axrp.net/episode/2024/12/01/episode-39-evan-hubinger-model-organisms-misalignment.html
Topics we discuss, and timestamps:
0:00:36 - Model organisms and stress-testing
0:07:38 - Sleeper Agents
0:22:32 - Do 'sleeper agents' properly model deceptive alignment?
0:38:32 - Surprising results in "Sleeper Agents"
0:57:25 - Sycophancy to Subterfuge
1:09:21 - How models generalize from sycophancy to subterfuge
1:16:37 - Is the reward editing task valid?
1:21:46 - Training away sycophancy and subterfuge
1:29:22 - Model organisms, AI control, and evaluations
1:33:45 - Other model organisms research
1:35:27 - Alignment stress-testing at Anthropic
1:43:32 - Following Evan's work
Main papers:
Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training: https://arxiv.org/abs/2401.05566
Sycophancy to Subterfuge: Investigating Reward-Tampering in Large Language Models: https://arxiv.org/abs/2406.10162
Anthropic links:
Anthropic's newsroom: https://www.anthropic.com/news
Careers at Anthropic: https://www.anthropic.com/careers
Other links:
Model Organisms of Misalignment: The Case for a New Pillar of Alignment Research: https://www.alignmentforum.org/posts/ChDH335ckdvpxXaXX/model-organisms-of-misalignment-the-case-for-a-new-pillar-of-1
Simple probes can catch sleeper agents: https://www.anthropic.com/research/probes-catch-sleeper-agents
Studying Large Language Model Generalization with Influence Functions: https://arxiv.org/abs/2308.03296
Stress-Testing Capability Elicitation With Password-Locked Models [aka model organisms of sandbagging]: https://arxiv.org/abs/2405.19550
Episode art by Hamish Doodles: hamishdoodles.com