
Sign up to save your podcasts
Or


This is a cross-post of some recent Anthropic research on building auditing agents.[1] The following is quoted from the Alignment Science blog post.
tl;dr
We're releasing Petri (Parallel Exploration Tool for Risky Interactions), an open-source framework for automated auditing that uses AI agents to test the behaviors of target models across diverse scenarios. When applied to 14 frontier models with 111 seed instructions, Petri successfully elicited a broad set of misaligned behaviors including autonomous deception, oversight subversion, whistleblowing, and cooperation with human misuse. The tool is available now at github.com/safety-research/petri.
Introduction
AI models are becoming more capable and are being deployed with wide-ranging affordances across more domains, increasing the surface area where misaligned behaviors might emerge. The sheer volume and complexity of potential behaviors far exceeds what researchers can manually test, making it increasingly difficult to properly audit each model.
Over the past year, we've [...]
---
Outline:
(00:24) tl;dr
(00:56) Introduction
The original text contained 1 footnote which was omitted from this narration.
---
First published:
Source:
Linkpost URL:
https://alignment.anthropic.com/2025/petri/
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By LessWrongThis is a cross-post of some recent Anthropic research on building auditing agents.[1] The following is quoted from the Alignment Science blog post.
tl;dr
We're releasing Petri (Parallel Exploration Tool for Risky Interactions), an open-source framework for automated auditing that uses AI agents to test the behaviors of target models across diverse scenarios. When applied to 14 frontier models with 111 seed instructions, Petri successfully elicited a broad set of misaligned behaviors including autonomous deception, oversight subversion, whistleblowing, and cooperation with human misuse. The tool is available now at github.com/safety-research/petri.
Introduction
AI models are becoming more capable and are being deployed with wide-ranging affordances across more domains, increasing the surface area where misaligned behaviors might emerge. The sheer volume and complexity of potential behaviors far exceeds what researchers can manually test, making it increasingly difficult to properly audit each model.
Over the past year, we've [...]
---
Outline:
(00:24) tl;dr
(00:56) Introduction
The original text contained 1 footnote which was omitted from this narration.
---
First published:
Source:
Linkpost URL:
https://alignment.anthropic.com/2025/petri/
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

26,376 Listeners

2,429 Listeners

8,187 Listeners

4,155 Listeners

92 Listeners

1,553 Listeners

9,799 Listeners

89 Listeners

488 Listeners

5,472 Listeners

16,144 Listeners

531 Listeners

131 Listeners

96 Listeners

510 Listeners