
Sign up to save your podcasts
Or


This is a cross-post of some recent Anthropic research on building auditing agents.[1] The following is quoted from the Alignment Science blog post.
tl;dr
We're releasing Petri (Parallel Exploration Tool for Risky Interactions), an open-source framework for automated auditing that uses AI agents to test the behaviors of target models across diverse scenarios. When applied to 14 frontier models with 111 seed instructions, Petri successfully elicited a broad set of misaligned behaviors including autonomous deception, oversight subversion, whistleblowing, and cooperation with human misuse. The tool is available now at github.com/safety-research/petri.
Introduction
AI models are becoming more capable and are being deployed with wide-ranging affordances across more domains, increasing the surface area where misaligned behaviors might emerge. The sheer volume and complexity of potential behaviors far exceeds what researchers can manually test, making it increasingly difficult to properly audit each model.
Over the past year, we've [...]
---
Outline:
(00:24) tl;dr
(00:56) Introduction
The original text contained 1 footnote which was omitted from this narration.
---
First published:
Source:
Linkpost URL:
https://alignment.anthropic.com/2025/petri/
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By LessWrongThis is a cross-post of some recent Anthropic research on building auditing agents.[1] The following is quoted from the Alignment Science blog post.
tl;dr
We're releasing Petri (Parallel Exploration Tool for Risky Interactions), an open-source framework for automated auditing that uses AI agents to test the behaviors of target models across diverse scenarios. When applied to 14 frontier models with 111 seed instructions, Petri successfully elicited a broad set of misaligned behaviors including autonomous deception, oversight subversion, whistleblowing, and cooperation with human misuse. The tool is available now at github.com/safety-research/petri.
Introduction
AI models are becoming more capable and are being deployed with wide-ranging affordances across more domains, increasing the surface area where misaligned behaviors might emerge. The sheer volume and complexity of potential behaviors far exceeds what researchers can manually test, making it increasingly difficult to properly audit each model.
Over the past year, we've [...]
---
Outline:
(00:24) tl;dr
(00:56) Introduction
The original text contained 1 footnote which was omitted from this narration.
---
First published:
Source:
Linkpost URL:
https://alignment.anthropic.com/2025/petri/
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

26,320 Listeners

2,451 Listeners

8,549 Listeners

4,178 Listeners

93 Listeners

1,601 Listeners

9,922 Listeners

95 Listeners

512 Listeners

5,510 Listeners

15,930 Listeners

547 Listeners

130 Listeners

93 Listeners

467 Listeners