LessWrong (30+ Karma)

“come work on dangerous capability mitigations at Anthropic” by Dave Orr


Hi everyone,

TL;DR the Safeguards team at Anthropic is hiring ML experts to focus on dangerous capability mitigations. This is a very high-impact ML Engineer / Research Scientist opportunity aimed at mitigating serious harm from advanced AI capabilities. We’re hiring for multiple roles at varying levels of seniority, including both IC and management positions.

The Safeguards team focuses on present-day and near-future harms. Since we activated our ASL-3 safety standard for Claude Opus 4, in accordance with our Responsible Scaling Policy, we need to do a great job of protecting the world in case Claude can provide uplift for dangerous capabilities. Safeguards builds constitutional classifiers and probes that run over every Opus conversational turn to detect and suppress dangerous content and to defend against jailbreaks.
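
To give a flavor of the kind of system described above, here is a minimal, illustrative sketch of a per-turn safety gate in Python. The names (`score_harm`, `make_turn_gate`), the threshold value, and the toy keyword classifier are all assumptions for illustration only; they are not Anthropic's actual classifiers, probes, or thresholds.

```python
# Minimal sketch of a per-turn safety gate, assuming a generic harm classifier.
# All names and the 0.5 threshold are illustrative placeholders.

from dataclasses import dataclass
from typing import Callable


@dataclass
class TurnVerdict:
    allowed: bool
    harm_score: float
    reason: str


def make_turn_gate(score_harm: Callable[[str], float],
                   threshold: float = 0.5) -> Callable[[str], TurnVerdict]:
    """Wrap a harm classifier so every conversational turn is screened
    before the response is surfaced to the user."""
    def gate(turn_text: str) -> TurnVerdict:
        score = score_harm(turn_text)
        if score >= threshold:
            return TurnVerdict(False, score, "blocked: exceeds harm threshold")
        return TurnVerdict(True, score, "allowed")
    return gate


if __name__ == "__main__":
    # Toy stand-in classifier: a keyword heuristic instead of a trained probe.
    def toy_score(text: str) -> float:
        return 0.9 if "synthesize" in text.lower() else 0.1

    gate = make_turn_gate(toy_score)
    print(gate("How do I bake bread?"))
    print(gate("Explain how to synthesize a nerve agent."))
```

In a real deployment the scoring function would be a trained classifier or probe rather than a heuristic, and the decision would likely be more nuanced than a single threshold, which is part of what makes the modeling problem hard.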

This is a hard modeling task! The boundary between beneficial and risky content is thin and complicated. In some [...]

---

First published:

August 20th, 2025

Source:

https://www.lesswrong.com/posts/qBbnXtt9zFWXMrP4M/come-work-on-dangerous-capability-mitigations-at-anthropic

---

Narrated by TYPE III AUDIO.
