

Coverage of Claude Opus 4.6 started yesterday with the mundane alignment and model welfare sections of the model card.
Today covers the kinds of safety I think matter most: sabotage, deception, situational awareness, outside red teaming, and, most importantly, the frontier, catastrophic and existential risks. I think it was correct to release Opus 4.6 as an ASL-3 model, but the process Anthropic uses is breaking down, and it is not on track to reliably get the right answer on Opus 5.
Tomorrow I’ll cover benchmarks, reactions, and the holistic takeaways and practical implications. I’m still taking it all in, but it seems clear to me that Claude Opus 4.6 is the best model out there and should be your daily driver, with or without Claude Code, on most non-coding tasks. It is not without its weaknesses, though, particularly in writing, where it falls into ‘AI slop’ style prose more often than Claude Opus 4.5 did.
For coding tasks, I presume that Opus 4.6 with Claude Code is the play, especially with Agent Teams and fast mode available, and I’m using it myself, but Codex with GPT-5.3-Codex-Max is also a strong model and a viable alternative, and a fully [...]
---
Outline:
(01:32) Sabotage, Deception and Evaluation Integrity
(03:42) Sandbagging On Dangerous Capability Evaluations
(06:01) Situational Awareness
(07:33) Inhibiting Evaluation Awareness (6.5)
(09:06) Self-Preference
(10:24) UK AISI Testing
(11:40) Apollo Research Testing
(14:24) Responsible Scaling Policy Evaluations
(15:45) CBRN (mostly Biology)
(18:43) Autonomy
(26:40) Autonomy Benchmarks
(29:53) Cyber
(31:27) Ship It Anyway
(33:40) You Are Not Ready
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By LessWrong
