
Claude Sonnet 4.5 was released yesterday. Anthropic credibly describes it as the best coding, agentic and computer use model in the world. At least while I learn more, I am defaulting to it as my new primary model for queries short of GPT-5-Pro level.
I’ll cover the system card and alignment concerns first, then cover capabilities and reactions tomorrow once everyone has had another day to play with the new model.
It was great to see the recent collaboration between OpenAI and Anthropic where they evaluated each other's models. I would love to see this incorporated into system cards going forward, with GPT-5 included in Anthropic's system card as a comparison point and Claude included in OpenAI's.
Basic Alignment Facts About Sonnet 4.5
Anthropic: Overall, we find that Claude Sonnet 4.5 has a substantially improved safety profile compared to previous Claude models.
[...]
---
Outline:
(01:36) Basic Alignment Facts About Sonnet 4.5
(03:54) 2.1: Single Turn Tests and 2.2: Ambiguous Context Evaluations
(05:01) 2.3. and 2.4: Multi-Turn Testing
(07:00) 2.5: Bias
(08:56) 3: Honesty
(10:26) 4: Agentic Safety
(10:41) 4.1: Malicious Agentic Coding
(13:01) 4.2: Prompt Injections Within Agentic Systems
(15:05) 5: Cyber Capabilities
(17:35) 5.3: Responsible Scaling Policy (RSP) Cyber Tests
(22:15) 6: Reward Hacking
(26:47) 7: Alignment
(28:11) Situational Awareness
(33:38) Test Design
(36:32) Evaluation Awareness
(42:57) 7.4: Evidence From Training And Early Use
(43:55) 7.5: Risk Area Discussions
(45:26) It's Sabotage
(50:48) Interpretability Investigations
(58:35) 8: Model Welfare Assessment
(58:54) 9: RSP (Responsible Scaling Policy) Evaluations
(59:51) Keep Sonnet Safe
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
