
Claude Sonnet 4.5 was released yesterday. Anthropic credibly describes it as the best coding, agentic, and computer-use model in the world. At least while I learn more, I am defaulting to it as my new primary model for queries short of GPT-5-Pro level.
I’ll cover the system card and alignment concerns first, then cover capabilities and reactions tomorrow once everyone has had another day to play with the new model.
It was great to recently see the collaboration between OpenAI and Anthropic in which they evaluated each other's models. I would love to see this incorporated into model cards going forward, with GPT-5 included in Anthropic's system cards as a comparison point and Claude included in OpenAI's.
Basic Alignment Facts About Sonnet 4.5
Anthropic: Overall, we find that Claude Sonnet 4.5 has a substantially improved safety profile compared to previous Claude models.
[...]
---
Outline:
(01:36) Basic Alignment Facts About Sonnet 4.5
(03:54) 2.1: Single Turn Tests and 2.2: Ambiguous Context Evaluations
(05:01) 2.3 and 2.4: Multi-Turn Testing
(07:00) 2.5: Bias
(08:56) 3: Honesty
(10:26) 4: Agentic Safety
(10:41) 4.1: Malicious Agentic Coding
(13:01) 4.2: Prompt Injections Within Agentic Systems
(15:05) 5: Cyber Capabilities
(17:35) 5.3: Responsible Scaling Policy (RSP) Cyber Tests
(22:15) 6: Reward Hacking
(26:47) 7: Alignment
(28:11) Situational Awareness
(33:38) Test Design
(36:32) Evaluation Awareness
(42:57) 7.4: Evidence From Training And Early Use
(43:55) 7.5: Risk Area Discussions
(45:26) It's Sabotage
(50:48) Interpretability Investigations
(58:35) 8: Model Welfare Assessment
(58:54) 9: RSP (Responsible Scaling Policy) Evaluations
(59:51) Keep Sonnet Safe
---
Narrated by TYPE III AUDIO.
