
Three out of three CEOs of top AI companies agree: "Mitigating the risk of extinction from AI should be a global priority."
How do they plan to do this?
Anthropic has a Responsible Scaling Policy, Google DeepMind has a Frontier Safety Framework, and OpenAI has a Preparedness Framework, all of which were updated in 2025.
Overview of the policies
All three policies have similar “bones”.[1]
TL;DR summary table for the rest of the article:
Anthropic | Google DeepMind | OpenAI
Safety policy document | Responsible [...]

---
Outline:
(00:44) Overview of the policies
(02:00) Anthropic
(02:18) What capabilities are they monitoring for?
(06:07) How do they monitor these capabilities?
(07:35) What will they do if an AI looks dangerous?
(09:59) Deployment Constraints
(10:32) Google DeepMind
(11:03) What capabilities are they monitoring for?
(13:08) How do they monitor these capabilities?
(14:06) What will they do if an AI looks dangerous?
(15:46) Industry Wide Recommendations
(16:44) Some details of note
(17:49) OpenAI
(18:21) What capabilities are they monitoring?
(21:04) How do they monitor these capabilities?
(22:35) What will they do if an AI looks dangerous?
(26:24) Notable differences between the companies' plans
(27:21) Commentary on the safety plans
(29:12) The current situation
The original text contained 14 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts or another podcast app.
By LessWrong
