

OpenAI's recent transparency on safety and alignment strategies has been extremely helpful and refreshing.
Their Model Spec 2.0 laid out how they want their models to behave. I offered a detailed critique of it, with my biggest criticisms focused on long-term implications. That level of detail and openness was immensely valuable.
Now we have another document, How We Think About Safety and Alignment. Again, they have laid out their thinking crisply and in excellent detail.
I have strong disagreements with several key assumptions underlying their position.
Given those assumptions, they have produced a strong document. Here I focus on my disagreements, so I want to be clear: overall, I think this document was very good.
This post examines their key implicit and explicit assumptions.
In particular, there are three core assumptions that I challenge, as laid out in the outline below:
---
Outline:
(02:45) Core Implicit Assumption: AI Can Remain a 'Mere Tool'
(05:16) Core Implicit Assumption: 'Economic Normal'
(06:20) Core Assumption: No Abrupt Phase Changes
(10:40) Implicit Assumption: Release of AI Models Only Matters Directly
(12:20) On Their Taxonomy of Potential Risks
(22:01) The Need for Coordination
(24:55) Core Principles
(25:42) Embracing Uncertainty
(28:19) Defense in Depth
(29:35) Methods That Scale
(31:08) Human Control
(31:30) Community Effort
---
First published:
Source:
Narrated by TYPE III AUDIO.
