LessWrong (30+ Karma)

“On OpenAI’s Safety and Alignment Philosophy” by Zvi


OpenAI's recent transparency on safety and alignment strategies has been extremely helpful and refreshing.

Their Model Spec 2.0 laid out how they want their models to behave. I offered a detailed critique of it, with my biggest criticisms focused on long-term implications. The level of detail and openness was extremely helpful.

Now we have another document, How We Think About Safety and Alignment. Again, they have laid out their thinking crisply and in excellent detail.

I have strong disagreements with several key assumptions underlying their position.

Given those assumptions, they have produced a strong document. Because I focus here on my disagreements, I want to be clear up front: I think most of this document is very good.

This post examines their key implicit and explicit assumptions.

In particular, there are three core assumptions that I challenge:

  1. AI will remain a 'mere tool'.
  2. 'Economic normal' will continue.
  3. There will be no abrupt phase changes.

---

Outline:

(02:45) Core Implicit Assumption: AI Can Remain a 'Mere Tool'

(05:16) Core Implicit Assumption: 'Economic Normal'

(06:20) Core Assumption: No Abrupt Phase Changes

(10:40) Implicit Assumption: Release of AI Models Only Matters Directly

(12:20) On Their Taxonomy of Potential Risks

(22:01) The Need for Coordination

(24:55) Core Principles

(25:42) Embracing Uncertainty

(28:19) Defense in Depth

(29:35) Methods That Scale

(31:08) Human Control

(31:30) Community Effort

---

First published: March 5th, 2025

Source: https://www.lesswrong.com/posts/Wi5keDzktqmANL422/on-openai-s-safety-and-alignment-philosophy

---

Narrated by TYPE III AUDIO.
