LessWrong (30+ Karma)

“On OpenAI’s Safety and Alignment Philosophy” by Zvi



OpenAI's recent transparency on safety and alignment strategies has been extremely helpful and refreshing.

Their Model Spec 2.0 laid out how they want their models to behave. I offered a detailed critique of it, with my biggest criticisms focused on long-term implications. The level of detail and openness here was extremely helpful.

Now we have another document, How We Think About Safety and Alignment. Again, they have laid out their thinking crisply and in excellent detail.

I have strong disagreements with several key assumptions underlying their position.

Given those assumptions, they have produced a strong document. Since I focus here on my disagreements, I want to be clear that I mostly think this document is very good.

This post examines their key implicit and explicit assumptions.

In particular, there are three core assumptions that I challenge:

  1. AI Will Remain a 'Mere Tool'
  2. 'Economic Normal' Will Continue
  3. There Will Be No Abrupt Phase Changes

---

Outline:

(02:45) Core Implicit Assumption: AI Can Remain a 'Mere Tool'

(05:16) Core Implicit Assumption: 'Economic Normal'

(06:20) Core Assumption: No Abrupt Phase Changes

(10:40) Implicit Assumption: Release of AI Models Only Matters Directly

(12:20) On Their Taxonomy of Potential Risks

(22:01) The Need for Coordination

(24:55) Core Principles

(25:42) Embracing Uncertainty

(28:19) Defense in Depth

(29:35) Methods That Scale

(31:08) Human Control

(31:30) Community Effort

---

First published: March 5th, 2025

Source: https://www.lesswrong.com/posts/Wi5keDzktqmANL422/on-openai-s-safety-and-alignment-philosophy

---

Narrated by TYPE III AUDIO.


