LessWrong (Curated & Popular)

“AI companies are unlikely to make high-assurance safety cases if timelines are short” by ryan_greenblatt


One hope for keeping existential risks low is to get AI companies to (successfully) make high-assurance safety cases: structured and auditable arguments that an AI system is very unlikely to result in existential risks given how it will be deployed.[1] Concretely, once AIs are quite powerful, high-assurance safety cases would require making a thorough argument that the level of (existential) risk caused by the company is very low; perhaps they would require that the total chance of existential risk over the lifetime of the AI company[2] is less than 0.25%[3][4].

The idea of making high-assurance safety cases (once AI systems are dangerously powerful) is popular in some parts of the AI safety community and a variety of work appears to focus on this. Further, Anthropic has expressed an intention (in their RSP) to "keep risks below acceptable levels"[5] and there is a common impression that Anthropic would pause [...]

---

Outline:

(03:19) Why are companies unlikely to succeed at making high-assurance safety cases in short timelines?

(04:14) Ensuring sufficient security is very difficult

(04:55) Sufficiently mitigating scheming risk is unlikely

(09:35) Accelerating safety and security with earlier AIs seems insufficient

(11:58) Other points

(14:07) Companies likely won't unilaterally slow down if they are unable to make high-assurance safety cases

(18:26) Could coordination or government action result in high-assurance safety cases?

(19:55) What about safety cases aiming at a higher risk threshold?

(21:57) Implications and conclusions

The original text contained 20 footnotes which were omitted from this narration.

---

First published:
January 23rd, 2025

Source:
https://www.lesswrong.com/posts/neTbrpBziAsTH5Bn7/ai-companies-are-unlikely-to-make-high-assurance-safety

---

Narrated by TYPE III AUDIO.

LessWrong (Curated & Popular) by LessWrong · 4.8 (11 ratings)


More shows like LessWrong (Curated & Popular):

  • Conversations with Tyler by Mercatus Center at George Mason University (2,376 listeners)
  • Astral Codex Ten Podcast by Jeremiah (122 listeners)
  • Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas by Sean Carroll | Wondery (4,101 listeners)
  • ManifoldOne by Steve Hsu (87 listeners)
  • The Jim Rutt Show by The Jim Rutt Show (249 listeners)
  • Machine Learning Street Talk (MLST) by Machine Learning Street Talk (MLST) (90 listeners)
  • Dwarkesh Podcast by Dwarkesh Patel (325 listeners)
  • Hard Fork by The New York Times (5,358 listeners)
  • Clearer Thinking with Spencer Greenberg by Spencer Greenberg (137 listeners)
  • Razib Khan's Unsupervised Learning by Razib Khan (200 listeners)
  • No Priors: Artificial Intelligence | Technology | Startups by Conviction (103 listeners)
  • Latent Space: The AI Engineer Podcast by swyx + Alessio (64 listeners)
  • "Econ 102" with Noah Smith and Erik Torenberg by Turpentine (138 listeners)
  • Complex Systems with Patrick McKenzie (patio11) by Patrick McKenzie (113 listeners)
  • LessWrong posts by zvi by zvi (0 listeners)