LessWrong (30+ Karma)

“Preparing for a Warning Shot” by Noah Birnbaum



Crossposted to the EA Forum and my Substack.

Confidence level: moderate uncertainty and not that concrete (yet). Exploratory, but I think this is plausibly important and underexplored.

TL;DR

Early AI safety arguments often assumed we wouldn’t get meaningful warning shots (non-existential public displays of misalignment) before catastrophic misalignment, meaning things would go from “seems fine” to “we lose” pretty quickly. Given what we now know about AI development (model weight changes, jagged capabilities, slow or fizzled takeoff), that assumption looks weaker than it used to.

Some people gesture at “warning shots,” but almost no one is working on what we should do in anticipation of one. That seems like a mistake. Preparing for warning shots—especially ambiguous ones—could be a high-leverage and neglected area of AI safety.

The classic “no warning shot” picture

A common view in early AI safety research—associated especially with Yudkowsky and Bostrom—was roughly:

  • A sufficiently intelligent misaligned system would know that revealing its misalignment while weak is bad for it.
  • So if things go wrong, they go wrong suddenly (a “sharp left turn”).
  • Therefore, we shouldn’t expect intermediate failures that clearly demonstrate large-scale risks before it’s too late.

If this picture is [...]

---

Outline:

(00:24) TL;DR

(01:07) The classic no warning shot picture

(01:51) Why this picture now looks less likely

(03:50) Warning shots can shift the Overton Window

(05:07) Preparedness matters because warning shots could be ambiguous

(07:10) Risks and perverse incentives

(07:47) A speculative implication for AI safety research

(08:25) Conclusion

The original text contained 5 footnotes which were omitted from this narration.

---

First published:

February 5th, 2026

Source:

https://www.lesswrong.com/posts/GKtwwqusm4vxqkChc/preparing-for-a-warning-shot

---

Narrated by TYPE III AUDIO.

