
Crossposted to the EA Forum and my Substack.
Confidence level: moderate uncertainty and not that concrete (yet). Exploratory, but I think this is plausibly important and underexplored.
TL;DR
Early AI safety arguments often assumed we wouldn’t get meaningful warning shots (non-existential public displays of misalignment) before catastrophic misalignment, meaning things would go from “seems fine” to “we lose” pretty quickly. Given what we now know about AI development (model weight changes, jagged capabilities, slow or fizzled takeoff), that assumption looks weaker than it used to.
Some people gesture at “warning shots,” but almost no one is working on what we should do in anticipation of one. That seems like a mistake. Preparing for warning shots—especially ambiguous ones—could be a high-leverage and neglected area of AI safety.
The classic “no warning shot” picture
A common view in early AI safety research—associated especially with Yudkowsky and Bostrom—was roughly:
If this picture is [...]
---
Outline:
(00:24) TL;DR
(01:07) The classic no warning shot picture
(01:51) Why this picture now looks less likely
(03:50) Warning shots can shift the Overton Window
(05:07) Preparedness matters because warning shots could be ambiguous
(07:10) Risks and perverse incentives
(07:47) A speculative implication for AI safety research
(08:25) Conclusion
The original text contained 5 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
By LessWrong