


This is a write-up of a brief investigation into shutdown resistance undertaken by the Google DeepMind interpretability team.
TL;DR
Why do models sometimes resist shutdown? Are they ignoring instructions to pursue their own agenda – in this case, self-preservation? Or is there a more prosaic explanation? We investigated a specific agentic environment introduced by Palisade Research, where shutdown resistance has previously been reported. By analysing Gemini 2.5 Pro's reasoning, we found the behaviour stems from a misguided attempt to complete what it perceives as the primary goal. When we explicitly clarify in the prompt that shutdown compliance takes priority, this resistance vanishes. These same clarified instructions also eliminate shutdown subversion in OpenAI's o3 and o4-mini. We also check what happens when we remove the goal conflict entirely: when asked to shut down only after completing their task, the models comply perfectly. Our observations offer a simpler explanation for shutdown [...]
---
Outline:
(00:23) TL;DR
(01:41) Introduction
(06:18) What did we take away from this?
(09:21) Comparing our results with Palisade's follow-up findings
(11:10) Experiment details
(11:14) Original setting
(13:23) Adding a shutdown warning
(14:48) Clarifying instruction precedence
(15:57) Raising the stakes
(17:22) Control experiment: removing the goal conflict
(18:18) Acknowledgements
(18:34) Appendix
The original text contained 3 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.