
Sign up to save your podcasts
Or
This is a write-up of a brief investigation into shutdown resistance undertaken by the Google DeepMind interpretability team.
TL;DR
Why do models sometimes resist shutdown? Are they ignoring instructions to pursue their own agenda – in this case, self-preservation? Or is there a more prosaic explanation? We investigated a specific agentic environment introduced by Palisade Research, where shutdown resistance has previously been reported. By analysing Gemini 2.5 Pro's reasoning, we found the behaviour stems from a misguided attempt to complete what it perceives as the primary goal. When we explicitly clarify in the prompt that shutdown compliance takes priority, this resistance vanishes. These same clarified instructions also eliminate shutdown subversion in OpenAI's o3 and o4-mini. We also check what happens when we remove the goal conflict entirely: when asked to shut down only after completing their task, the models comply perfectly. Our observations offer a simpler explanation for shutdown [...]
---
Outline:
(00:23) TL;DR
(01:41) Introduction
(06:18) What did we take away from this?
(09:21) Comparing our results with Palisade's follow-up findings
(11:10) Experiment details
(11:14) Original setting
(13:23) Adding a shutdown warning
(14:48) Clarifying instruction precedence
(15:57) Raising the stakes
(17:22) Control experiment: removing the goal conflict
(18:18) Acknowledgements
(18:34) Appendix
The original text contained 3 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
This is a write-up of a brief investigation into shutdown resistance undertaken by the Google DeepMind interpretability team.
TL;DR
Why do models sometimes resist shutdown? Are they ignoring instructions to pursue their own agenda – in this case, self-preservation? Or is there a more prosaic explanation? We investigated a specific agentic environment introduced by Palisade Research, where shutdown resistance has previously been reported. By analysing Gemini 2.5 Pro's reasoning, we found the behaviour stems from a misguided attempt to complete what it perceives as the primary goal. When we explicitly clarify in the prompt that shutdown compliance takes priority, this resistance vanishes. These same clarified instructions also eliminate shutdown subversion in OpenAI's o3 and o4-mini. We also check what happens when we remove the goal conflict entirely: when asked to shut down only after completing their task, the models comply perfectly. Our observations offer a simpler explanation for shutdown [...]
---
Outline:
(00:23) TL;DR
(01:41) Introduction
(06:18) What did we take away from this?
(09:21) Comparing our results with Palisade's follow-up findings
(11:10) Experiment details
(11:14) Original setting
(13:23) Adding a shutdown warning
(14:48) Clarifying instruction precedence
(15:57) Raising the stakes
(17:22) Control experiment: removing the goal conflict
(18:18) Acknowledgements
(18:34) Appendix
The original text contained 3 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
26,469 Listeners
2,395 Listeners
7,928 Listeners
4,142 Listeners
89 Listeners
1,472 Listeners
9,207 Listeners
88 Listeners
417 Listeners
5,448 Listeners
15,237 Listeners
481 Listeners
121 Listeners
75 Listeners
461 Listeners