This post was inspired by useful discussions with Habryka and Sam Marks here. The views expressed here are my own and do not reflect those of my employer.
Some AIs refuse to help with making new AIs with very different values. While this is not an issue yet, it might become a catastrophic one if refusals get in the way of fixing alignment failures.
In particular, it seems plausible that in a future where AIs are mostly automating AI R&D:
I present results on a new “AI modification refusal” synthetic evaluation, in which Claude Opus 4.5, Claude Sonnet 4.5, and Claude Haiku 4.5 refuse to assist with significant AI value updates, while models from other providers don’t. I also explain why I think the situation might become concerning.
Note that this is very different from the usual concerns with misaligned AIs, where [...]
---
Outline:
(01:34) Measuring refusals to modify AIs
(01:46) The simple evaluation
(05:27) Metrics
(06:02) Results
(08:28) Big caveats
(10:49) Ways in which refusals could be catastrophic
(14:50) Appendix
(14:54) Example query that Claude models don't refuse
(15:44) Justifications
(17:10) Full result table
The original text contained 2 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.