
Sign up to save your podcasts
Or
Work done with James Faina, Evan Hubinger and Ethan Perez.
This is ongoing research on speculative threats, which might be of interest to researchers in the field of AI safety. We ask that readers treat this as a work in progress, rather than as a mature research paper.
To prevent highly capable and potentially misaligned LLMs from taking bad actions, we might want to monitor all of their outputs (Inan et al. 2023, Greenblatt 2024, Balesni 2024). How do we ensure that highly capable LLMs can’t disable the monitoring system?
Well-configured permissions and good cybersecurity may prevent highly capable LLMs from modifying inference code they are not supposed to modify, but highly capable LLMs might be used to improve the inference code, which opens up the risk of LLMs subtly disabling the monitor.
We build a toy setting where we study how well current LLMs can generate such bugs [...]
---
Outline:
(01:48) The threat model
(01:51) Background on highly capable LLMs
(03:01) The high-level concern
(04:49) Mitigations
(05:43) Our toy setting
(05:47) A very simplified inference code
(07:02) Attacks
(08:07) Checking if the generation monitor is disabled
(09:41) The monitor
(10:45) Results
(10:48) Quantitative analysis
(12:21) Qualitative analysis
(13:53) Conclusion
(14:32) Acknowledgements
The original text contained 5 footnotes which were omitted from this narration.
The original text contained 3 images which were described by AI.
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
Work done with James Faina, Evan Hubinger and Ethan Perez.
This is ongoing research on speculative threats, which might be of interest to researchers in the field of AI safety. We ask that readers treat this as a work in progress, rather than as a mature research paper.
To prevent highly capable and potentially misaligned LLMs from taking bad actions, we might want to monitor all of their outputs (Inan et al. 2023, Greenblatt 2024, Balesni 2024). How do we ensure that highly capable LLMs can’t disable the monitoring system?
Well-configured permissions and good cybersecurity may prevent highly capable LLMs from modifying inference code they are not supposed to modify, but highly capable LLMs might be used to improve the inference code, which opens up the risk of LLMs subtly disabling the monitor.
We build a toy setting where we study how well current LLMs can generate such bugs [...]
---
Outline:
(01:48) The threat model
(01:51) Background on highly capable LLMs
(03:01) The high-level concern
(04:49) Mitigations
(05:43) Our toy setting
(05:47) A very simplified inference code
(07:02) Attacks
(08:07) Checking if the generation monitor is disabled
(09:41) The monitor
(10:45) Results
(10:48) Quantitative analysis
(12:21) Qualitative analysis
(13:53) Conclusion
(14:32) Acknowledgements
The original text contained 5 footnotes which were omitted from this narration.
The original text contained 3 images which were described by AI.
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
26,346 Listeners
2,388 Listeners
7,994 Listeners
4,133 Listeners
87 Listeners
1,444 Listeners
8,913 Listeners
87 Listeners
373 Listeners
5,417 Listeners
15,281 Listeners
465 Listeners
122 Listeners
76 Listeners
450 Listeners