Twitter thread here.
tl;dr
When prompted, current models can sandbag ML experiments and research decisions without being detected by zero-shot prompted monitors. Claude 3.5 Sonnet (new) can only sandbag effectively when shown a one-shot example, whereas Claude 3.7 Sonnet can do so without an example (zero-shot). We are not yet worried about sabotage in today's models, since the sandbagging we observe would be noticed by humans.
Introduction
AI models might soon assist with large parts of AI and AI safety research, completing multiple hours of work independently. These automated researchers will likely not be sufficiently capable to cause immediate catastrophic harm. However, they will have the opportunity to engage in sabotage, including sabotaging the research we rely on to make more powerful AI models safe. For example, a malicious automated researcher might slow down AI safety research by purposefully making promising experiments fail.
In this work we aim [...]
---
Outline:
(00:15) tl;dr
(00:45) Introduction
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By LessWrong