
We often talk about ensuring control, which in the context of this doc refers to preventing AIs from being able to cause existential problems, even if the AIs attempt to subvert our countermeasures. To better contextualize control, I think it's useful to discuss the main threats. I'll focus my discussion on threats induced by misalignment which could plausibly increase existential risk.
While control is often associated with preventing security failures, I'll discuss other issues like sabotaging safety-critical work. I'll list and explain my prioritization of threats. As part of this, I'll argue that rogue internal deployments—cases where the AI is running within the company's datacenter but with control measures disabled—can be substantially worse than self-exfiltration—the AI stealing its weights and running them on an external server—as it might give the AI access to much more compute and the ability to interfere with important things done within the AI [...]
---
Outline:
(01:06) Some clusters of threats
(05:49) Concentrated vs non-concentrated failures
(08:27) My prioritization between these threats
The original text contained 1 footnote which was omitted from this narration.
---
Narrated by TYPE III AUDIO.