We often talk about ensuring control, which in the context of this doc refers to preventing AIs from being able to cause existential problems, even if the AIs attempt to subvert our countermeasures. To better contextualize control, I think it's useful to discuss the main countermeasures (building on my prior discussion of the main threats). I'll focus my discussion on countermeasures for reducing existential risk from misalignment, though some of these countermeasures will help with other problems such as human power grabs.
I'll overview control measures that seem important to me, including software infrastructure, security measures, and other control measures (which often involve usage of AI or human labor). In addition to the core control measures (which are focused on monitoring, auditing, and tracking AIs and on blocking and/or resampling suspicious outputs), there is a long tail of potentially helpful control measures, including things like better control of [...]
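As a rough illustration of the monitor/block/resample loop that the core measures center on, here is a minimal Python sketch. All names (`generate`, `monitor_score`), the threshold, and the resample budget are hypothetical stand-ins, not details from the post; a real deployment would use a trusted monitor model and a human auditing pipeline.

```python
import random

# Hypothetical stand-in for an untrusted model producing an action.
def generate(prompt, seed):
    random.seed(seed)
    return f"action-{random.randint(0, 9)} for {prompt!r}"

# Hypothetical stand-in for a trusted monitor: suspiciousness in [0, 1].
def monitor_score(output):
    return 0.9 if "action-7" in output else 0.1

BLOCK_THRESHOLD = 0.8  # outputs scoring above this are never executed
MAX_RESAMPLES = 3      # resample a few times before escalating to audit

def controlled_generate(prompt):
    """Monitor each output; resample suspicious ones; block and flag if all fail."""
    for attempt in range(MAX_RESAMPLES):
        output = generate(prompt, seed=attempt)
        if monitor_score(output) < BLOCK_THRESHOLD:
            return output
    # Every sample looked suspicious: block execution, escalate to human audit.
    return None
```

The design choice here mirrors the text: suspicious outputs are not silently executed; they are resampled a bounded number of times, and persistent suspicion triggers blocking and auditing rather than deployment.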
---
Outline:
(01:13) Background and categorization
(05:54) Non-ML software infrastructure
(06:50) The most important software infrastructure
(11:16) Less important software infrastructure
(18:50) AI and human components
(20:35) Core measures
(26:43) Other less-core measures
(31:33) More speculative measures
(40:10) Specialized countermeasures
(44:38) Other directions and what I'm not discussing
(45:46) Security improvements
---
First published:
Source:
Narrated by TYPE III AUDIO.