
Is AI takeover like a nuclear meltdown? A coup? A plane crash?
My day job is thinking about safety measures that aim to reduce catastrophic risks from AI (especially risks from egregious misalignment). The two main themes of this work are the design of such measures (what's the space of techniques we might expect to be affordable and effective) and their evaluation (how do we decide which safety measures to implement, and whether a set of measures is sufficiently robust). I focus especially on AI control, where we assume our models are trying to subvert our safety measures and aspire to find measures that are robust anyway.
Like other AI safety researchers, I often draw inspiration from other fields that contain potential analogies. Here are some of those fields, my opinions on their [...]
---
Outline:
(01:04) Robustness to insider threats
(07:16) Computer security
(09:58) Adversarial risk analysis
(11:58) Safety engineering
(13:34) Physical security
(18:06) How human power structures arise and are preserved
The original text contained 1 image which was described by AI.
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
