
Introduction/summary
In my last post, I laid out my picture of what it would even be to solve the alignment problem. In this series of posts, I want to talk about how we might solve it.
To be clear: I don’t think that humans necessarily need to solve the whole problem – at least, not on our own. To the contrary, I think we should be trying hard to get help from sufficiently capable and trusted AIs.[1] And I think various paths that my last post would class as “avoiding” or “handling but not solving” the problem, rather than “solving it,” are important to consider too, especially in the near-term.[2]
I’ll discuss this more in future posts. Still, even if we ultimately need AIs to help us solve the problem, I expect it to be useful to have as direct a grip as we can, now, on [...]
---
Outline:
(00:08) Introduction/summary
(02:22) Summary of the series
(02:35) Part 1 – Ontology
(06:16) What does success look like?
(09:10) Is it even worth talking about the full problem?
(12:53) Preliminaries re: avoiding takeover
(13:15) The ontology I’m using
(15:26) A spectrum of motivation control and option control
(23:02) Incentive structure safety cases
(30:12) Carving up the space of approaches to motivation and option control
(30:42) Internal vs. external variables
(32:50) Inspection vs. intervention
(35:31) AI-assisted improvements
The original text contained 20 footnotes which were omitted from this narration.
The original text contained 13 images which were described by AI.
---
First published:
Source:
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.