
Last year I wrote the CAST agenda, arguing that aiming for Corrigibility As Singular Target was the least-doomed way to make an AGI. (Though it is almost certainly wiser to hold off on building it until we have more skill at alignment, as a species.)
I still basically believe that CAST is right. Corrigibility still seems like a promising target compared to full alignment with human values, since there's a better story for how a near-miss when aiming for corrigibility might be recoverable, whereas a near-miss when aiming for goodness could result in a catastrophe, due to the fragility of value. On top of this, corrigibility is significantly simpler and less philosophically fraught than human values, decreasing the amount of information that needs to be perfectly transmitted to the machine. Any spec, constitution, or whatever that attempts to balance corrigibility with other goals runs the risk of the convergent instrumental drives towards those other goals washing out the corrigibility. My most recent novel is intended to be an introduction to corrigibility that's accessible to laypeople, featuring a CAST AGI as a main character, and I feel good about what I wrote there.
But I'm starting to feel like certain [...]
---
Outline:
(02:42) 1. Oops I Ruined the Universe
(06:12) Is There an Obvious Fix?
(08:20) 2. Attractor Basin Masks Brittleness
(11:32) 3. Feedback Loops Quickly Disappear by Default
The original text contained 2 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.
