


Even if we solve the AI alignment problem, we still face post-alignment problems, which are all the other existential problems
People have identified various imposing problems that we may need to solve before developing ASI. An incomplete list of topics: misuse; animal-inclusive AI; AI welfare; S-risks from conflict; gradual disempowerment; permanent mass unemployment; risks from malevolent actors/AI-enabled coups/gradual concentration of power; moral error.
If we figure out how to resolve one of these problems, we still have to deal with all the others. If even one problem remains unsolved, the future could be catastrophically bad. That fact diminishes the promise of working on problems individually.
A global moratorium on superintelligence buys us more time to work on alignment as well as all of the post-alignment problems. Pausing AI is in the common interest of many causes.
Cross-posted from my website.
We can't delay until after ASI
If we figure out how to align ASI, can it solve post-alignment problems for us? Or can we use ASI to enable a Long Reflection? No.
To build an aligned ASI, one of two conditions must hold:
---
Outline:
(01:15) We can't delay until after ASI
(03:12) What's the alternative to pausing?
The original text contained 3 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.
By LessWrong