


AI alignment is probably the most pressing issue of our time. Unfortunately, it has also become one of the most controversial, with AI accelerationists accusing AI doomers (or "AI-not-kill-everyoneism-ers") of being Luddites who would rather keep humanity shackled to the horse and plough than risk any progress, whilst the doomers in turn accuse the accelerationists of rushing humanity as fast as it can go straight off a cliff.
As Robin Hanson likes to point out, trying to change policy on a polarised issue is backbreaking work. But if you can find a way to pull sideways, you can make easy progress with no one pulling the other way.
So can we think of a research program that:
a) will produce critically useful results even if AI isn't dangerous and its benefits far outweigh its costs, and
b) would likely be sufficient to prevent doom if the project is successful and AI does turn out to [...]
---
First published:
Source:
Narrated by TYPE III AUDIO.
By LessWrong
