


Many thanks to Brandon Goldman, David Langer, Samuel Härgestam, Eric Ho, Diogo de Lucena, and Marc Carauleanu for their support and feedback throughout.
Most alignment researchers we sampled in our recent survey think we are currently not on track to succeed with alignment, meaning that humanity may well be on track to lose control of our future.
In order to improve our chances of surviving and thriving, we should apply our most powerful coordination methods towards solving the alignment problem. We think that startups are an underappreciated part of humanity's toolkit, and having more AI-safety-focused startups would increase the probability of solving alignment.
That said, we also appreciate that AI safety is highly complicated by nature[1] and therefore calls for a more nuanced approach than simple pro-startup boosterism. In the rest of this post, we’ll flesh out [...]
---
Outline:
(01:10) Expand the alignment ecosystem with startups
(06:10) Expanding now prepares well for the future
(08:01) Differential tech development that doesn’t hurt on net is a broader category of work than some think
(14:11) We need to participate in and build the structures we want to see in the world
(16:09) Practical next steps to solve alignment
The original text contained 14 footnotes which were omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.
