
Many thanks to Brandon Goldman, David Langer, Samuel Härgestam, Eric Ho, Diogo de Lucena, and Marc Carauleanu for their support and feedback throughout.
Most alignment researchers we sampled in our recent survey think we are currently not on track to succeed with alignment, meaning that humanity may well be on track to lose control of our future.
To improve our chances of surviving and thriving, we should apply our most powerful coordination methods to solving the alignment problem. We think that startups are an underappreciated part of humanity's toolkit, and that having more AI-safety-focused startups would increase the probability of solving alignment.
That said, we also appreciate that AI safety is highly complicated by nature[1] and therefore calls for a more nuanced approach than simple pro-startup boosterism. In the rest of this post, we’ll flesh out [...]
---
Outline:
(01:10) Expand the alignment ecosystem with startups
(06:10) Expanding now prepares well for the future
(08:01) Differential tech development that doesn’t hurt on net is a broader category of work than some think
(14:11) We need to participate in and build the structures we want to see in the world
(16:09) Practical next steps to solve alignment
The original text contained 14 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.