


This is a personal post and does not necessarily reflect the opinion of other members of Apollo Research. This blog post is paired with our announcement that Apollo Research is spinning out from fiscal sponsorship into a public benefit corporation (PBC).
Summary of main claims:
---
Outline:
(02:07) Definition of AGI safety products
(05:37) Argument 1: Sufficient Incentive Alignment
(06:40) Transfer in time: AGI could be a scaled-up version of current systems
(08:20) Transfer in problem space: Some frontier problems are not too dissimilar from safety problems that have large-scale demand
(09:58) Argument 2: Taking AGI & the economy seriously
(12:13) Argument 3: Automated AI safety work requires scale
(14:59) Argument 4: The market doesn't solve safety on its own
(18:02) Limitations
(21:46) Conclusion
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
By LessWrong
