

This is the third of three posts summarizing what I learned when I interviewed 17 AI safety experts about their "big picture" of the existential AI risk landscape: how AGI will play out, how things might go wrong, and what the AI safety community should be doing. See here for a list of the participants and the standardized list of questions I asked.
This post summarizes the responses I received from asking “Are there any big mistakes the AI safety community has made in the past or are currently making?”
A rough decomposition of the main themes brought up. The figures omit some less popular themes, and double-count respondents who brought up more than one theme.
“Yeah, probably most things people are doing are mistakes. This is just some random group of people. Why would they be making good decisions on priors? When I look at most things people are [...]
---
Outline:
(01:58) How to read this post
(02:51) Too many galaxy-brained arguments and not enough empiricism
(05:09) Problems with research
(06:40) Too insular
(11:16) Bad messaging
(13:47) AI safety's relationship with the leading AGI companies
(16:37) The bandwagon
(17:44) Pausing is bad
(18:17) Discounting public outreach and governance as a route to safety
(21:36) Conclusion
The original text contained 1 footnote which was omitted from this narration.
---
First published:
Source: LessWrong
Narrated by TYPE III AUDIO.