
This is the third of three posts summarizing what I learned when I interviewed 17 AI safety experts about their "big picture" of the existential AI risk landscape: how AGI will play out, how things might go wrong, and what the AI safety community should be doing. See here for a list of the participants and the standardized list of questions I asked.
This post summarizes the responses I received from asking “Are there any big mistakes the AI safety community has made in the past or are currently making?”
A rough decomposition of the main themes brought up. The figures omit some less popular themes, and double-count respondents who brought up more than one theme.
“Yeah, probably most things people are doing are mistakes. This is just some random group of people. Why would they be making good decisions on priors? When I look at most things people are [...]
---
Outline:
(01:58) How to read this post
(02:51) Too many galaxy-brained arguments and not enough empiricism
(05:09) Problems with research
(06:40) Too insular
(11:16) Bad messaging
(13:47) AI safety's relationship with the leading AGI companies
(16:37) The bandwagon
(17:44) Pausing is bad
(18:17) Discounting public outreach and governance as a route to safety
(21:36) Conclusion
The original text contained 1 footnote which was omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.