
[Editor's Note: This post is split off from AI #38 and only on LessWrong because I want to avoid overloading my general readers with this sort of thing at this time, and also I think it is potentially important we have a link available. I plan to link to it from there with a short summary.]
Nick Bostrom was interviewed on a wide variety of questions on UnHerd, primarily on existential risk and AI, and I found it thoughtful throughout. He spent the first 80% of the time talking about existential risk. Then, in the last 20%, he expressed the concern that it was unlikely but possible that we would overshoot our concerns about AI and never build AGI at all, which would be a tragedy.
How did those who would dismiss AI risk and build AGI as fast as possible react?
About how you would expect. This is [...]
---
Outline:
(04:40) What Bostrom Centrally Said Was Mostly Not New or Controversial
(06:54) Responses Confirming Many Concerned About Existential Risk Mostly Agree
(11:49) Quoted Text in Detail
(19:42) The Broader Podcast Context
(21:35) A Call for Nuance
(24:33) The Quoted Text Continued
(27:08) Conclusion
---
Narrated by TYPE III AUDIO.
