LessWrong posts by zvi

“Bostrom Goes Unheard” by Zvi

[Editor's Note: This post is split off from AI #38 and only on LessWrong because I want to avoid overloading my general readers with this sort of thing at this time, and also I think it is potentially important we have a link available. I plan to link to it from there with a short summary.]

Nick Bostrom was interviewed on UnHerd on a wide variety of questions, primarily existential risk and AI; I found it thoughtful throughout. He spent the first 80% of the time talking about existential risk. Then, in the last 20%, he expressed the concern that it was unlikely but possible that we would overshoot our concerns about AI and never build AGI at all, which would be a tragedy.

How did those who would dismiss AI risk and build AGI as fast as possible react?

About how you would expect. This is [...]

---

Outline:

(04:40) What Bostrom Centrally Said Was Mostly Not New or Controversial

(06:54) Responses Confirming Many Concerned About Existential Risk Mostly Agree

(11:49) Quoted Text in Detail

(19:42) The Broader Podcast Context

(21:35) A Call for Nuance

(24:33) The Quoted Text Continued

(27:08) Conclusion

---

First published:

November 13th, 2023

Source:

https://www.lesswrong.com/posts/PyNqASANiAuG7GrYW/bostrom-goes-unheard

---

Narrated by TYPE III AUDIO.

