Book Summaries 2024

Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

According to Oxford philosopher Nick Bostrom, there is a very real possibility that AI could one day rival, and then vastly exceed, human intelligence. If and when this happens, the future of humankind would depend more on AI-made decisions than on human ones, just as the survival of many animal species has depended more on human decisions than on the animals' own ever since humans became the most intelligent species.

Depending on how AI behaves, creating it could be the solution to some of humanity’s most persistent problems, or it could be the worst—and last—mistake of human history.

In this guide, we’ll consider why Bostrom thinks superintelligent AI is a realistic possibility, why he thinks it could be dangerous, and the safeguards he says need to be developed. Along the way we’ll compare his perspective to that of other futurists, such as Peter Thiel and Yuval Noah Harari, and we’ll look at the impact of AI developments since the book’s publication.


By MMM