
An artificial intelligence capable of improving itself runs the risk of growing intelligent beyond any human capacity and outside of our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind. (Original score by Point Lobo.)
Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Sebastian Farquhar, Oxford University philosopher.
Learn more about your ad-choices at https://www.iheartpodcastnetwork.com
See omnystudio.com/listener for privacy information.
By iHeartPodcasts · 4.9 (6,430 ratings)
