


An artificial intelligence capable of improving itself runs the risk of growing intelligent beyond any human capacity and outside of our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind. (Original score by Point Lobo.)
Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Sebastian Farquhar, Oxford University philosopher.
Learn more about your ad-choices at https://www.iheartpodcastnetwork.com
See omnystudio.com/listener for privacy information.
By iHeartPodcasts · 4.9 (6,438 ratings)
