
An artificial intelligence capable of improving itself runs the risk of growing intelligent beyond any human capacity and outside of our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind. (Original score by Point Lobo.)
Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Sebastian Farquhar, Oxford University philosopher.
Learn more about your ad-choices at https://www.iheartpodcastnetwork.com
See omnystudio.com/listener for privacy information.
4.9 (6,426 ratings)