
An artificial intelligence capable of improving itself runs the risk of growing more intelligent than any human and slipping beyond our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind. (Original score by Point Lobo.)
Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Sebastian Farquhar, Oxford University philosopher.
Learn more about your ad-choices at https://www.iheartpodcastnetwork.com
See omnystudio.com/listener for privacy information.
4.9 (6,375 ratings)