
An artificial intelligence capable of improving itself runs the risk of growing intelligent beyond any human capacity and outside of our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind. (Original score by Point Lobo.)
Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Sebastian Farquhar, Oxford University philosopher.
Learn more about your ad-choices at https://www.iheartpodcastnetwork.com
See omnystudio.com/listener for privacy information.