
An artificial intelligence capable of improving itself runs the risk of growing intelligent beyond any human capacity and outside of our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind. (Original score by Point Lobo.)
Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Sebastian Farquhar, Oxford University philosopher.
Learn more about your ad-choices at https://www.iheartpodcastnetwork.com
See omnystudio.com/listener for privacy information.
4.9 • 6,375 ratings