
An artificial intelligence capable of improving itself runs the risk of growing intelligent beyond any human capacity and outside of our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind. (Original score by Point Lobo.)
Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Sebastian Farquhar, Oxford University philosopher.
Learn more about your ad-choices at https://www.iheartpodcastnetwork.com
See omnystudio.com/listener for privacy information.
4.9
6,404 ratings