


There's a lot of talk these days about the existential risk that artificial intelligence poses to humanity -- that somehow the AIs will rise up and destroy us or become our overlords.
In The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking (Oxford UP), Shannon Vallor argues that the actual, and very alarming, existential risk of AI we face right now is quite different. Because some AI technologies, such as ChatGPT and other large language models, can closely mimic the outputs of an understanding mind without possessing actual understanding, they can encourage us to surrender the activities of thinking and reasoning. This risks diminishing our ability to respond to challenges and to imagine and bring about different futures. In her compelling book, Vallor, who holds the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the University of Edinburgh's Edinburgh Futures Institute, critically examines AI doomers and longtermism, the nature of AI in relation to human intelligence, and the technology industry's hand in diverting our attention from the serious risks we actually face.
Learn more about your ad choices. Visit megaphone.fm/adchoices
Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/philosophy
