Artificial intelligence has leapt from speculative theory to everyday tool with astonishing speed, promising breakthroughs in science, medicine, and the ways we learn, live, and work. But to some of its earliest researchers, the race toward superintelligence represents not progress but an existential threat, one that could end humanity as we know it.
Eliezer Yudkowsky and Nate Soares, authors of If Anyone Builds It, Everyone Dies, join Oren to debate their claim that pursuing AI will end in human extinction. During the conversation, a skeptical Oren pushes them on whether meaningful safeguards are possible, what a realistic boundary between risk and progress might look like, and how society should judge the costs of stopping against the consequences of carrying on.
By American Compass · 4.5 (5,858 ratings)