
Two years ago, AI systems were still fumbling at basic reasoning. Today, they’re drafting legal briefs, solving advanced math problems, and diagnosing medical conditions at expert level. At this dizzying pace, it's difficult to imagine what the technology will be capable of just years from now, let alone decades. But in their new book, If Anyone Builds It, Everyone Dies, Eliezer Yudkowsky and Nate Soares — co-founder and president of the Machine Intelligence Research Institute (MIRI), respectively — argue that there's one easy call we can make: the default outcome of building superhuman AI is that we lose control of it, with consequences severe enough to threaten humanity's survival.
Yet despite leading figures in the AI industry expressing concerns about extinction risks from AI, the companies they head up remain engaged in a high-stakes race to the bottom. The incentives are enormous, and the brakes are weak. Having studied [...]
---
Outline:
(01:14) Today's AI Systems Are Grown Like Organisms, Not Engineered Like Machines
(03:43) You Don't Get What You Train For
(08:23) AI's Favorite Things
(09:27) Why We'd Lose
(12:47) The Case for Hope
(13:51) Discussion about this post
---
Narrated by TYPE III AUDIO.
---
By Center for AI Safety