


Daniel notes: This is a linkpost for Vitalik's post. I've copied the text below so that I can mark it up with comments.
...
Special thanks to Balvi volunteers for feedback and review
In April this year, Daniel Kokotajlo, Scott Alexander and others released what they describe as "a scenario that represents our best guess about what [the impact of superhuman AI over the next 5 years] might look like". The scenario predicts that by 2027 we will have made superhuman AI and the entire future of our civilization hinges on how it turns out: by 2030 we will get either (from the US perspective) utopia or (from any human's perspective) total annihilation.
In the months since then, there has been a large volume of responses, with varying perspectives on how likely the scenario they present is. For example:
---
Outline:
(04:24) Bio doom is far from the slam-dunk that the scenario describes
(10:16) What about combining bio with other types of attack?
(11:51) Cybersecurity doom is also far from a slam-dunk
(15:29) Super-persuasion doom is also far from a slam-dunk
(18:17) Implications of these arguments
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
