
A post by Michael Nielsen that I found quite interesting. I have reproduced the full essay content here, since I think Michael is fine with that, but feel free to let me know if I should only excerpt it.
This is the text for a talk exploring why experts disagree so strongly about whether artificial superintelligence (ASI) poses an existential risk to humanity. I review some key arguments on both sides, emphasizing that the fundamental danger isn't about whether "rogue ASI" gets out of control: it's the raw power ASI will confer, and the lower barriers to creating dangerous technologies. This point is not new, but has two underappreciated consequences. First, many people find rogue ASI implausible, and this has led them to mistakenly dismiss existential risk. Second, much work on AI alignment, while well-intentioned, speeds progress toward catastrophic capabilities, without addressing our world's potential vulnerability to dangerous technologies.
[...]
---
Outline:
(06:37) Biorisk scenario
(17:42) The Vulnerable World Hypothesis
(26:08) Loss of control to ASI
(32:18) Conclusion
(38:50) Acknowledgements
The original text contained 29 footnotes which were omitted from this narration.
---
---
Narrated by TYPE III AUDIO.
