


This is a new introduction to AI as an extinction threat, previously posted to the MIRI website in February alongside a summary. It was written independently of Eliezer and Nate's forthcoming book, If Anyone Builds It, Everyone Dies, and isn't a sneak peek of the book. Since the book is long and costs money, we expect this to be a valuable resource in its own right even after the book comes out next month.[1]
The stated goal of the world's leading AI companies is to build AI that is general enough to do anything a human can do, from solving hard problems in theoretical physics to deftly navigating social environments. Recent machine learning progress seems to have brought this goal within reach. At this point, we would be uncomfortable ruling out the possibility that AI more capable than any human is achieved in the next year or two, and [...]
---
Outline:
(02:27) 1. There isn't a ceiling at human-level capabilities.
(08:56) 2. ASI is very likely to exhibit goal-oriented behavior.
(15:12) 3. ASI is very likely to pursue the wrong goals.
(32:40) 4. It would be lethally dangerous to build ASIs that have the wrong goals.
(46:03) 5. Catastrophe can be averted via a sufficiently aggressive policy response.
The original text contained 1 footnote which was omitted from this narration.
---
Narrated by TYPE III AUDIO.
By LessWrong
