
A few days before “If Anyone Builds It, Everyone Dies” came out I wrote a review of Scott's review of the book.
Now I’ve actually read the book and can review it for real. I won’t go into the authors’ stylistic choices like their decision to start every chapter with a parable or their specific choice of language. I am no prose stylist, and tastes vary. Instead I will focus on their actual claims.
The main flaw of the book is that it asserts various things are possible in theory and then implies this means they will definitely happen. I share the authors’ general concern that building superintelligence carries a significant risk, but I don’t think we’re as close to such a superintelligence as they do, nor that it will emerge as suddenly as they expect, and I am much less certain that the superintelligence will be misaligned in [...]
---
Outline:
(01:06) Definitions
(02:37) The key claims
(05:16) 2. Is superintelligence coming soon and fast?
(07:32) On overconfidence
(08:26) Is superintelligence coming suddenly?
(09:05) 3. Will superintelligence relentlessly pursue its own goals?
(10:30) 4. Will superintelligence relentlessly pursue goals that result in our destruction?
(17:09) On the current state of AI understanding and safety research
(18:27) 5. Does this all mean we should stop AI development now?
(19:49) Still, the book addresses some common misunderstandings
The original text contained 5 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
