
A few days before “If Anyone Builds It, Everyone Dies” came out I wrote a review of Scott's review of the book.
Now I’ve actually read the book and can review it for real. I won’t go into the authors’ stylistic choices, like their decision to start every chapter with a parable or their specific choice of language. I am no prose stylist, and tastes vary. Instead I will focus on their actual claims.
The main flaw of the book is that it asserts various things are possible in theory, and then implies this means they will definitely happen. I share the authors’ general concern that building superintelligence carries a significant risk, but I don’t think we’re as close to such a superintelligence as they do, I don’t think it will emerge as suddenly as they expect, and I am much less certain that the superintelligence will be misaligned in [...]
---
Outline:
(01:06) Definitions
(02:37) The key claims
(05:16) 2. Is superintelligence coming soon and fast?
(07:32) On overconfidence
(08:26) Is superintelligence coming suddenly?
(09:05) 3. Will superintelligence relentlessly pursue its own goals?
(10:30) 4. Will superintelligence relentlessly pursue goals that result in our destruction?
(17:09) On the current state of AI understanding and safety research
(18:27) 5. Does this all mean we should stop AI development now?
(19:49) Still, the book addresses some common misunderstandings
The original text contained 5 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
By LessWrong
