
Max Tegmark recently published a post, “Which side of the AI safety community are you in?”, where he carves the AI safety community into two camps:
Camp A) “Race to superintelligence safely”: People in this group typically argue that “superintelligence is inevitable because of X”, and it’s therefore better that their in-group (their company or country) build it first. X is typically some combination of “Capitalism”, “Moloch”, “lack of regulation” and “China”.
Camp B) “Don’t race to superintelligence”: People in this group typically argue that “racing to superintelligence is bad because of Y”. Here Y is typically some combination of “uncontrollable”, “1984”, “disempowerment” and “extinction”.
I think this framing is counterproductive. Instead, here's the oversimplified framing that I prefer:
Plan 1: Try to get international coordination to stop the race to superintelligence, and then try to put humanity on a better trajectory so we can safely build aligned superintelligence [...]
The original text contained 6 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.
By LessWrong
