


Dan Hendrycks, Eric Schmidt and Alexandr Wang released an extensive paper titled Superintelligence Strategy. There is also an op-ed in Time that summarizes.
The major AI labs expect superintelligence to arrive soon. They might be wrong about that, but at minimum we need to take the possibility seriously.
At a minimum, the possibility of imminent superintelligence will be highly destabilizing. Even if you do not believe it represents an existential risk to humanity (and if so you are very wrong about that), the imminent development of superintelligence is an existential threat to the power of everyone not developing it.
Planning a realistic approach to that scenario is necessary.
What would it look like to take superintelligence seriously? What would it look like if everyone took superintelligence seriously, before it was developed?
The proposed regime here, Mutually Assured AI Malfunction (MAIM), relies on various assumptions [...]
---
Outline:
(01:10) ASI (Artificial Superintelligence) is Dual Use
(01:51) Three Proposed Interventions
(05:48) The Shape of the Problems
(06:57) Strategic Competition
(08:38) Terrorism
(10:47) Loss of Control
(12:30) Existing Strategies
(15:14) MAIM of the Game
(18:02) Nonproliferation
(20:01) Competitiveness
(22:16) Laying Out Assumptions: Crazy or Crazy Enough To Work?
(25:52) Don't MAIM Me Bro
---
Narrated by TYPE III AUDIO.
---
By LessWrong
