
Dan Hendrycks, Eric Schmidt and Alexandr Wang released an extensive paper titled Superintelligence Strategy. There is also an op-ed in Time that summarizes.
The major AI labs expect superintelligence to arrive soon. They might be wrong about that, but at minimum we need to take the possibility seriously.
At a minimum, the possibility of imminent superintelligence will be highly destabilizing. Even if you do not believe it represents an existential risk to humanity (and if you do not, you are very wrong about that), the imminent development of superintelligence is an existential threat to the power of everyone not developing it.
Planning a realistic approach to that scenario is necessary.
What would it look like to take superintelligence seriously? What would it look like if everyone took superintelligence seriously, before it was developed?
The proposed regime here, Mutually Assured AI Malfunction (MAIM), relies on various assumptions [...]
---
Outline:
(01:10) ASI (Artificial Superintelligence) is Dual Use
(01:51) Three Proposed Interventions
(05:48) The Shape of the Problems
(06:57) Strategic Competition
(08:38) Terrorism
(10:47) Loss of Control
(12:30) Existing Strategies
(15:14) MAIM of the Game
(18:02) Nonproliferation
(20:01) Competitiveness
(22:16) Laying Out Assumptions: Crazy or Crazy Enough To Work?
(25:52) Don't MAIM Me Bro
---
---
Narrated by TYPE III AUDIO.
By zvi5
22 ratings