
Personal Views Only
I thought Maxwell Tabarrok's The Offense-Defense Balance Rarely Changes was a good contribution to the discourse, and I appreciated his attempt to use various lines of data to “kick the tires” on the case that an AI could in principle defeat all humans by inventing technologies that would allow it to dominate them. I think Tabarrok is correct that this is a crux in the debate about how pessimistic to be about humanity's ability to defend itself against a misaligned AGI,[1] and so in turn a crux about the probability of extinction or similarly bad outcomes from misaligned AI.
One thing I think Tabarrok is importantly correct about is that the idea of an “offense–defense balance” in hypothetical future human–AI conflicts is important but often undertheorized. Anyone interested in this question should be humbled by the fact that offense–defense theory and its [...]
---
Outline:
(01:26) Why I am not as reassured by the historical trends in conflict fatality rates as Tabarrok
(02:17) Reason 1: Deaths in conflict is not a measure of the offense–defense balance
(04:03) Reason 2: Only a temporary period of offense-dominance may be sufficient for AI takeover
(05:29) Reason 3: AI might not need to defend itself against humans wielding AI-made offensive weapons
(07:30) Reason 4: The destructiveness of the most destructive conflicts is increasing
The original text contained 7 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.