
Personal Views Only
I thought Maxwell Tabarrok's The Offense-Defense Balance Rarely Changes was a good contribution to the discourse, and I appreciated his attempt to use various lines of data to “kick the tires” on the case that an AI could in principle defeat all humans by inventing technologies that would allow it to dominate them. I think Tabarrok is correct that this is a crux in the debate about how pessimistic to be about humanity's prospects of defending itself against a misaligned AGI,[1] and so in turn a crux about the probability of extinction or similarly bad outcomes from misaligned AI.
One thing I think Tabarrok is importantly correct about is that the idea of an “offense–defense balance” in hypothetical future human–AI conflicts is important but often undertheorized. Anyone interested in this question should be humbled by the fact that offense–defense theory and its [...]
---
Outline:
(01:26) Why I am not as reassured by the historical trends in conflict fatality rates as Tabarrok
(02:17) Reason 1: Deaths in conflict are not a measure of the offense–defense balance
(04:03) Reason 2: Only a temporary period of offense-dominance may be sufficient for AI takeover
(05:29) Reason 3: AI might not need to defend itself against humans wielding AI-made offensive weapons
(07:30) Reason 4: The destructiveness of the most destructive conflicts is increasing
The original text contained 7 footnotes, which were omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.
