
In this paper, we make recommendations for how middle powers can band together through a binding international agreement to prevent the development of artificial superintelligence (ASI), without assuming initial cooperation by superpowers.
You can read the paper here: asi-prevention.com
In our previous work, Modelling the Geopolitics of AI, we pointed out that middle powers face a precarious predicament in a race to ASI. Lacking the means to seriously compete in the race or to unilaterally influence superpowers to halt development, they may need to resort to a strategy we dub the “Vassal's Wager”: allying themselves with a superpower and hoping that their sovereignty is respected after the superpower attains a decisive strategic advantage (DSA).
Of course, this wager only pays off if superpowers avert the extinction risks posed by powerful AI systems, something over which middle powers have little or no control. Thus, we argue that it is in the interest of most middle powers to collectively deter and prevent the development of ASI by any actor, including superpowers.
In this paper, we design an international agreement that could enable middle powers to form a coalition capable of achieving this goal. The agreement we propose is complementary to a “verification [...]
---
Outline:
(01:48) Key Mechanisms
(03:36) Path to Adoption
(04:48) Urgency
---
Linkpost URL:
https://asi-prevention.com
---
Narrated by TYPE III AUDIO.
By LessWrong