
In this post, Eliezer Yudkowsky argues that the existential risk posed by artificial superintelligence cannot be mitigated by individual action, corporate self-regulation, or localised prohibitions, but only by coordinated international law, specifically a global treaty restricting the specialised hardware used to train and run frontier AI systems. He systematically dismantles the notion that extralegal or violent resistance would be effective, on the grounds that shutting down any single company, researcher, or national datacenter does nothing to change the overall trajectory. At the same time, he makes the case that lawful, predictable, avoidable state force is categorically different from the chaotic violence some critics conflate it with.
https://x.com/ESYudkowsky/status/2043601524815716866
By Readings of great articles in AI voices
44 ratings
