
Holden just published this paper on the Carnegie Endowment website. I thought it was a decent reference, so I figured I would crosspost it.
If-then commitments are an emerging framework for preparing for risks from AI without unnecessarily slowing the development of new technology. The more attention and interest there is in these commitments, the faster progress can be made toward a mature framework.
Introduction
Artificial intelligence (AI) could pose a variety of catastrophic risks to international security in several domains, including the proliferation and acceleration of cyberoffense capabilities, and of the ability to develop chemical or biological weapons of mass destruction. Even the most powerful AI models today are not yet capable enough to pose such risks,[1] but the coming years could see fast and hard-to-predict changes in AI capabilities. Both companies and governments have shown significant interest in finding ways to prepare for such risks without unnecessarily slowing [...]
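To make the "if-then" structure concrete, here is a minimal sketch of how such a commitment pairs a capability tripwire with a required mitigation. This is not from the paper; the tripwire wording, the `weapons_uplift` evaluation score, and the 0.5 threshold are all hypothetical placeholders for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class IfThenCommitment:
    # "If" a capability tripwire is crossed, "then" a named mitigation must be in place.
    tripwire: str                                # the dangerous capability being tested for
    tripped: Callable[[Dict[str, float]], bool]  # judges capability-evaluation scores
    mitigation: str                              # required response before further scaling or deployment


def required_mitigations(eval_scores: Dict[str, float],
                         commitments: List[IfThenCommitment]) -> List[str]:
    """Return every mitigation whose tripwire the model's evaluations have crossed."""
    return [c.mitigation for c in commitments if c.tripped(eval_scores)]


# Hypothetical tripwire, score name, and threshold, for illustration only.
commitments = [
    IfThenCommitment(
        tripwire="expert-level advice on chemical/biological weapons production",
        tripped=lambda scores: scores.get("weapons_uplift", 0.0) >= 0.5,
        mitigation="pause deployment until safeguards reliably refuse such advice",
    ),
]

print(required_mitigations({"weapons_uplift": 0.7}, commitments))
# -> ['pause deployment until safeguards reliably refuse such advice']
```

The point of the sketch is only the shape of the commitment: the "if" is an objective, evaluable capability threshold, and the "then" is a pre-specified mitigation, so that developers can agree on responses before the capability arrives.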
---
Outline:
(00:29) Introduction
(04:13) Walking Through a Potential If-Then Commitment in Detail
(04:51) The Risk: Proliferation of Expert-Level Advice on Weapons Production
(05:26) The Challenge of Sufficient Risk Mitigations
(07:09) The Example If-Then Commitment
(13:09) Potential Benefits of This If-Then Commitment
(16:05) Operationalizing the Tripwire
(22:36) Operationalizing the “Then” Part of the If-Then Commitment
(24:43) Enforcement and Accountability
(27:02) Other Possible Tripwires for If-Then Commitments
(27:38) Applying this Framework to Open Model Releases
(28:59) Limitations and Common Concerns About If-Then Commitments
(35:24) The Path to Robust, Enforceable If-Then Commitments
(38:27) Appendix: Elaborating on the Risk of AI-Assisted Chemical and Biological Weapons Development
(40:33) Acknowledgements
The original text contained 46 footnotes, which have been omitted from this narration.
---
Narrated by TYPE III AUDIO.