IAPS Podcast

Deployment corrections: An incident response framework for frontier AI models



A comprehensive approach to addressing catastrophic risks from AI models should cover the full model lifecycle. This paper explores contingency plans for cases where pre-deployment risk management falls short: where either very dangerous models are deployed, or deployed models become very dangerous.

Informed by incident response practices from industries including cybersecurity, we describe a toolkit of deployment corrections that AI developers can use to respond to dangerous capabilities, behaviors, or use cases of AI models that develop or are detected after deployment. We also provide a framework for AI developers to prepare and implement this toolkit.

We conclude by recommending that frontier AI developers should (1) maintain control over model access, (2) establish or grow dedicated teams to design and maintain processes for deployment corrections, including incident response plans, and (3) establish these deployment corrections as allowable actions in agreements with downstream users. We also recommend that frontier AI developers, standard-setting organizations, and regulators collaborate to define a standardized, industry-wide approach to the use of deployment corrections in incident response.

Caveat: This work applies to frontier AI models that are made available through interfaces (e.g., APIs) that provide the AI developer or another upstream party a means of maintaining control over access (e.g., GPT-4 or Claude). It does not apply to management of catastrophic risk from open-source models (e.g., BLOOM or Llama-2), for which the restrictions we discuss are largely unenforceable.
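To make the caveat concrete: when access flows through an API chokepoint, the developer can dial restrictions up or down after release. The sketch below is a minimal, hypothetical illustration of that idea, not code from the paper; the class, access levels, and response strings are all invented for this example. It models three deployment states roughly matching the paper's range of corrections (normal operation, partial restrictions, full shutdown), applied either globally or per user.

```python
from enum import Enum


class AccessLevel(Enum):
    """Hypothetical taxonomy of deployment states for an API-served model."""
    FULL = "full"              # normal operation
    RESTRICTED = "restricted"  # partial restrictions (e.g., feature gating)
    SUSPENDED = "suspended"    # full market removal / emergency shutdown


class ModelAccessGate:
    """Toy gatekeeper between downstream users and a served model.

    Because every request passes through this single chokepoint, the
    developer retains the ability to apply deployment corrections
    (restrict or revoke access) after the model is deployed.
    """

    # Strictness ordering used to combine global and per-user settings.
    _ORDER = [AccessLevel.FULL, AccessLevel.RESTRICTED, AccessLevel.SUSPENDED]

    def __init__(self):
        self.global_level = AccessLevel.FULL
        self.user_overrides: dict[str, AccessLevel] = {}

    def set_global_level(self, level: AccessLevel) -> None:
        """Apply a correction to all users at once (e.g., emergency shutdown)."""
        self.global_level = level

    def restrict_user(self, user_id: str, level: AccessLevel) -> None:
        """Apply a targeted correction to a single user or use case."""
        self.user_overrides[user_id] = level

    def effective_level(self, user_id: str) -> AccessLevel:
        """The stricter of the global setting and any per-user override wins."""
        user_level = self.user_overrides.get(user_id, AccessLevel.FULL)
        return max(user_level, self.global_level, key=self._ORDER.index)

    def handle_request(self, user_id: str, prompt: str) -> str:
        level = self.effective_level(user_id)
        if level is AccessLevel.SUSPENDED:
            return "ERROR: model access suspended"
        if level is AccessLevel.RESTRICTED:
            return "RESTRICTED: request served with reduced capabilities"
        return "OK: full response"
```

For an open-weights model, no such chokepoint exists once the weights are distributed, which is why the paper scopes these corrections to interface-mediated deployments.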

---

Outline:

(01:55) Executive Summary

(09:29) 1. Challenge: Some catastrophic risks may emerge post-deployment

(12:18) Case 1: Partial restrictions in response to user-discovered performance boost and misuse

(15:07) 2. Proposed intervention: Deployment corrections

(15:58) 2.1 Range of deployment corrections

(18:36) 2.2 Additional considerations on emergency shutdown

(20:19) Case 2: Full market removal due to improved prompt injection techniques

(23:48) 3. Deployment correction framework

(26:24) 3.0 Managing this process

(29:24) 3.1 Preparation

(39:22) 3.2 Monitoring and analysis

(44:55) 3.3 Execution

(49:15) 3.4 Recovery and follow-up

(53:56) Case 3: Emergency shutdown in response to hidden compute-boosting behavior by model

(58:05) 4. Challenges and mitigations to deployment corrections

(58:29) 4.1 Distinctive challenges to incident response for frontier AI

(58:50) 4.1.1 Threat identification

(01:01:46) 4.1.2 Monitoring

(01:06:35) 4.1.3 Incident response

(01:08:59) 4.2 Disincentives and shortfalls of deployment corrections

(01:09:23) 4.2.1 Potential harms to the AI company

(01:10:33) 4.2.2 Coordination problems

(01:13:02) 5. High-level recommendations

(01:15:40) 6. Future research questions

(01:20:23) 8. Acknowledgements

(01:21:27) Appendix One. Compute as a complementary node of deployment oversight

(01:22:24) Compute provider toolkit

(01:25:31) Cloud providers and open-source models

---

First published:

September 30th, 2023

Source:

https://www.iaps.ai/research/deployment-corrections


IAPS Podcast, by the Institute for AI Policy and Strategy