Future of Life Institute Podcast

Andrew Critch on AI Research Considerations for Human Existential Safety

09.16.2020 - By Future of Life Institute

In this episode of the AI Alignment Podcast, Andrew Critch joins us to discuss a recent paper he co-authored with David Krueger, AI Research Considerations for Human Existential Safety (ARCHES). We explore a wide range of issues, from how the mainstream computer science community views AI existential risk, to the need for more precise terminology in the field of AI existential safety, to the risks posed by what Andrew calls prepotent AI systems. Crucially, we also discuss what Andrew sees as the most likely source of existential risk: externalities from multiple AI systems and AI stakeholders competing in a context where alignment and AI existential safety are not naturally covered by industry incentives.

Topics discussed in this episode include:

- The mainstream computer science view of AI existential risk

- Distinguishing AI safety from AI existential safety 

- The need for more precise terminology in the field of AI existential safety and alignment

- The concept of prepotent AI systems and the problem of delegation 

- Which alignment problems get solved by commercial incentives and which don’t

- The threat of diffusion of responsibility for AI existential safety considerations not covered by commercial incentives

- Prepotent AI risk types that lead to unsurvivability for humanity

You can find the page for this podcast here: https://futureoflife.org/2020/09/15/andrew-critch-on-ai-research-considerations-for-human-existential-safety/

Timestamps: 

0:00 Intro

2:53 Why Andrew wrote ARCHES and what it’s about

6:46 The perspective of the mainstream CS community on AI existential risk

13:03 ARCHES in relation to AI existential risk literature

16:05 The distinction between safety and existential safety 

24:27 Why existential risk is most likely to arise through externalities

29:03 The relationship between existential safety and safety for current systems 

33:17 Research areas that may not be solved by natural commercial incentives

51:40 What’s an AI system and an AI technology? 

53:42 Prepotent AI 

59:41 Misaligned prepotent AI technology 

01:05:13 Human frailty 

01:07:37 The importance of delegation 

01:14:11 Single-single, single-multi, multi-single, and multi-multi 

01:15:26 Control, instruction, and comprehension 

01:20:40 The multiplicity thesis 

01:22:16 Risk types from prepotent AI that lead to human unsurvivability 

01:34:06 Flow-through effects 

01:41:00 Multi-stakeholder objectives 

01:49:08 Final words from Andrew

This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
