Future of Life Institute Podcast

Andrew Critch on AI Research Considerations for Human Existential Safety



In this episode of the AI Alignment Podcast, Andrew Critch joins us to discuss a recent paper he co-authored with David Krueger, AI Research Considerations for Human Existential Safety (ARCHES). We explore a wide range of issues, from how the mainstream computer science community views AI existential risk, to the need for more precise terminology in the field of AI existential safety, to the risks of what Andrew calls prepotent AI systems. Crucially, we also discuss what Andrew sees as the most likely source of existential risk: the possibility of externalities from multiple AIs and AI stakeholders competing in a context where alignment and AI existential safety issues are not naturally covered by industry incentives.

Topics discussed in this episode include:
- The mainstream computer science view of AI existential risk
- Distinguishing AI safety from AI existential safety 
- The need for more precise terminology in the field of AI existential safety and alignment
- The concept of prepotent AI systems and the problem of delegation 
- Which alignment problems get solved by commercial incentives and which don’t
- How diffusion of responsibility threatens AI existential safety considerations not covered by commercial incentives
- Prepotent AI risk types that lead to unsurvivability for humanity
You can find the page for this podcast here: https://futureoflife.org/2020/09/15/andrew-critch-on-ai-research-considerations-for-human-existential-safety/
Timestamps: 
0:00 Intro
2:53 Why Andrew wrote ARCHES and what it’s about
6:46 The perspective of the mainstream CS community on AI existential risk
13:03 ARCHES in relation to AI existential risk literature
16:05 The distinction between safety and existential safety 
24:27 Existential risk is most likely to obtain through externalities 
29:03 The relationship between existential safety and safety for current systems 
33:17 Research areas that may not be solved by natural commercial incentives
51:40 What’s an AI system and an AI technology? 
53:42 Prepotent AI 
59:41 Misaligned prepotent AI technology 
01:05:13 Human frailty 
01:07:37 The importance of delegation 
01:14:11 Single-single, single-multi, multi-single, and multi-multi 
01:15:26 Control, instruction, and comprehension 
01:20:40 The multiplicity thesis 
01:22:16 Risk types from prepotent AI that lead to human unsurvivability 
01:34:06 Flow-through effects 
01:41:00 Multi-stakeholder objectives 
01:49:08 Final words from Andrew
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.