Future of Life Institute Podcast

Roman Yampolskiy on the Uncontrollability, Incomprehensibility, and Unexplainability of AI



Roman Yampolskiy, Professor of Computer Science at the University of Louisville, joins us to discuss whether we can control, comprehend, and explain AI systems, and how this constrains the project of AI safety.
Topics discussed in this episode include:
- Roman’s results on the unexplainability, incomprehensibility, and uncontrollability of AI
- The relationship between AI safety, control, and alignment
- Virtual worlds as a proposal for solving multi-multi alignment
- AI security
You can find the page for this podcast here: https://futureoflife.org/2021/03/19/roman-yampolskiy-on-the-uncontrollability-incomprehensibility-and-unexplainability-of-ai/
You can find FLI's three new policy focused job postings here: https://futureoflife.org/job-postings/
Have any feedback about the podcast? You can share your thoughts here: www.surveymonkey.com/r/DRBFZCT
Timestamps: 
0:00 Intro 
2:35 Roman’s primary research interests 
4:09 How theoretical proofs help AI safety research 
6:23 How impossibility results constrain computer science systems
10:18 The inability to tell if arbitrary code is friendly or unfriendly 
12:06 Impossibility results clarify what we can do 
14:19 Roman’s results on unexplainability and incomprehensibility 
22:34 Focusing on comprehensibility 
26:17 Roman’s results on uncontrollability 
28:33 Alignment as a subset of safety and control 
30:48 The relationship between unexplainability, incomprehensibility, and uncontrollability with each other and with AI alignment 
33:40 What does it mean to solve AI safety? 
34:19 What do the impossibility results really mean? 
37:07 Virtual worlds and AI alignment 
49:55 AI security and malevolent agents 
53:00 Air gapping, boxing, and other security methods 
58:43 Some examples of historical failures of AI systems and what we can learn from them 
1:01:20 Clarifying impossibility results
1:06:55 Examples of systems failing and what these demonstrate about AI 
1:08:20 Are oracles a valid approach to AI safety? 
1:10:30 Roman’s final thoughts
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Future of Life Institute Podcast, by the Future of Life Institute

Rating: 4.8 (107 ratings)


More shows like Future of Life Institute Podcast

- Making Sense with Sam Harris, by Sam Harris (26,376 listeners)
- Conversations with Tyler, by Mercatus Center at George Mason University (2,429 listeners)
- a16z Podcast, by Andreessen Horowitz (1,087 listeners)
- Robert Wright's Nonzero, by Nonzero (589 listeners)
- Azeem Azhar's Exponential View, by Azeem Azhar (608 listeners)
- ChinaTalk, by Jordan Schneider (288 listeners)
- Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas, by Sean Carroll | Wondery (4,155 listeners)
- Your Undivided Attention, by The Center for Humane Technology, Tristan Harris, Daniel Barcay and Aza Raskin (1,553 listeners)
- Dwarkesh Podcast, by Dwarkesh Patel (488 listeners)
- Moonshots with Peter Diamandis, by PHD Ventures (531 listeners)
- No Priors: Artificial Intelligence | Technology | Startups, by Conviction (131 listeners)
- Possible, by Reid Hoffman (120 listeners)
- The AI Daily Brief: Artificial Intelligence News and Analysis, by Nathaniel Whittemore (556 listeners)
- "Econ 102" with Noah Smith and Erik Torenberg, by Turpentine (151 listeners)
- Complex Systems with Patrick McKenzie (patio11), by Patrick McKenzie (131 listeners)