Future of Life Institute Podcast

Iason Gabriel on Foundational Philosophical Questions in AI Alignment



In the contemporary practice of many scientific disciplines, questions of values, norms, and political thought rarely enter the picture explicitly. In AI alignment, however, the normative and the technical come together in an important and inseparable way. How do we decide on an appropriate procedure for aligning AI systems with human values when there is disagreement over what constitutes a moral alignment procedure? Choosing any procedure or set of values with which to align AI carries its own normative and metaethical commitments, which will require close examination and reflection if we hope to succeed at alignment. Iason Gabriel, Senior Research Scientist at DeepMind, joins us on this episode of the AI Alignment Podcast to explore the interdependence of the normative and the technical in AI alignment and to discuss his recent paper, Artificial Intelligence, Values and Alignment.
 Topics discussed in this episode include:
-How moral philosophy and political theory are deeply related to AI alignment
-The problem of dealing with a plurality of preferences and philosophical views in AI alignment
-How the is-ought problem and metaethics fit into alignment
-What we should be aligning AI systems to
-The importance of democratic solutions to questions of AI alignment 
-The long reflection
You can find the page for this podcast here: https://futureoflife.org/2020/09/03/iason-gabriel-on-foundational-philosophical-questions-in-ai-alignment/
Timestamps: 
0:00 Intro
2:10 Why Iason wrote Artificial Intelligence, Values and Alignment
3:12 What AI alignment is
6:07 The technical and normative aspects of AI alignment
9:11 The normative being dependent on the technical
14:30 Coming up with an appropriate alignment procedure given the is-ought problem
31:15 What systems are subject to an alignment procedure?
39:55 What is it that we're trying to align AI systems to?
01:02:30 Single-agent and multi-agent alignment scenarios
01:27:00 The procedure for choosing which evaluative model(s) will be used to judge different alignment proposals
01:30:28 The long reflection
01:53:55 Where to follow and contact Iason
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Rating: 4.8 (107 ratings)


More shows like Future of Life Institute Podcast:

- Making Sense with Sam Harris by Sam Harris (26,377 listeners)
- Conversations with Tyler by Mercatus Center at George Mason University (2,430 listeners)
- a16z Podcast by Andreessen Horowitz (1,083 listeners)
- Robert Wright's Nonzero by Nonzero (589 listeners)
- Azeem Azhar's Exponential View by Azeem Azhar (608 listeners)
- ChinaTalk by Jordan Schneider (289 listeners)
- Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas by Sean Carroll | Wondery (4,151 listeners)
- Your Undivided Attention by The Center for Humane Technology, Tristan Harris, Daniel Barcay and Aza Raskin (1,556 listeners)
- Dwarkesh Podcast by Dwarkesh Patel (489 listeners)
- Moonshots with Peter Diamandis by PHD Ventures (531 listeners)
- No Priors: Artificial Intelligence | Technology | Startups by Conviction (131 listeners)
- Possible by Reid Hoffman (120 listeners)
- The AI Daily Brief: Artificial Intelligence News and Analysis by Nathaniel Whittemore (557 listeners)
- "Econ 102" with Noah Smith and Erik Torenberg by Turpentine (151 listeners)
- Complex Systems with Patrick McKenzie (patio11) by Patrick McKenzie (131 listeners)