Future of Life Institute Podcast

Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI

The AI alignment literature has long established what happens when an AI system learns or is given an objective that doesn't fully capture what we want. Human preferences and values are inevitably left out, and the AI, likely being a powerful optimizer, will take advantage of the degrees of freedom afforded by the misspecified objective and set them to extreme values. This may allow for better optimization on the goals specified in the objective function, but it can have catastrophic consequences for the human preferences and values the system fails to consider. Can misalignment also occur between the model being trained and the objective function used for training? The answer looks to be yes. Evan Hubinger of the Machine Intelligence Research Institute joins us on this episode of the AI Alignment Podcast to discuss how to ensure alignment between a model being trained and the objective function used to train it, and to evaluate three proposals for building safe advanced AI.
Topics discussed in this episode include:
- Inner and outer alignment
- How and why inner alignment can fail
- Training competitiveness and performance competitiveness
- Evaluating imitative amplification, AI safety via debate, and microscope AI
You can find the page for this podcast here: https://futureoflife.org/2020/07/01/evan-hubinger-on-inner-alignment-outer-alignment-and-proposals-for-building-safe-advanced-ai/
Timestamps:
0:00 Intro 
2:07 How Evan got into AI alignment research
4:42 What is AI alignment?
7:30 How Evan approaches AI alignment
13:05 What are inner alignment and outer alignment?
24:23 Gradient descent
36:30 Testing for inner alignment
38:38 Wrapping up on outer alignment
44:24 Why is inner alignment a priority?
45:30 How inner alignment fails
01:11:12 Training competitiveness and performance competitiveness
01:16:17 Evaluating proposals for building safe and advanced AI via inner and outer alignment, as well as training and performance competitiveness
01:17:30 Imitative amplification
01:23:00 AI safety via debate
01:26:32 Microscope AI
01:30:19 AGI timelines and humanity's prospects for succeeding in AI alignment
01:34:45 Where to follow Evan and find more of his work
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.
Future of Life Institute Podcast, by the Future of Life Institute

Rating: 4.8 (107 ratings)
