Future of Life Institute Podcast

Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI



It's well established in the AI alignment literature what happens when an AI system learns or is given an objective that doesn't fully capture what we want. Human preferences and values are inevitably left out, and the AI, likely being a powerful optimizer, will take advantage of the dimensions of freedom afforded by the misspecified objective and set them to extreme values. This may allow for better optimization on the goals in the objective function, but can have catastrophic consequences for the human preferences and values the system fails to consider. Is it possible for misalignment to also occur between the model being trained and the objective function used for training? The answer appears to be yes. Evan Hubinger from the Machine Intelligence Research Institute joins us on this episode of the AI Alignment Podcast to discuss how to ensure alignment between a model being trained and the objective function used to train it, and to evaluate three proposals for building safe advanced AI.
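To make that first failure mode concrete, here is a toy sketch of a misspecified objective (our illustration, not something from the episode; all names and coefficients are invented). The "true" objective values task progress but penalizes a side effect; the proxy objective handed to the optimizer omits that penalty, so gradient ascent duly pushes the unconstrained dimension to an extreme value:

```python
# Toy illustration of a misspecified (outer) objective -- names and
# coefficients are invented for this sketch, not from the episode.
# The true objective values task progress x but penalizes a side
# effect s; the proxy omits the penalty and even mildly rewards s,
# so gradient ascent pushes s to an extreme value.

def proxy_objective(x, s):
    # What the optimizer is told to maximize.
    return -(x - 3.0) ** 2 + 0.5 * s

def true_objective(x, s):
    # What we actually wanted: same task term, but side effects hurt.
    return -(x - 3.0) ** 2 - 2.0 * s ** 2

def proxy_gradient(x, s):
    # Analytic gradient of the proxy objective.
    return -2.0 * (x - 3.0), 0.5

x, s, lr = 0.0, 0.0, 0.1
for _ in range(400):
    gx, gs = proxy_gradient(x, s)
    x, s = x + lr * gx, s + lr * gs

print(f"x = {x:.2f}, s = {s:.2f}")                    # x -> 3.0, s keeps growing
print(f"proxy reward: {proxy_objective(x, s):.2f}")   # looks better and better
print(f"true  reward: {true_objective(x, s):.2f}")    # increasingly catastrophic
```

The proxy score climbs while the true score collapses; the gap is exactly the "dimensions of freedom" described above. Inner alignment, the episode's other focus, asks a further question: even if the proxy were specified perfectly, would the trained model actually end up optimizing it?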
Topics discussed in this episode include:
- Inner and outer alignment
- How and why inner alignment can fail
- Training competitiveness and performance competitiveness
- Evaluating imitative amplification, AI safety via debate, and microscope AI
You can find the page for this podcast here: https://futureoflife.org/2020/07/01/evan-hubinger-on-inner-alignment-outer-alignment-and-proposals-for-building-safe-advanced-ai/
Timestamps: 
0:00 Intro 
2:07 How Evan got into AI alignment research
4:42 What is AI alignment?
7:30 How Evan approaches AI alignment
13:05 What are inner alignment and outer alignment?
24:23 Gradient descent
36:30 Testing for inner alignment
38:38 Wrapping up on outer alignment
44:24 Why is inner alignment a priority?
45:30 How inner alignment fails
01:11:12 Training competitiveness and performance competitiveness
01:16:17 Evaluating proposals for building safe advanced AI via inner and outer alignment, as well as training and performance competitiveness
01:17:30 Imitative amplification
01:23:00 AI safety via debate
01:26:32 Microscope AI
01:30:19 AGI timelines and humanity's prospects for succeeding in AI alignment
01:34:45 Where to follow Evan and find more of his work
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.


