Future of Life Institute Podcast

AIAP: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah


Just a year ago we released a two-part episode titled An Overview of Technical AI Alignment with Rohin Shah. That conversation detailed the views of central AI alignment research organizations and many of the ongoing research efforts for designing safe and aligned systems. Much has happened in the past twelve months, so we've invited Rohin back, along with fellow researcher Buck Shlegeris, for a follow-up conversation. Today's episode focuses especially on the state of current research efforts for beneficial AI, as well as Buck's and Rohin's thoughts about the varying approaches and the difficulties we still face. This podcast thus serves as a non-exhaustive overview of how the field of AI alignment has evolved and how thinking about it is progressing.
Topics discussed in this episode include:
- Rohin's and Buck's optimism and pessimism about different approaches to aligned AI
- Traditional arguments for AI as an x-risk
- Modeling agents as expected utility maximizers
- Ambitious value learning and specification learning/narrow value learning
- Agency and optimization
- Robustness
- Scaling to superhuman abilities
- Universality
- Impact regularization
- Causal models, oracles, and decision theory
- Discontinuous and continuous takeoff scenarios
- Probability of AI-induced existential risk
- Timelines for AGI
- Information hazards
You can find the page for this podcast here: https://futureoflife.org/2020/04/15/an-overview-of-technical-ai-alignment-in-2018-and-2019-with-buck-shlegeris-and-rohin-shah/
Timestamps: 
0:00 Intro
3:48 Traditional arguments for AI as an existential risk
5:40 What is AI alignment?
7:30 Back to a basic analysis of AI as an existential risk
18:25 Can we model agents in ways other than as expected utility maximizers?
19:34 Is it skillful to try to model human preferences as a utility function?
27:09 Suggestions for alternatives to modeling humans with utility functions
40:30 Agency and optimization
45:55 Embedded decision theory
48:30 More on value learning
49:58 What is robustness and why does it matter?
01:13:00 Scaling to superhuman abilities
01:26:13 Universality
01:33:40 Impact regularization
01:40:34 Causal models, oracles, and decision theory
01:43:05 Forecasting as well as discontinuous and continuous takeoff scenarios
01:53:18 What is the probability of AI-induced existential risk?
02:00:53 Likelihood of continuous and discontinuous takeoff scenarios
02:08:08 What would you both do if you had more power and resources?
02:12:38 AI timelines
02:14:00 Information hazards
02:19:19 Where to follow Buck and Rohin and learn more
This podcast is possible because of the support of listeners like you. If you found this conversation to be meaningful or valuable, consider supporting it directly by donating at futureoflife.org/donate. Contributions like yours make these conversations possible.

Future of Life Institute Podcast, by Future of Life Institute

4.8 (103 ratings)


More shows like Future of Life Institute Podcast:
- EconTalk by Russ Roberts (4,230 listeners)
- Conversations with Tyler by Mercatus Center at George Mason University (2,403 listeners)
- Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas by Sean Carroll | Wondery (4,115 listeners)
- Lex Fridman Podcast by Lex Fridman (12,542 listeners)
- Your Undivided Attention by Tristan Harris and Aza Raskin, The Center for Humane Technology (1,442 listeners)
- The Origins Podcast with Lawrence Krauss by Lawrence M. Krauss (490 listeners)
- Google DeepMind: The Podcast by Hannah Fry (203 listeners)
- COMPLEXITY by Santa Fe Institute (281 listeners)
- Machine Learning Street Talk (MLST) by Machine Learning Street Talk (MLST) (88 listeners)
- Dwarkesh Podcast by Dwarkesh Patel (352 listeners)
- Clearer Thinking with Spencer Greenberg by Spencer Greenberg (132 listeners)
- Latent Space: The AI Engineer Podcast by swyx + Alessio (63 listeners)
- "Upstream" with Erik Torenberg by Erik Torenberg (62 listeners)
- The AI Daily Brief (Formerly The AI Breakdown): Artificial Intelligence News and Analysis by Nathaniel Whittemore (423 listeners)
- Complex Systems with Patrick McKenzie (patio11) by Patrick McKenzie (116 listeners)