Future of Life Institute Podcast

AIAP: On DeepMind, AI Safety, and Recursive Reward Modeling with Jan Leike



Jan Leike is a senior research scientist who leads the agent alignment team at DeepMind. His team is one of three within DeepMind's technical AGI group; each team focuses on different aspects of ensuring advanced AI systems are aligned and beneficial. Jan's journey in the field of AI has taken him from a PhD on a theoretical reinforcement learning agent called AIXI to empirical AI safety research focused on recursive reward modeling. This conversation explores his movement from theoretical to empirical AI safety research: why empirical safety research is important, and how it has led him to his work on recursive reward modeling. We also discuss research directions he's optimistic will lead to safely scalable systems, more facets of his own thinking, and other work being done at DeepMind.
Topics discussed in this episode include:
- Theoretical and empirical AI safety research
- Jan's and DeepMind's approaches to AI safety
- Jan's work and thoughts on recursive reward modeling
- AI safety benchmarking at DeepMind
- The potential modularity of AGI
- Comments on the cultural and intellectual differences between the AI safety and mainstream AI communities
- Joining the DeepMind safety team
You can find the page and transcript for this podcast here: https://futureoflife.org/2019/12/16/ai-alignment-podcast-on-deepmind-ai-safety-and-recursive-reward-modeling-with-jan-leike/
Timestamps: 
0:00 Intro
2:15 Jan's intellectual journey in computer science to AI safety
7:35 Transitioning from theoretical to empirical research
11:25 Jan's and DeepMind's approach to AI safety
17:23 Recursive reward modeling
29:26 Experimenting with recursive reward modeling
32:42 How recursive reward modeling serves AI safety
34:55 Pessimism about recursive reward modeling
38:35 How this research direction fits in the safety landscape
42:10 Can deep reinforcement learning get us to AGI?
42:50 How modular will AGI be?
44:25 Efforts at DeepMind for AI safety benchmarking
49:30 Differences between the AI safety and mainstream AI communities
55:15 Most exciting piece of empirical safety work in the next 5 years
56:35 Joining the DeepMind safety team

Future of Life Institute Podcast by Future of Life Institute

4.8 (107 ratings)


More shows like Future of Life Institute Podcast

- Making Sense with Sam Harris by Sam Harris (26,377 listeners)
- Conversations with Tyler by Mercatus Center at George Mason University (2,430 listeners)
- a16z Podcast by Andreessen Horowitz (1,083 listeners)
- Robert Wright's Nonzero by Nonzero (589 listeners)
- Azeem Azhar's Exponential View by Azeem Azhar (608 listeners)
- ChinaTalk by Jordan Schneider (289 listeners)
- Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas by Sean Carroll | Wondery (4,151 listeners)
- Your Undivided Attention by The Center for Humane Technology, Tristan Harris, Daniel Barcay and Aza Raskin (1,556 listeners)
- Dwarkesh Podcast by Dwarkesh Patel (489 listeners)
- Moonshots with Peter Diamandis by PHD Ventures (531 listeners)
- No Priors: Artificial Intelligence | Technology | Startups by Conviction (131 listeners)
- Possible by Reid Hoffman (120 listeners)
- The AI Daily Brief: Artificial Intelligence News and Analysis by Nathaniel Whittemore (557 listeners)
- "Econ 102" with Noah Smith and Erik Torenberg by Turpentine (151 listeners)
- Complex Systems with Patrick McKenzie (patio11) by Patrick McKenzie (131 listeners)