AXRP - the AI X-risk Research Podcast

11 - Attainable Utility and Power with Alex Turner


Many scary stories about AI involve an AI system deceiving and subjugating humans in order to gain the ability to achieve its goals without us stopping it. This episode's guest, Alex Turner, will tell us about his research analyzing the notions of "attainable utility" and "power" that underlie these stories, so that we can better evaluate how likely they are and how to prevent them.
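
For the mathematically inclined, here is a rough sketch of the first idea, in simplified notation (see the "Conservative Agency" paper linked below for the real formalism). An attainable-utility-preserving agent optimizes its primary reward R minus a penalty for changing how well it could pursue a set of auxiliary reward functions R_1, ..., R_N:

R_\text{AUP}(s, a) = R(s, a) - \frac{\lambda}{N} \sum_{i=1}^{N} \left| Q^*_{R_i}(s, a) - Q^*_{R_i}(s, \varnothing) \right|

where \varnothing is a "do nothing" action and \lambda trades off task performance against low impact. (The exact scaling term varies between versions of the paper; this is the schematic form, not a definitive statement of it.)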

 

Topics we discuss:

 - Side effects minimization

 - Attainable Utility Preservation (AUP)

 - AUP and alignment

 - Power-seeking

 - Power-seeking and alignment

 - Future work and about Alex

 

The transcript: axrp.net/episode/2021/09/25/episode-11-attainable-utility-power-alex-turner.html

 

Alex on the AI Alignment Forum: alignmentforum.org/users/turntrout

Alex's Google Scholar page: scholar.google.com/citations?user=thAHiVcAAAAJ&hl=en&oi=ao

 

Conservative Agency via Attainable Utility Preservation: arxiv.org/abs/1902.09725

Optimal Policies Tend to Seek Power: arxiv.org/abs/1912.01683
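
And the headline definition from "Optimal Policies Tend to Seek Power", again in simplified notation: the power of a state s is, roughly, the normalized average optimal value an agent could attain starting from s, over a distribution \mathcal{D} of reward functions:

\mathrm{POWER}_{\mathcal{D}}(s, \gamma) = \frac{1 - \gamma}{\gamma} \, \mathbb{E}_{R \sim \mathcal{D}} \left[ V^*_R(s, \gamma) - R(s) \right]

Intuitively, states that keep more options open (staying operational, holding resources) have higher power, and the paper argues that for most reward functions, optimal policies tend to steer toward such states.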

 

Other works discussed:

 - Avoiding Side Effects by Considering Future Tasks: arxiv.org/abs/2010.07877

 - The "Reframing Impact" Sequence: alignmentforum.org/s/7CdoznhJaLEKHwvJW

 - The "Risks from Learned Optimization" Sequence: alignmentforum.org/s/7CdoznhJaLEKHwvJW

 - Concrete Approval-Directed Agents: ai-alignment.com/concrete-approval-directed-agents-89e247df7f1b

 - Seeking Power is Convergently Instrumental in a Broad Class of Environments: alignmentforum.org/s/fSMbebQyR4wheRrvk/p/hzeLSQ9nwDkPc4KNt

 - Formalizing Convergent Instrumental Goals: intelligence.org/files/FormalizingConvergentGoals.pdf

 - The More Power at Stake, the Stronger Instrumental Convergence Gets for Optimal Policies: alignmentforum.org/posts/Yc5QSSZCQ9qdyxZF6/the-more-power-at-stake-the-stronger-instrumental

 - Problem Relaxation as a Tactic: alignmentforum.org/posts/JcpwEKbmNHdwhpq5n/problem-relaxation-as-a-tactic

 - How I do Research: lesswrong.com/posts/e3Db4w52hz3NSyYqt/how-i-do-research

 - Math that Clicks: Look for Two-way Correspondences: lesswrong.com/posts/Lotih2o2pkR2aeusW/math-that-clicks-look-for-two-way-correspondences

 - Testing the Natural Abstraction Hypothesis: alignmentforum.org/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro


AXRP - the AI X-risk Research Podcast, by Daniel Filan
