AXRP - the AI X-risk Research Podcast

21 - Interpretability for Engineers with Stephen Casper


Lots of people in the field of machine learning study 'interpretability', developing tools that they say give us useful information about neural networks. But how do we know if meaningful progress is actually being made? What should we want out of these tools? In this episode, I speak to Stephen Casper about these questions, as well as about a benchmark he's co-developed to evaluate whether interpretability tools can find 'Trojan horses' hidden inside neural nets.

Patreon: patreon.com/axrpodcast

Ko-fi: ko-fi.com/axrpodcast

 

Topics we discuss, and timestamps:

 - 00:00:42 - Interpretability for engineers

   - 00:00:42 - Why interpretability?

   - 00:12:55 - Adversaries and interpretability

   - 00:24:30 - Scaling interpretability

   - 00:42:29 - Critiques of the AI safety interpretability community

   - 00:56:10 - Deceptive alignment and interpretability

 - 01:09:48 - Benchmarking Interpretability Tools (for Deep Neural Networks) (Using Trojan Discovery)

   - 01:10:40 - Why Trojans?

   - 01:14:53 - Which interpretability tools?

   - 01:28:40 - Trojan generation

   - 01:38:13 - Evaluation

 - 01:46:07 - Interpretability for shaping policy

 - 01:53:55 - Following Casper's work

 

The transcript: axrp.net/episode/2023/05/02/episode-21-interpretability-for-engineers-stephen-casper.html

 

Links for Casper:

 - Personal website: stephencasper.com/

 - Twitter: twitter.com/StephenLCasper

 - Email: scasper [at] mit [dot] edu

 

Research we discuss:

 - The Engineer's Interpretability Sequence: alignmentforum.org/s/a6ne2ve5uturEEQK7

 - Benchmarking Interpretability Tools for Deep Neural Networks: arxiv.org/abs/2302.10894

 - Adversarial Policies Beat Superhuman Go AIs: goattack.far.ai/

 - Adversarial Examples Are Not Bugs, They Are Features: arxiv.org/abs/1905.02175

 - Planting Undetectable Backdoors in Machine Learning Models: arxiv.org/abs/2204.06974

 - Softmax Linear Units: transformer-circuits.pub/2022/solu/index.html

 - Red-Teaming the Stable Diffusion Safety Filter: arxiv.org/abs/2210.04610

 

Episode art by Hamish Doodles: hamishdoodles.com

AXRP - the AI X-risk Research Podcast, by Daniel Filan
