Machine Learning Street Talk (MLST)

047 Interpretable Machine Learning - Christoph Molnar



Christoph Molnar is one of the leading figures in the space of interpretable machine learning (IML). In 2018 he released the first version of his excellent online book, Interpretable Machine Learning. Interpretability is often a deciding factor when a machine learning (ML) model is used in a product, a decision process, or in research. Interpretability methods can be used to discover knowledge, to debug or justify a model and its predictions, to control and improve the model, to reason about potential bias in models, and to increase the social acceptance of models. But interpretability methods can also be quite esoteric; they add an additional layer of complexity and potential pitfalls, and they require expert knowledge to understand. Is it even possible to understand complex models, or even humans for that matter, in any meaningful way?


Introduction to IML [00:00:00]

Show Kickoff [00:13:28]

What makes a good explanation? [00:15:51]

Quantification of how good an explanation is [00:19:59]

Knowledge of the pitfalls of IML [00:22:14]

Are linear models even interpretable? [00:24:26]

Complex Math models to explain Complex Math models? [00:27:04]

Saliency maps are glorified edge detectors [00:28:35]

Challenge on IML -- feature dependence [00:36:46]

Don't leap to using a complex model! Surrogate models can be too dumb [00:40:52]

On airplane pilots. Seeking to understand vs testing [00:44:09]

IML Could help us make better models or lead a better life [00:51:53]

Lack of statistical rigor and quantification of uncertainty [00:55:35]

On Causality [01:01:09]

Broadening out the discussion to the process or institutional level [01:08:53]

No focus on fairness / ethics? [01:11:44]

Is it possible to condition ML model training on IML metrics? [01:15:27]

Where is IML going? Some of the esoterica of the IML methods [01:18:35]

You can't compress information without common knowledge, the latter becomes the bottleneck [01:23:25]

IML methods used non-interactively? Making IML an engineering discipline [01:31:10]

Tim Postscript -- on the lack of effective corporate operating models for IML, security, engineering and ethics [01:36:34]


Explanation in Artificial Intelligence: Insights from the Social Sciences (Tim Miller 2018)

https://arxiv.org/pdf/1706.07269.pdf


Seven Myths in Machine Learning Research (Chang 2019)

Myth 7: Saliency maps are robust ways to interpret neural networks

https://arxiv.org/pdf/1902.06789.pdf


Sanity Checks for Saliency Maps (Adebayo 2020)

https://arxiv.org/pdf/1810.03292.pdf


Interpretable Machine Learning: A Guide for Making Black Box Models Explainable.

https://christophm.github.io/interpretable-ml-book/


Christoph Molnar:

https://www.linkedin.com/in/christoph-molnar-63777189/

https://machine-master.blogspot.com/

https://twitter.com/ChristophMolnar


Please show your appreciation and buy Christoph's book here:

https://www.lulu.com/shop/christoph-molnar/interpretable-machine-learning/paperback/product-24449081.html?page=1&pageSize=4


Panel: 

Connor Tann https://www.linkedin.com/in/connor-tann-a92906a1/

Dr. Tim Scarfe 

Dr. Keith Duggar


Video version:

https://youtu.be/0LIACHcxpHU



