Gradient Dissent: Conversations on AI

Zack Chase Lipton — The Medical Machine Learning Landscape

09.17.2020 - By Lukas Biewald


How Zack went from being a musician to a professor, how medical applications of machine learning are developing, and the challenges of counteracting bias in real-world applications.

Zachary Chase Lipton is an assistant professor of Operations Research and Machine Learning at Carnegie Mellon University.

His research spans core machine learning methods and their social impact and addresses diverse application areas, including clinical medicine and natural language processing. Current research focuses include robustness under distribution shift, breast cancer screening, the effective and equitable allocation of organs, and the intersection of causal thinking with messy data.

He is the founder of the Approximately Correct (approximatelycorrect.com) blog and the creator of Dive Into Deep Learning, an interactive open-source book drafted entirely through Jupyter notebooks.

Zack’s blog - http://approximatelycorrect.com/

Detecting and Correcting for Label Shift with Black Box Predictors: https://arxiv.org/pdf/1802.03916.pdf

Algorithmic Fairness from a Non-Ideal Perspective: https://www.datascience.columbia.edu/data-good-zachary-lipton-lecture

Jonas Peters' lectures on causality:

https://youtu.be/zvrcyqcN9Wo

0:00 Sneak peek: Is this a problem worth solving?

0:38 Intro

1:23 Zack’s journey from being a musician to a professor at CMU

4:45 Applying machine learning to medical imaging

10:14 Exploring new frontiers: the most impressive deep learning applications for healthcare

12:45 Evaluating the models – Are they ready to be deployed in hospitals for use by doctors?

19:16 Capturing the signals in evolving representations of healthcare data

27:00 How does the data we capture affect the predictions we make?

30:40 Distinguishing between associations and correlations in data – Horror vs romance movies

34:20 The positive effects of augmenting datasets with counterfactually flipped data

39:25 Algorithmic fairness in the real world

41:03 What does it mean to say your model isn’t biased?

43:40 Real-world implications of decisions to counteract model bias

49:10 The pragmatic approach to counteracting bias in a non-ideal world

51:24 An underrated aspect of machine learning

55:11 Why defining the problem is the biggest challenge for machine learning in the real world

Visit our podcasts homepage for transcripts and more episodes!

www.wandb.com/podcast

Get our podcast on YouTube, SoundCloud, Apple Podcasts, and Spotify!

YouTube: https://www.youtube.com/c/WeightsBiases

Soundcloud: https://bit.ly/2YnGjIq

Apple Podcasts: https://bit.ly/2WdrUvI

Spotify: https://bit.ly/2SqtadF

We started Weights & Biases to build tools for machine learning practitioners because we care a lot about the impact machine learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they're working on. This process has been so fun that we wanted to open it up to the world in the form of our new podcast, Gradient Dissent. We hope you have as much fun listening to it as we had making it!

Join our bi-weekly virtual salon and listen to industry leaders and researchers in machine learning share their research:

http://tiny.cc/wb-salon

Join our community of ML practitioners, where we host AMAs, share interesting projects, and meet other people working in deep learning:

http://bit.ly/wandb-forum

Our gallery features curated machine learning reports by researchers exploring deep learning techniques, Kagglers showcasing winning models, and industry leaders sharing best practices.

https://app.wandb.ai/gallery
