Linear Digressions

What makes a machine learning algorithm "superhuman"?

02.26.2018 - By Ben Jaffe and Katie Malone


A few weeks ago, we podcasted about a neural network that was being touted as "better than doctors" at diagnosing pneumonia from chest x-rays, and about how the underlying dataset used to train the algorithm raised some serious questions. We're back again this week because the author of the original blog post has pointed us toward further developments. All in all, there's now a lot more clarity around how the authors arrived at their original "better than doctors" claim, and the result picked up a number of adjustments and improvements as it was de/re-constructed.

Anyway, there are a few things that are cool about this. First, it's a worthwhile follow-up to a popular recent episode. Second, it goes *inside* an analysis to see what things like imbalanced classes, outliers, and (possible) signal leakage can do to real science. And last, it raises a really interesting question in an age when computers are often claimed to be better than humans: what do those claims really mean?
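As a toy illustration of the imbalanced-classes point (this is our sketch, not an analysis from the episode, and the 10% prevalence figure is made up): on a dataset where pneumonia cases are rare, even a degenerate model that never predicts pneumonia can post a high accuracy score, which is one reason headline "better than doctors" numbers deserve a closer look.

```python
# Hypothetical example: accuracy vs. recall on an imbalanced dataset.
# The 10%/90% split is an assumption for illustration, not from the episode.

labels = [1] * 10 + [0] * 90    # 10 pneumonia cases, 90 healthy
predictions = [0] * 100         # degenerate model: always predicts "no pneumonia"

# Accuracy: fraction of predictions that match the labels
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# Recall: fraction of actual pneumonia cases the model caught
recall = sum(p == 1 and y == 1 for p, y in zip(predictions, labels)) / sum(labels)

print(accuracy)  # 0.9 -- looks impressive on paper
print(recall)    # 0.0 -- but it catches zero real cases
```

The same gap between a headline metric and clinically meaningful performance is exactly the kind of thing that surfaces when a result like this gets de/re-constructed.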

Relevant link:

https://lukeoakdenrayner.wordpress.com/2018/01/24/chexnet-an-in-depth-review/
