Journal Club

Humanitarian AI, PyTorch Models, and Saliency Maps



George's paper this week is Sanity Checks for Saliency Maps. This work takes stock of a family of local interpretability techniques and assesses their trustworthiness through two 'sanity checks'. From this analysis, Adebayo et al. demonstrate that a number of these tools are insensitive to the model's weights and could lead a human observer into confirmation bias. Kyle asks the question: how can AI help in a humanitarian crisis? Last but not least, Lan brings us the topic of Captum, an extensive interpretability library for PyTorch models.
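The first of the paper's sanity checks is a model-parameter-randomization test: recompute the saliency map after re-initializing the network's weights and verify that the map actually changes. Below is a minimal sketch of that idea using Captum's Saliency attribution; the toy model, input shape, and cosine-similarity comparison are illustrative assumptions, not code from the episode or the paper (Adebayo et al. compare maps with rank correlation and SSIM).

```python
# Sketch of the model-parameter-randomization sanity check, built on Captum.
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F
from captum.attr import Saliency

# Hypothetical small classifier standing in for any trained model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)
model.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)


def saliency_map(m: nn.Module, inp: torch.Tensor, target: int = 0) -> torch.Tensor:
    """Gradient-based saliency: |d(score_target)/d(input)| (Captum takes abs by default)."""
    return Saliency(m).attribute(inp, target=target)


# 1. Saliency from the (notionally trained) model.
attr_trained = saliency_map(model, x)

# 2. Randomize the weights: a method that is faithful to the model should now
#    produce a very different map; an insensitive one will look unchanged.
random_model = copy.deepcopy(model)
for p in random_model.parameters():
    nn.init.normal_(p, std=0.02)
random_model.eval()
attr_random = saliency_map(random_model, x)

# Crude similarity check; a value near 1.0 would flag the method as suspect.
cos = F.cosine_similarity(attr_trained.flatten(), attr_random.flatten(), dim=0)
print(f"cosine similarity after weight randomization: {cos.item():.3f}")
```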

 


Journal Club, by Data Skeptic

5.0 out of 5 · 4 ratings