
George's paper this week is Sanity Checks for Saliency Maps. This work takes stock of a group of techniques that generate local interpretability explanations and assesses their trustworthiness through two 'sanity checks'. From this analysis, Adebayo et al. demonstrate that a number of these tools are invariant to the model's weights and could therefore lead a human observer into confirmation bias. Kyle brings the question: how can AI help in a humanitarian crisis? Last but not least, Lan brings us the topic of Captum, an extensive interpretability library for PyTorch models.
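To make the episode's two threads concrete, here is a minimal sketch of the idea behind the paper's model-randomization sanity check, written with Captum's Saliency attribution. The tiny model and random input are hypothetical stand-ins, and the check is simplified (all weights are randomized at once rather than layer by layer as in the paper); it is an illustration, not the authors' code.

```python
# Sketch: does a saliency map change when the model's weights are randomized?
# If it barely changes, the explanation method is insensitive to the model.
import copy
import torch
import torch.nn as nn
from captum.attr import Saliency

# Hypothetical toy classifier standing in for a real trained image model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

x = torch.rand(1, 3, 32, 32)  # placeholder input
target = 0                    # class whose saliency we inspect

# Saliency map from the (nominally trained) model.
trained_map = Saliency(model).attribute(x, target=target)

# Randomize the weights; a trustworthy method should now produce a
# noticeably different map, while an invariant one will not.
random_model = copy.deepcopy(model)
for p in random_model.parameters():
    nn.init.normal_(p)
random_model.eval()

random_map = Saliency(random_model).attribute(x, target=target)

# Compare the two maps, e.g. with cosine similarity.
similarity = torch.nn.functional.cosine_similarity(
    trained_map.flatten(), random_map.flatten(), dim=0
)
print(f"Similarity between trained and randomized maps: {similarity.item():.3f}")
```

A high similarity here is the warning sign the paper describes: the explanation reflects the input more than the model, which is where confirmation bias can creep in.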
By Data Skeptic