
Today we're going to talk about how to explain neural networks. Neural networks are black boxes that hide the way they model and represent data, and that's why explaining them is so difficult. A very powerful approach is called SHAP. With this method, we can calculate the impact of each feature on a model's predictions regardless of the type of model we're using, which makes it very useful for black boxes like neural networks.
Home page: https://www.yourdatateacher.com
Link to the article: https://www.yourdatateacher.com/2021/05/17/how-to-explain-neural-networks-using-shap/
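The idea behind SHAP can be sketched with the underlying game-theoretic quantity it estimates: the Shapley value of each feature. The snippet below is a minimal, model-agnostic sketch that computes exact Shapley values for a toy model by enumerating feature coalitions, replacing "absent" features with a baseline value (the same trick the SHAP library approximates with a background dataset). The `model` function and the baseline are hypothetical choices for illustration, not the article's code; exact enumeration is exponential in the number of features, which is why SHAP uses approximations in practice.

```python
from itertools import combinations
from math import factorial

# Hypothetical "black-box" model: any function of a feature vector works,
# which is exactly why the Shapley approach is model-agnostic.
def model(x):
    return 3.0 * x[0] + 2.0 * x[1] * x[2]

def shapley_values(f, x, baseline):
    """Exact Shapley values for a single instance x.

    A feature "absent" from a coalition is replaced by its baseline
    value. Runs in O(2^n) over n features, so only viable for small n.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Shapley weight for a coalition of this size
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                # Input where only the features in `subset` are present
                x_without = list(baseline)
                for j in subset:
                    x_without[j] = x[j]
                # Same input, but with feature i also present
                x_with = list(x_without)
                x_with[i] = x[i]
                # Marginal contribution of feature i to this coalition
                phi[i] += weight * (f(x_with) - f(x_without))
    return phi

x = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
# Efficiency property: the contributions sum to f(x) - f(baseline)
assert abs(sum(phi) - (model(x) - model(baseline))) < 1e-9
```

The assertion at the end checks the efficiency property that makes Shapley values attractive for explanation: the per-feature impacts add up exactly to the difference between the model's prediction and the baseline prediction.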
By Your Data Teacher