
This and all episodes at: https://aiandyou.net/ .
Training an AI to render accurate decisions for important questions can be useless and dangerous if it cannot tell you why it made those decisions. Enter explainability, a term so new that it isn't in spellcheckers but is critical to the successful future of AI in critical applications.
Before I talked with Michael Hind, my usual remark on the subject was, "If you want a demonstration of the ultimate futility of explainability, try asking your kid how the vase got broken." But after this episode I've learned more than I thought possible about how we can teach AI what an explanation is and how to produce one.
Michael is a Distinguished Research Staff Member at IBM.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
