


This and all episodes at: https://aiandyou.net/ .
Training an AI to render accurate decisions on important questions can be useless, even dangerous, if it cannot tell you why it made those decisions. Enter explainability, a term so new it isn't in spellcheckers, but one that is essential to the future of AI in high-stakes applications.
Michael Hind is a Distinguished Research Staff Member at IBM Research.
In part 2, we talk about the Teaching Explainable Decisions project, some of Michael’s experience with Watson, the difference between transparency and explainability, and a lot more.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
By aiandyou5