
In this episode of Intel on AI, guest Alice Xiang, Head of Fairness, Transparency, and Accountability Research at the Partnership on AI, talks with host Abigail Hing Wen, Intel AI Tech Evangelist and New York Times best-selling author, about algorithmic fairness: the study of how algorithms may systematically perform better or worse for certain groups of people, and how historical biases and other systemic inequities can be perpetuated by algorithmic systems.
The two discuss the lofty goals of the Partnership on AI, why being able to explain how a model arrived at a specific decision is important for the future of AI adoption, and the proliferation of criminal justice risk assessment tools.
Follow Alice on Twitter: twitter.com/alicexiang
Follow Abigail on Twitter: twitter.com/abigailhingwen
Learn more about Intel's work in AI: intel.com/ai
By Intel Corporation