
In this episode of Intel on AI, guest Alice Xiang, Head of Fairness, Transparency, and Accountability Research at the Partnership on AI, talks with host Abigail Hing Wen, Intel AI Tech Evangelist and New York Times best-selling author, about algorithmic fairness: the study of how algorithms might systematically perform better or worse for certain groups of people, and the ways in which historical biases or other systemic inequities might be perpetuated by algorithmic systems.
The two discuss the lofty goals of the Partnership on AI, why being able to explain how a model arrived at a specific decision is important for the future of AI adoption, and the proliferation of criminal justice risk assessment tools.
Follow Alice on Twitter: twitter.com/alicexiang
Follow Abigail on Twitter: twitter.com/abigailhingwen
Learn more about Intel’s work in AI: intel.com/ai