In May 2024, the U.S. National Institute of Standards and Technology launched a new program called ARIA, short for Assessing Risks and Impacts of AI. The program aims to advance sociotechnical testing and evaluation of artificial intelligence by developing methods to quantify how a given system works within real-world contexts. Potential outputs include scalable guidelines, tools, methodologies and metrics. Reva Schwartz is a research scientist and principal investigator for AI bias at NIST and the ARIA program lead. In recent years, she has also helped develop NIST's AI Risk Management Framework.
IAPP Editorial Director Jedidiah Bracy recently caught up with Reva to discuss the program, what it entails, how it will work and who will be involved.