In May 2024, the U.S. National Institute of Standards and Technology launched a new program called ARIA, short for Assessing Risks and Impacts of AI. The program aims to advance sociotechnical testing and evaluation of artificial intelligence by developing methods to quantify how a given system works within real-world contexts. Potential outputs include scalable guidelines, tools, methodologies and metrics. Reva Schwartz is a research scientist and principal investigator for AI bias at NIST and the ARIA program lead. In recent years, she has also helped develop NIST's AI Risk Management Framework.
IAPP Editorial Director Jedidiah Bracy recently caught up with Reva to discuss the program, what it entails, how it will work and who will be involved.