
In May 2024, the U.S. National Institute of Standards and Technology launched a new program called ARIA, short for Assessing Risks and Impacts of AI. The program aims to advance sociotechnical testing and evaluation of artificial intelligence by developing methods to quantify how a given system performs in real-world contexts. Potential outputs include scalable guidelines, tools, methodologies and metrics. Reva Schwartz is a research scientist and principal investigator for AI bias at NIST, as well as the ARIA program lead. In recent years, she has also contributed to NIST's AI Risk Management Framework.
IAPP Editorial Director Jedidiah Bracy recently caught up with Reva to discuss the program, what it entails, how it will work and who will be involved.
By Jedidiah Bracy, IAPP Editorial Director