
Last year, the World Privacy Forum, a nonprofit research organization, conducted an international review of AI governance tools. The organization analyzed various documents, frameworks, and technical material related to AI governance from around the world. Importantly, the review found that a significant percentage of the AI governance tools include faulty AI fixes that could ultimately undermine the fairness and explainability of AI systems.
Justin Hendrix talked to Kate Kaye, one of the report’s authors, about a range of issues it covers, from the involvement of large tech companies in shaping AI governance tools, to the role of organizations like the OECD in developing them, to the need to consult people and communities that are often overlooked when decisions are made about how to think about AI.