
AI governance is a rapidly evolving field that faces a wide array of risks, challenges and opportunities. For organizations looking to leverage AI systems such as large language models and generative AI, assessing risk prior to deployment is a must. One technique borrowed from the security space is red teaming. The practice is growing, and regulators are taking notice.
Brenda Leong, a partner at Luminos Law, helps global businesses manage their AI and data risks. I recently caught up with her to discuss what organizations should be thinking about when diving into red teaming to assess risk prior to deployment.
By Jedidiah Bracy, IAPP Editorial Director
