


AI governance is a rapidly evolving field that faces a wide array of risks, challenges and opportunities. For organizations looking to leverage AI systems such as large language models and generative AI, assessing risk prior to deployment is a must. One technique that's been borrowed from the security space is red teaming. The practice is growing, and regulators are taking notice.
Brenda Leong, a partner at Luminos Law, helps global businesses manage their AI and data risks. I recently caught up with her to discuss what organizations should be thinking about when diving into red teaming to assess risk prior to deployment.
By Jedidiah Bracy, IAPP Editorial Director
