AI governance is a rapidly evolving field that faces a wide array of risks, challenges, and opportunities. For organizations looking to leverage AI systems such as large language models and generative AI, assessing risk prior to deployment is a must. One technique borrowed from the security space is red teaming. The practice is growing, and regulators are taking notice.
Brenda Leong, a partner at Luminos Law, helps global businesses manage their AI and data risks. I recently caught up with her to discuss what organizations should be thinking about when diving into red teaming to assess risk prior to deployment.