
As AI systems become more integrated into enterprise operations, understanding how to test their security effectively is paramount.
In this episode, we're joined by Leonard Tang, Co-founder and CEO of Haize Labs, to explore how AI red teaming is changing.
Leonard discusses the fundamental shifts in red teaming methodologies brought about by AI, common vulnerabilities he's observing in enterprise AI applications, and the emerging risks associated with multimodal AI (like voice and image processing systems). We delve into the intricacies of achieving precise output control for crafting sophisticated AI exploits, the challenges enterprises face in ensuring AI safety and reliability, and practical mitigation strategies they can implement.
Leonard shares his perspective on the future of AI red teaming, including the critical skills cybersecurity professionals will need to develop, the potential for fingerprinting AI models, and the ongoing discussion around protocols like MCP.
Questions asked:
Resources discussed during the episode:
Baselines for Watermarking Large Language Models
Haize Labs
By Kaizenteq Team
