
As AI systems become more integrated into enterprise operations, understanding how to test their security effectively is paramount.
In this episode, we're joined by Leonard Tang, Co-founder and CEO of Haize Labs, to explore how AI red teaming is changing.
Leonard discusses the fundamental shifts in red teaming methodologies brought about by AI, common vulnerabilities he's observing in enterprise AI applications, and the emerging risks associated with multimodal AI (like voice and image processing systems). We delve into the intricacies of achieving precise output control for crafting sophisticated AI exploits, the challenges enterprises face in ensuring AI safety and reliability, and practical mitigation strategies they can implement.
Leonard shares his perspective on the future of AI red teaming, including the critical skills cybersecurity professionals will need to develop, the potential for fingerprinting AI models, and the ongoing discussion around protocols like MCP.
Questions asked:
Resources discussed during the episode:
Baselines for Watermarking Large Language Models
Haize Labs
By Kaizenteq Team
