In this episode of Ahead of the Breach, host Casey Cammilleri sits down with Tori Westerhoff, a member of Microsoft’s AI Red Team, to explore what offensive security looks like in the age of large language models and AI-driven systems.
Tori breaks down how AI red teaming differs from traditional security testing, what it takes to identify real-world abuse cases in generative models, and why understanding adversarial thinking is critical as AI becomes embedded in modern products. The conversation dives into model misuse, prompt manipulation, system-level risks, and how red teams collaborate with engineers to build safer AI from the ground up.
Whether you’re a penetration tester, security engineer, or just trying to understand how AI systems are tested before they reach production, this episode offers a rare look inside one of the most cutting-edge offensive security roles in the industry.
By Sprocket Security