EP230 AI Red Teaming: Surprises, Strategies, and Lessons from Google
Guest:
Daniel Fabian, Principal Digital Arsonist, Google
Topic:
Your RSA talk highlights lessons learned from two years of AI red teaming at Google. Could you share one or two of the most surprising or counterintuitive findings you encountered during this process?
What are some of the key differences or unique challenges you've observed when testing AI-powered applications compared to traditional software systems?
Can you provide an example of a specific TTP that has proven effective against AI systems and discuss the implications for security teams looking to detect it?
What practical advice would you give to organizations that are starting to incorporate AI red teaming into their security development lifecycle?
What are some initial steps or resources you would recommend they explore to deepen their understanding of this evolving field?
Resources:
Video (LinkedIn, YouTube)
Google's AI Red Team: the ethical hackers making AI safer
EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes?
EP150 Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw
EP198 GenAI Security: Unseen Attack Surfaces & AI Pentesting Lessons
Lessons from AI Red Teaming – And How to Apply Them Proactively [RSA 2025]