AI security can feel chaotic, but it makes more sense when viewed through the lens of identity. In this episode, Ian Ahl explains why most "AI incidents" today come down to stolen credentials, abused OAuth tokens, and over-privileged accounts. He compares what's useful right now in NIST's AI RMF, Google's Secure AI Framework, and MITRE ATLAS, and points out what's still mostly theory.
Ian also shares a practical way to get started: Discover, Protect, Defend. We spend most of the time on discovery: how to see real AI use across users, builders, and agents by watching runtime activity instead of just scanning configs. Think Slack or Teams events, Okta or Entra logs, and MCP user agents.
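The runtime-discovery idea can be sketched in a few lines: rather than inventorying configs, scan your proxy or gateway logs for AI-client signatures such as MCP-style user agents. The log format and user-agent strings below are entirely hypothetical, for illustration only; real detection would be tuned to your own environment's telemetry.

```python
import re

# Hypothetical access-log lines; the user-agent values are illustrative,
# not taken from any real product's traffic.
LOG_LINES = [
    '10.0.0.5 "GET /v1/chat" ua="Mozilla/5.0 (Macintosh)"',
    '10.0.0.7 "POST /mcp" ua="example-desktop/1.2 mcp-client"',
    '10.0.0.9 "POST /sse" ua="example-mcp/0.4"',
]

# Loose pattern for MCP-related clients; widen or tighten as needed.
MCP_UA = re.compile(r'ua="([^"]*mcp[^"]*)"', re.IGNORECASE)

def find_mcp_activity(lines):
    """Return user-agent strings from lines that look like MCP traffic."""
    hits = []
    for line in lines:
        match = MCP_UA.search(line)
        if match:
            hits.append(match.group(1))
    return hits

print(find_mcp_activity(LOG_LINES))
# → ['example-desktop/1.2 mcp-client', 'example-mcp/0.4']
```

The point is the data source, not the regex: runtime signals (log lines, auth events) reveal AI use that never shows up in a configuration scan.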
You'll hear real cases, including the Salesloft/Drift token theft and LLMjacking on AWS Bedrock. If your "AI security" sounds like old CSPM with a new label, this episode will help you reframe the problem and focus on what actually breaks.