


In this episode of Access Granted, Nauman sits down with Ken Huang—co-author of the OWASP Top 10 for LLMs, contributor to NIST AI work, and co-chair of CSA’s AI Safety Group—to break down what practical GenAI security looks like.
They cover:
Why only a small fraction of organizations feel comfortable with their GenAI security posture
The three big risk buckets: prompt injection, MCP/tooling exposure, and goal manipulation / agent drift
How “shadow AI” emerges when there’s no dedicated GenAI security program
A concrete framework stack: NIST AI RMF → Maestro threat modeling → OWASP AI VSS → CSA AICM + red teaming
The role of cloud provider frameworks (Google SAIF, AWS CAF-AI, Azure guidance) and how to combine them with community standards
Why traditional IAM (static SAML/OAuth scopes) doesn't work for AI agents, and what task-scoped, intent-based, ephemeral access should look like (see the sketch below)
How to think about identity lifecycle and governance for AI agents, and why “no 24/7 God mode” should be a non-negotiable anchor for CISOs
If you’re trying to move from GenAI science projects to production systems without sleepwalking into a breach—or letting an agent delete your production database—this conversation will help you define the guardrails, frameworks, and identity controls you actually need.
By Britive
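For listeners who want a concrete picture of the task-scoped, intent-based, ephemeral access the episode argues for, here is a minimal sketch. All names in it (TaskGrant, POLICY, grant_for_task, the example intents and scopes) are hypothetical illustrations, not Britive's product or any API discussed in the episode: an agent declares the intent behind a single task, a broker maps that intent to the narrowest scopes policy allows, and the resulting credential expires on its own, so the agent never holds standing "24/7 God mode" access.

```python
# Illustrative sketch only -- hypothetical names, not any vendor's real API.
import secrets
import time
from dataclasses import dataclass

@dataclass
class TaskGrant:
    token: str
    scopes: frozenset   # only what this one task needs
    expires_at: float   # ephemeral: short TTL, no standing access

    def allows(self, scope: str) -> bool:
        return scope in self.scopes and time.time() < self.expires_at

# Hypothetical policy table: which intents an agent may declare,
# and the narrowest scopes each intent maps to.
POLICY = {
    "summarize-ticket": frozenset({"tickets:read"}),
    "draft-reply":      frozenset({"tickets:read", "drafts:write"}),
}

def grant_for_task(agent_id: str, intent: str, ttl_seconds: int = 300) -> TaskGrant:
    """Issue a short-lived, task-scoped credential instead of a standing role."""
    scopes = POLICY.get(intent)
    if scopes is None:
        raise PermissionError(f"{agent_id}: intent {intent!r} not allowed by policy")
    return TaskGrant(token=secrets.token_urlsafe(16),
                     scopes=scopes,
                     expires_at=time.time() + ttl_seconds)

# The agent gets exactly what "draft-reply" needs, and nothing destructive.
grant = grant_for_task("support-agent-7", "draft-reply")
assert grant.allows("drafts:write")
assert not grant.allows("db:drop")   # no path to deleting the production database
```

The final assertion mirrors the episode's warning: even a fully authorized agent never holds a scope that could touch the production database, because destructive scopes are simply absent from every task grant.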