Cirrius Talk

[Ep 032] Agentic AI Security: Stop Prompt Injection Before It Stops You



Agentic AI security is now the difference between “we shipped” and “we leaked.” In this episode, we break down the real security risks AI agents introduce—and the practical ways to mitigate them.

What you’ll learn:

  • How prompt injection happens (and why LLMs struggle to separate trusted vs. untrusted instructions)
  • How data exfiltration can occur even without a malicious prompt (often from system/design errors)
  • How to reduce tool abuse risk by scoping tools, validating calls, logging, and using approvals for sensitive actions
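To make the tool-abuse mitigations above concrete, here is a minimal sketch of a gate that sits between an agent and its tools: it scopes calls to a registry, validates parameters, logs every decision, and requires human approval for sensitive actions. The tool names (`search_docs`, `send_email`) and the registry shape are illustrative assumptions, not any specific framework's API.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gate")

# Hypothetical tool registry: each tool is scoped to an allowlist of
# parameters and flagged if it needs human approval before execution.
TOOLS = {
    "search_docs": {"params": {"query"}, "needs_approval": False},
    "send_email":  {"params": {"to", "body"}, "needs_approval": True},
}

def gate_tool_call(name, args, approver=None):
    """Validate an agent's tool call before it is executed.

    Returns True if the call is allowed, False if blocked.
    `approver` is an optional callback (e.g. a human-in-the-loop
    prompt) consulted for tools marked needs_approval.
    """
    spec = TOOLS.get(name)
    if spec is None:
        log.warning("blocked: unknown tool %s", name)      # scoping
        return False
    extra = set(args) - spec["params"]
    if extra:
        log.warning("blocked: unexpected params %s", extra)  # validation
        return False
    if spec["needs_approval"] and not (approver and approver(name, args)):
        log.warning("blocked: approval denied for %s", name)  # approvals
        return False
    log.info("allowed: %s %s", name, args)                  # audit trail
    return True
```

The key design choice is deny-by-default: anything not explicitly registered and validated is blocked, and every decision is logged so abuse attempts leave an audit trail.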

Who it’s for: CTOs, CIOs, architects, engineers, and business leaders implementing or governing AI agents.

Guest + credibility: Greg is joined by Gavin Franklin and Tim Harting to translate agent security into practical patterns teams can apply immediately.

CTA: Share this episode with someone responsible for AI rollout security. (And follow the show for more Agentic AI implementation guidance.)


Recommended Links

Episode Artifact

Gavin Franklin on LinkedIn

Tim Harting on LinkedIn

Cirrius Solutions

Cirrius Blog 

Greg Banks on LinkedIn

Jason Fowler Music

For questions or feedback, please contact us at [email protected]


Cirrius Talk, by Cirrius Solutions