


Along the Edge — Episode 3
How do you break an AI agent? Javi Rivera — AI security researcher at ZioSec with 8+ years of offensive security experience from MITRE to ThreatX — breaks down the real-world techniques attackers use against agentic AI systems.
In this episode, we cover:
• Jailbreaks vs. prompt injections — what's the actual difference and why it matters
• Why classic attacks still work — SQL injection, command injection, and XSS through AI agents as a "middleman"
• System prompt extraction — how attackers use leaked instructions to craft targeted exploits
• MCP server security — why public MCP catalogs are the new supply chain risk and why there's no good solution yet
• Validating real findings vs. hallucinations — the hardest problem in AI pentesting
• Live demo — Gray Swan arena walkthrough showing indirect prompt injection in action
• Defense strategies — least privilege, sandboxing, guardrails, and why defense in depth still applies
• The coming threat — nation-state AI agents, automated offensive tooling, and why the next wave of attacks will be unprecedented
Whether you're a red teamer, AI developer, or security leader deploying agentic AI — this is the technical deep dive you need.
Resources mentioned: Gray Swan AI Arena, HackerPrompt, NVIDIA NeMo Guardrails, Docker MCP Hub
By Andrius Useckas