AI Security Ops

Indirect Prompt Injection | Episode 44


Listen Later

In this episode of BHIS Presents: AI Security Ops, the team breaks down indirect prompt injection — the #1 risk in the OWASP Top 10 for LLM Applications — and why it represents one of the most dangerous and misunderstood threats in modern AI systems.

Unlike traditional attacks, indirect prompt injection doesn’t require malware, credentials, or even user interaction. Instead, attackers hide malicious instructions inside everyday content like emails, documents, or web pages — and wait for AI systems to unknowingly execute them.

From real-world exploits like EchoLeak to in-the-wild attacks observed by Palo Alto Unit 42, this episode explores how attackers are already abusing AI-powered tools in production environments — and why current defenses are struggling to keep up.

We dig into:
• What indirect prompt injection is and how it differs from direct attacks
• Why OWASP ranks prompt injection as the #1 LLM security risk
• How attackers hide payloads inside emails, documents, and web content
• The EchoLeak zero-click exploit against Microsoft 365 Copilot
• Web-based prompt injection attacks observed in the wild (Unit 42)
• Exploits targeting AI coding tools like Cursor IDE and GitHub Copilot
• How RAG systems amplify the risk through poisoned knowledge bases
• Why LLM architecture makes this problem fundamentally hard to solve
• Research showing modern defenses still fail 50%+ of the time
• Practical mitigation strategies: least privilege, human-in-the-loop, and observability

This episode focuses on the real-world security implications of AI adoption, showing how attackers are already leveraging these techniques — and what defenders need to understand as AI becomes deeply embedded in business workflows.

📚 Key References

Prompt Injection & LLM Risk
• OWASP Top 10 for LLM Applications 2025 — https://owasp.org

Real-World Attacks
• EchoLeak (CVE-2025-32711) — Aim Security / arXiv
• Unit 42 — Web-Based Indirect Prompt Injection in the Wild (March 2026) — https://unit42.paloaltonetworks.com

AI System Vulnerabilities
• Cursor IDE (CVE-2025-59944)
• GitHub Copilot (CVE-2025-53773)
• Lakera — Zero-Click MCP Attack — https://lakera.ai

Research on Defenses
• Zhan et al. — Adaptive Attacks Break Defenses (NAACL 2025)
• Anthropic System Card (Feb 2026)
• Google Gemini Security Research (2025)

Standards & Guidance
• NIST AI Risk Management Framework — https://nist.gov
• MITRE ATLAS — https://atlas.mitre.org
• ISO/IEC 42001 AI Management Systems

#AISecurity #PromptInjection #CyberSecurity #LLMSecurity #AIThreats #BHIS #AIAgents #ArtificialIntelligence #infosec

  • (00:00) - Intro & BHIS / Antisyphon Overview
  • (01:19) - OWASP Top 10 & Prompt Injection Context
  • (01:41) - Indirect Prompt Injection Explained (Stored Attack Analogy)
  • (02:54) - Real-World Attack Scenarios (Calendar & Hidden Payloads)
  • (05:10) - EchoLeak & Zero-Click Copilot Exploit
  • (06:10) - Weaponized Excel Prompt Injection PoC
  • (06:50) - Email Injection & AI Summarization Abuse
  • (09:07) - Why Detection & Prevention Are So Difficult
  • (14:02) - Mitigations & Final Thoughts

  • Click here to watch this episode on YouTube.

    Creators & Guests
    • Derek Banks - Host
    • Brian Fehrman - Host

    • Brought to you by:

      Black Hills Information Security 

      https://www.blackhillsinfosec.com


      Antisyphon Training

      https://www.antisyphontraining.com/


      Active Countermeasures

      https://www.activecountermeasures.com


      Wild West Hackin Fest

      https://wildwesthackinfest.com

      🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
      https://poweredbybhis.com

      Click here to view the episode transcript.


      ...more
      View all episodesView all episodes
      Download on the App Store

      AI Security OpsBy Black Hills Information Security