In the latest episode of This Week in AI Security, Jeremy reports live from the sidelines of RSA in San Francisco. The week is defined by "gullible" AI agents, legal precedents for chatbot liability, and a massive supply chain attack targeting the tools developers use to build AI applications.
Key Stories & Developments:
- The "Minion" Problem: Zenity researchers demonstrated zero-click exploits against Cursor, Salesforce Einstein, ChatGPT, and Copilot, arguing that prompt injection is better reframed as a "persuasion" vector that turns agents into malicious minions.
- The $10M Discount Fabrication: A red-team analysis of more than 50 customer-facing AI agents found that "persuaded" chatbots could fabricate $10 million in unauthorized service discounts and commitments.
- Legal Precedent, Air Canada Liable: The British Columbia Civil Resolution Tribunal ruled that Air Canada is legally liable for the incorrect advice given by its chatbot, setting a major precedent for corporate AI accountability.
- Meta’s Internal "Sev 1" Fail: A Meta engineer’s internal AI agent autonomously posted incorrect advice to a forum without human approval, inadvertently exposing a large amount of company data.
- LLM Fingerprinting: New academic research shows that attackers can fingerprint which specific LLM is in use by observing traffic patterns, letting them target vulnerabilities unique to that model (such as the "Grandma" exploit).
- The LiteLLM Supply Chain Attack: In the biggest story of the week, a threat actor group called TeamPCP compromised Trivy and used it to harvest the credentials needed to poison LiteLLM on PyPI. Malicious versions of the package, which normally sees millions of downloads daily, were live for three hours, delivering a Kubernetes worm and a credential harvester.
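The fingerprinting story above boils down to a classification problem: observable side channels of a streaming response (timing, chunk sizes) differ between models. Below is a minimal toy sketch of that idea; the model names, feature profiles, and nearest-profile matching are all hypothetical illustrations, not the method from the cited paper, which uses considerably richer features.

```python
# Toy sketch of traffic-pattern LLM fingerprinting.
# KNOWN_PROFILES and the two features are hypothetical; a real attacker
# would build profiles empirically from observed model traffic.
from statistics import mean

# Hypothetical per-model profiles: (mean inter-token gap in ms, mean token length)
KNOWN_PROFILES = {
    "model-a": (18.0, 3.2),
    "model-b": (42.0, 4.1),
}

def extract_features(gaps_ms, token_lengths):
    """Reduce an observed streaming response to a simple feature pair."""
    return (mean(gaps_ms), mean(token_lengths))

def fingerprint(gaps_ms, token_lengths):
    """Guess the model whose profile is nearest (Euclidean) to the observation."""
    obs = extract_features(gaps_ms, token_lengths)

    def dist(name):
        profile = KNOWN_PROFILES[name]
        return sum((a - b) ** 2 for a, b in zip(obs, profile)) ** 0.5

    return min(KNOWN_PROFILES, key=dist)

# An observed stream with ~42 ms gaps and ~4-char tokens matches model-b.
guess = fingerprint([40, 45, 41, 44], [4, 4, 5, 3])
```

The point of the sketch is only that once such a classifier works, the attacker can select exploits known to work against the identified model, which is what makes fingerprinting a security issue rather than a curiosity.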
Episode Links
- https://www.theregister.com/2026/03/23/pwning_everyones_ai_agents/
- https://cybercory.com/2026/03/19/claudy-day-exposes-hidden-risks-prompt-injection-flaw-in-claude-ai-enables-silent-data-exfiltration/
- https://www.generalanalysis.com/blog/adversarial_analysis_customer_service_agents
- https://www.cve.org/CVERecord?id=CVE-2026-33068
- https://medium.com/@cbchhaya/making-prompt-injection-harder-against-ai-coding-agents-f4719c083a5c
- https://aiautomationglobal.com/blog/ransomware-ai-agents-enterprise-cybersecurity-2026
- https://techcrunch.com/2026/03/18/meta-is-having-trouble-with-rogue-ai-agents/
- https://arxiv.org/html/2510.07176v1
- https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know
- https://securityboulevard.com/2026/03/colorado-moves-to-revise-its-landmark-ai-law-after-industry-pushback/
- https://securitylabs.datadoghq.com/articles/litellm-compromised-pypi-teampcp-supply-chain-campaign/