In this episode for April 23, 2026, Jeremy explores a week where "first principles" in security are being forgotten in the rush to adopt AI. From guessable API endpoints exposing Anthropic’s most powerful model to a $10,000 fine for a lawyer’s AI "slop," the message of the week is clear: There is no AI without API security.
Key Stories & Developments:
- The Mythos API Leak: Unauthorized actors gained access to Anthropic’s Claude Mythos model by simply guessing API naming conventions. This classic case of Broken Function Level Authorization highlights a major oversight in the rollout of sensitive models.
- Shadow AI Agents: A new survey from the Cloud Security Alliance reveals that 82% of enterprises have unknown AI agents operating without security oversight.
- The $10K Hallucination: An Oregon lawyer was fined $10,000 for "AI slop" in court filings, setting a firm legal precedent that AI error does not excuse professional negligence.
- MCP Design Flaws: The Model Context Protocol (MCP), designed to wrap APIs in human language, is proving vulnerable to coercion. Attackers are using human language requests to probe back-end systems through NGINX.
- "Logjack": New research into "Logjack" shows how malicious prompts hidden in system logs can compromise the LLMs used to analyze them.
- Meta Keystroke Capturing: Reports indicate Meta is capturing employee keystrokes to refine internal AI training sets, raising massive concerns about insider risk and password exfiltration.
Shadow AI agents are the new Shadow IT. Are you part of the 82% with zero visibility into your AI agents? Discover every agent and API connection in 15 minutes. Book your FireTail demo: https://www.firetail.ai/schedule-your-demo
Episode Links
https://www.inc.com/kevin-haynes/faulty-ai-leads-to-record-10000-fine-for-oregon-lawyer/91322007
https://www.nytimes.com/2026/04/17/us/oregon-winery-ai-legal-fight.html
https://techcrunch.com/2026/04/21/meta-will-record-employees-keystrokes-and-use-it-to-train-its-ai-models/
https://cloudsecurityalliance.org/press-releases/2026/04/21/new-cloud-security-alliance-survey-reveals-82-of-enterprises-have-unknown-ai-agents-in-their-environments
https://techcrunch.com/2026/04/20/app-host-vercel-confirms-security-incident-says-customer-data-was-stolen-via-breach-at-context-ai/
https://www.securityweek.com/by-design-flaw-in-mcp-could-enable-widespread-ai-supply-chain-attacks/
https://www.theregister.com/2026/04/16/anthropic_mcp_design_flaw/
https://www.darkreading.com/application-security/critical-mcp-integration-flaw-nginx-risk
https://www.helpnetsecurity.com/2026/04/16/llm-router-security-risk-agent-commands/
https://oddguan.com/blog/comment-and-control-prompt-injection-credential-theft-claude-code-gemini-cli-github-copilot/
https://arxiv.org/abs/2604.15368
https://venturebeat.com/security/microsoft-salesforce-copilot-agentforce-prompt-injection-cve-agent-remediation-playbook
https://techcrunch.com/2026/04/21/unauthorized-group-has-gained-access-to-anthropics-exclusive-cyber-tool-mythos-report-claims/
https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier
https://www.darkreading.com/vulnerabilities-threats/every-old-vulnerability-ai-vulnerability
https://www.theregister.com/2026/04/20/lovable_denies_data_leak/