AI Security Ops

AI News Stories | Episode 36


Listen Later

This week on AI Security Ops, the team breaks down how attackers are weaponizing AI and the tools around it: a critical n8n zero-day that can lead to unauthenticated remote code execution, prompt-injection “zombie agent” risks tied to ChatGPT memory, a zero-click-style indirect prompt injection scenario via email/URLs, and malicious Chrome extensions caught siphoning ChatGPT/DeepSeek chats at scale. They close with a reminder that the tactics are often “same old security problems,” just amplified by AI—so lock down orchestration, limit browser extensions, and keep sensitive data out of chat tools.

Key stories discussed

1) n8n (“n-eight-n”) zero-day → unauthenticated RCE risk

  • https://thehackernews.com/2026/01/critical-n8n-vulnerability-cvss-100.html
  • The hosts discuss a critical flaw in the n8n workflow automation platform where a workflow-parsing HTTP endpoint can be abused (via a crafted JSON payload) to achieve remote code execution as the n8n service account. Because automation/orchestration platforms often have broad internal access, one compromise can cascade quickly across an organization’s automation layer. 
    ai-news-stories-episode-36
  • Practical takeaway: don’t expose orchestration platforms directly to the internet; restrict who/what can talk to them; treat these “glue” systems as high-impact targets and assess them like any other production system. 
    ai-news-stories-episode-36

2) “Zombie agent” prompt injection via ChatGPT Memory

  • https://www.darkreading.com/endpoint-security/chatgpt-memory-feature-prompt-injection
  • The team talks about research describing an exploit that stores malicious instructions in long-term memory, then later triggers them with a benign prompt—leading to potential data leakage or unsafe tool actions if the model has integrations. The discussion frames this as “stored XSS vibes,” but harder to solve because the “feature” (following instructions/context) is also the root problem. 
    ai-news-stories-episode-36
  • User-side mitigation themes: consider disabling memory, keep chats cleaned up, and avoid putting sensitive data into chat tools—especially when agents/tools are involved. 
    ai-news-stories-episode-36

3) “Zero-click” agentic abuse via crafted email/URL (indirect prompt injection)

  • https://www.infosecurity-magazine.com/news/new-zeroclick-attack-chatgpt/
  • Another story describes a crafted URL delivered via email that could trigger an agentic workflow (e.g., email summarization / agent actions) to export chat logs without explicit user interaction. The hosts largely interpret this as indirect prompt injection—a pattern they expect to keep seeing as assistants gain more connectivity. 
    ai-news-stories-episode-36
  • Key point: even if the exact implementation varies, auto-processing untrusted content (like email) is a persistent risk when the model can take actions or access history. 
    ai-news-stories-episode-36

4) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (900k users)

  • https://thehackernews.com/2026/01/two-chrome-extensions-caught-stealing.html
  • Two Chrome extensions posing as AI productivity tools reportedly injected JavaScript into AI web UIs, scraped chat text from the DOM, and exfiltrated it—highlighting ongoing extension supply-chain risk and the reality that “approved store” doesn’t mean safe. 
    ai-news-stories-episode-36
  • Advice echoed: minimize extensions, separate browsers/profiles for sensitive activities, and treat “AI sidebar” tools with extra skepticism. 
    ai-news-stories-episode-36

5) APT28 credential phishing updated with AI-written lures

  • https://thehackernews.com/2026/01/russian-apt28-runs-credential-stealing.html
  • The closing story is a familiar APT pattern—phishing emails with malicious Office docs leading to PowerShell loaders and credential theft—except the lure text is AI-generated, making it more consistent/convincing (and harder for users to spot via grammar/tone). 
    ai-news-stories-episode-36
  • The conversation stresses that “don’t click links” guidance is oversimplified; verification and layered controls matter (e.g., disabling macros org-wide). 
    ai-news-stories-episode-36

Chapter Timestamps

  • (00:00) - Intro & Sponsors
  • (01:16) - 1) n8n zero-day → unauthenticated RCE
  • (09:00) - 2) “Zombie agent” prompt injection via ChatGPT Memory
  • (19:52) - 3) “Zero-click” style agent abuse via crafted email/URL (indirect prompt injection)
  • (23:41) - 4) Malicious Chrome extensions stealing ChatGPT/DeepSeek chats (~900k users)
  • (29:59) - 5) APT28 phishing refreshed with AI-written lures
  • (34:15) - Closing thoughts: “AI genie is out of the bottle” + safety reminders
  • Brought to you by:

    Black Hills Information Security 

    https://www.blackhillsinfosec.com


    Antisyphon Training

    https://www.antisyphontraining.com/


    Active Countermeasures

    https://www.activecountermeasures.com


    Wild West Hackin Fest

    https://wildwesthackinfest.com

    🔗 Register for FREE Infosec Webcasts, Anti-casts & Summits
    https://poweredbybhis.com 

    ...more
    View all episodesView all episodes
    Download on the App Store

    AI Security OpsBy Black Hills Information Security