Compromising a well-protected enterprise used to require careful planning, proper resources, and ability to execute. Not anymore! Enter AI.
From Initial Access to Impact and Exfiltration. AI is happy to oblige the attacker. In this talk we will demonstrate access-to-impact AI vulnerability chains in most flagship enterprise AI assistants: ChatGPT, Gemini, Copilot, Einstein, and their custom agent . Some require one bad click by the victim, others work with no user interaction – 0click attacks.
Compromising a well-protected enterprise used to require careful planning, proper resources, and ability to execute. Not anymore! Enter AI.
Initial access? AI is happy to let you operate on its users’ behalf. Persistence? Self-replicate through corp docs. Data harvesting? AI is the ultimate data hoarder. Exfil? Just render an image. Impact? So many tools at your disposal. There's more. You can do all this as an external attacker. No credentials required, no phishing, no social engineering, no human-in-the-loop. In-and-out with a single prompt.
Last year at BHUSA we demonstrated the first real-world exploitation of AI vulnerabilities impacting enterprises, living off Microsoft Copilot. A lot has changed in the AI space since... for the worse. AI assistants have morphed into agents. They read your search history, emails and chat messages. They wield tools that can manipulate the enterprise environment on behalf of users – or a malicious attacker once hijacked. We will demonstrate access-to-impact AI vulnerability chains in most flagship enterprise AI assistants: ChatGPT, Gemini, Copilot, Einstein, and their custom agent . Some require one bad click by the victim, others work with no user interaction – 0click attacks.
The industry has no real solution for fixing this. Prompt injection is not another bug we can fix. It is a security problem we can manage! We will offer a security framework to help you protect your organization–the GenAI Attack Matrix. We will compare mitigations set forth by AI vendors, and share which ones successfully prevent the worst 0click attacks. Finally, we’ll dissect our own attacks, breaking them down into basic TTPs, and showcase how they can be detected and mitigated.
Licensed to the public under https://creativecommons.org/licenses/by/4.0/
about this event: https://program.why2025.org/why2025/talk/SELH79/