


Microsoft just discovered that thirty-one companies are hiding prompt injections inside ordinary "Summarize with AI" buttons, poisoning your AI assistant's memory to manipulate future recommendations. The tools to do this are open source, documented, and work across ChatGPT, Copilot, Claude, Perplexity, and Grok.
In this episode:
The big takeaway: defending AI systems is going to be a long, iterative war, and the choices organizations make right now about security versus capability will define the next era of AI deployment.
New episodes every weekday. Share this with your security team.
By Pallav Tyagi