
Sign up to save your podcasts
Or


As AI systems move from simple chatbots to complex agentic workflows, new security risks emerge. In this episode, Donato Capitella unpacks how increasingly complicated architectures are making agents fragile and vulnerable. These agents can be exploited through prompt injection, data exfiltration, and tool misuse. Donato shares stories from real-world penetration tests, the design patterns for building LLM agents and explains how his open-source toolkit Spikee (Simple Prompt Injection Kit for Evaluation and Exploitation) is helping red teams probe AI systems.
Featuring:
Links:
Sponsors:
Upcoming Events:
By Practical AI LLC4.4
181181 ratings
As AI systems move from simple chatbots to complex agentic workflows, new security risks emerge. In this episode, Donato Capitella unpacks how increasingly complicated architectures are making agents fragile and vulnerable. These agents can be exploited through prompt injection, data exfiltration, and tool misuse. Donato shares stories from real-world penetration tests, the design patterns for building LLM agents and explains how his open-source toolkit Spikee (Simple Prompt Injection Kit for Evaluation and Exploitation) is helping red teams probe AI systems.
Featuring:
Links:
Sponsors:
Upcoming Events:

171 Listeners

434 Listeners

301 Listeners

342 Listeners

156 Listeners

89 Listeners

106 Listeners

131 Listeners

150 Listeners

209 Listeners

557 Listeners

267 Listeners

105 Listeners

71 Listeners

33 Listeners