
Sign up to save your podcasts
Or


As AI systems move from simple chatbots to complex agentic workflows, new security risks emerge. In this episode, Donato Capitella unpacks how increasingly complicated architectures are making agents fragile and vulnerable. These agents can be exploited through prompt injection, data exfiltration, and tool misuse. Donato shares stories from real-world penetration tests, the design patterns for building LLM agents and explains how his open-source toolkit Spikee (Simple Prompt Injection Kit for Evaluation and Exploitation) is helping red teams probe AI systems.
Featuring:
Links:
Sponsors:
Upcoming Events:
By Practical AI LLC4.4
189189 ratings
As AI systems move from simple chatbots to complex agentic workflows, new security risks emerge. In this episode, Donato Capitella unpacks how increasingly complicated architectures are making agents fragile and vulnerable. These agents can be exploited through prompt injection, data exfiltration, and tool misuse. Donato shares stories from real-world penetration tests, the design patterns for building LLM agents and explains how his open-source toolkit Spikee (Simple Prompt Injection Kit for Evaluation and Exploitation) is helping red teams probe AI systems.
Featuring:
Links:
Sponsors:
Upcoming Events:

288 Listeners

1,106 Listeners

168 Listeners

442 Listeners

309 Listeners

345 Listeners

313 Listeners

100 Listeners

144 Listeners

103 Listeners

228 Listeners

681 Listeners

113 Listeners

53 Listeners

34 Listeners