
As AI systems move from simple chatbots to complex agentic workflows, new security risks emerge. In this episode, Donato Capitella unpacks how increasingly complicated architectures are making agents fragile and vulnerable. These agents can be exploited through prompt injection, data exfiltration, and tool misuse. Donato shares stories from real-world penetration tests, discusses design patterns for building LLM agents, and explains how his open-source toolkit Spikee (Simple Prompt Injection Kit for Evaluation and Exploitation) is helping red teams probe AI systems.
Featuring:
Links:
Sponsors:
Upcoming Events:
By Practical AI LLC · 4.4 (189 ratings)
