
Sign up to save your podcasts
Or


AI Agents can be powerful tools for an organization - but are they a security risk? Richard talks to Niall Merrigan about his experiences dealing with the various ways that LLMs can be attacked, starting with prompt injection. While some attacks are humorous, others can be very serious, especially in the context of agents, where the right prompt can cause an agent to use its capabilities to access or affect data outside its expected behavior. This has already led to several well-publicized CVEs, including the ServiceNow Privilege Escalation advisory. New tools have emerged to help restrict prompts and keep agents on task - but as with all things security, this is another set of tools you need to get familiar with!
Links
Recorded February 16, 2026
By Richard Campbell4.6
8282 ratings
AI Agents can be powerful tools for an organization - but are they a security risk? Richard talks to Niall Merrigan about his experiences dealing with the various ways that LLMs can be attacked, starting with prompt injection. While some attacks are humorous, others can be very serious, especially in the context of agents, where the right prompt can cause an agent to use its capabilities to access or affect data outside its expected behavior. This has already led to several well-publicized CVEs, including the ServiceNow Privilege Escalation advisory. New tools have emerged to help restrict prompts and keep agents on task - but as with all things security, this is another set of tools you need to get familiar with!
Links
Recorded February 16, 2026

273 Listeners

382 Listeners

39 Listeners

288 Listeners

3,059 Listeners

2,011 Listeners

2,013 Listeners

888 Listeners

1,072 Listeners

781 Listeners

1,105 Listeners

1,391 Listeners

317 Listeners

242 Listeners

63 Listeners

98 Listeners