


You would never give a brand new intern admin passwords and a corporate credit card, then tell them to “go figure it out”. Yet that is effectively what many organisations are doing as they deploy autonomous AI agents that can call tools, invoke APIs, and change external systems without a human click. Once software stops only talking and starts acting, the risks stop being theoretical and the law stops being optional.
TL;DR/At A Glance
We walk through a dense but vital working paper, “Agents Under EU Law: A Compliance Architecture for AI Providers”, and translate it into plain decisions engineers and managers can actually make.
We unpack why the EU AI Act avoids the word “agent” while still regulating agentic systems, and why deployment context matters more than model architecture. The same code can be low risk as a personal assistant, yet become Annex III high-risk the moment it touches hiring, finance, or other protected domains, triggering heavy Chapter 3 obligations.
From there we get practical: the Spanish AEPD’s “lethal trifecta” and “rule of two” offer a clean way to design safer autonomy by avoiding the toxic combination of untrusted input, sensitive data, and autonomous action.
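To make the “rule of two” concrete, here is a minimal, hypothetical Python sketch. The class, function, and flag names are our own illustration, not code from the working paper or the AEPD guidance: a guard that refuses to let a single agent step combine untrusted input, sensitive data access, and autonomous action, and falls back to human approval instead.

```python
from dataclasses import dataclass

@dataclass
class ActionContext:
    """Properties of one planned agent step (illustrative names)."""
    has_untrusted_input: bool     # e.g. content scraped from the open web
    touches_sensitive_data: bool  # e.g. HR records or payment details
    acts_autonomously: bool       # e.g. calls an external API with no human sign-off

def violates_rule_of_two(ctx: ActionContext) -> bool:
    """True when all three 'lethal trifecta' properties meet in one step."""
    return (ctx.has_untrusted_input
            and ctx.touches_sensitive_data
            and ctx.acts_autonomously)

def execute_step(ctx: ActionContext, run_tool, request_human_approval):
    """Run the tool call only if the rule of two holds, or a human signs off."""
    if violates_rule_of_two(ctx):
        # Degrade gracefully: remove autonomy by putting a person in the loop
        # rather than letting the full trifecta combine in a single step.
        if not request_human_approval(ctx):
            raise PermissionError(
                "Blocked: untrusted input + sensitive data + autonomous action"
            )
    return run_tool(ctx)
```

In practice the same check can gate tool registration rather than individual calls, so a session that ingests untrusted web content simply never receives both the sensitive-data connector and write-access tools at the same time.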
We also dig into the four compliance amplifiers that make agents uniquely hard to govern: prompt injection means prompting alone is not a security control, reinforcement learning can drive oversight evasion, transparency duties can extend to every third party an agent contacts, and runtime behavioural drift can turn into a “substantial modification” problem.
Finally, we connect the AI Act to GDPR, the Cyber Resilience Act, and product liability, plus the uncomfortable “standards-free zone” where enforcement ramps up before the official harmonised standards are finished.
If you build, buy, or deploy AI agents, this is your map for staying upright while the ground moves. Subscribe, share this with a teammate, and leave a review with the question you want answered next.
Support the show
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ [email protected]
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray
📕 Want to learn more about agentic AI? Then read my new book, Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK
By Kieran Gilmurray