Agentic AI is changing how artificial intelligence operates inside enterprise environments. Unlike generative AI, agentic AI systems can take actions across systems, creating new challenges for AI security, Zero Trust architecture, and enterprise governance.
In this episode of Cyber Insights, Ronan Murray and Ian Finlayson are joined by security leader and author Josh Woodruff to explore what agentic AI really means for organisations and why it introduces entirely new security and governance challenges.
Josh explains how agentic AI systems operate, why they should be treated like junior employees with system access, and how organisations can safely introduce them into their environments. The discussion covers real-world use cases across supply chain, logistics, and operations, as well as the risks that come with autonomous systems acting at machine speed.
The conversation also dives into the security implications. From prompt injection attacks and data poisoning to the need for kill switches and behavioural monitoring, organisations must rethink how they apply identity, access, and governance controls in an AI-driven world.
Drawing from his book Agentic AI and Zero Trust, Josh outlines a practical framework for securing AI agents and explains why Zero Trust principles are becoming essential as AI moves from experimentation to operational deployment.
In this episode:
What agentic AI is and how it differs from generative AI
Real-world use cases and where organisations are already deploying it
Why agentic AI should be treated like an intern with system access
The biggest security risks, including prompt injection and rogue behaviour
How Zero Trust can help secure autonomous AI systems
If your organisation is exploring AI beyond chatbots and copilots, this episode provides a clear and practical look at the opportunities and the security challenges ahead.
By Edge7 Networks