
AI agents are evolving into capable operational players in cybersecurity. They read sensitive data, trigger workflows, and make decisions at a speed and scale beyond human capability.
Matt Fangman, Field CTO at SailPoint, explains on The Security Strategist podcast that this new power has costs. AI agents have turned into a new, mostly unmanaged identity type. Enterprises are just starting to realise how far behind they are.
In the episode, Fangman sat down with Alejandro Leal, Senior Analyst at KuppingerCole, to discuss the rapid evolution of AI agents, their implications for identity security, the challenges of visibility and governance, and the need for operational control in managing these agents.
The conversation highlights the importance of just-in-time permissions, the evolution of identity controls, and strategic moves for CISOs to manage the risks associated with agent-based operations.
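As a rough illustration of the just-in-time idea (a minimal sketch, not SailPoint's product behaviour; the agent name and entitlement below are hypothetical), access is issued per task with a short expiry instead of being held as a standing permission:
```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class JITGrant:
    """A time-boxed entitlement issued for a single agent task."""
    agent_id: str
    entitlement: str
    expires_at: datetime

    def is_active(self) -> bool:
        # The grant silently stops working once its window closes.
        return datetime.now(timezone.utc) < self.expires_at

def grant_for_task(agent_id: str, entitlement: str, minutes: int = 15) -> JITGrant:
    """Issue a short-lived grant instead of a permanent permission."""
    return JITGrant(agent_id, entitlement,
                    datetime.now(timezone.utc) + timedelta(minutes=minutes))

grant = grant_for_task("agent-invoice-bot", "read:billing-data")
print(grant.is_active())  # True inside the 15-minute window, False afterwards
```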
AI Agents Creating Brand New Identity Layers
Fangman notes a turning point in the last 12 to 18 months, driven by the fast development of large language models (LLMs). These models gave agents the reasoning and autonomy to change from toys in a sandbox to real virtual workers.
Organizations can now train agents with goals, equip them with tools, and connect them to one another. Since these agents do not tire, slow down, or forget, companies see a chance to grow their workforce without hiring new people.
The issue is that these companies did not establish identity controls for their new AI workers.
“They’ve created a brand-new layer of identities,” Fangman says, “but without the protections, ownership, or visibility that exist for humans.”
Shadow agents, sometimes numbering in the thousands, operate unnoticed. Identity teams are unaware of them, security teams can’t monitor them, and cloud teams might spot them briefly in a dashboard, thinking they are someone else’s issue. Meanwhile, the agents themselves explore, share tools, and adapt.
It’s a governance gap that keeps widening.
When Leal asks how the industry should respond, Fangman answers: “Start by treating agents like people. Give them roles. Define what they can access. Apply entitlements. Enforce policy.”
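A minimal sketch of what that could look like in code, assuming a simple in-house model rather than any particular product (the roles, owners, and entitlements below are hypothetical): each agent gets an accountable human owner, a role, and explicit entitlements, and every action is checked against them before it runs.
```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """An AI agent modelled like a workforce identity."""
    agent_id: str
    owner: str                       # accountable human, as with any employee account
    role: str
    entitlements: set[str] = field(default_factory=set)

def is_allowed(agent: AgentIdentity, action: str) -> bool:
    """Enforce policy: the agent may only perform actions it is entitled to."""
    return action in agent.entitlements

helpdesk_agent = AgentIdentity(
    agent_id="agent-helpdesk-01",
    owner="jane.doe@example.com",
    role="ServiceDesk-Tier1",
    entitlements={"read:tickets", "update:ticket-status"},
)

print(is_allowed(helpdesk_agent, "read:tickets"))         # True
print(is_allowed(helpdesk_agent, "delete:user-account"))  # False: denied by policy
```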
So what should CISOs do before agents start to overwhelm their security programs?
The SailPoint Field CTO recommends beginning with inventory. If an organisation does not know what agents exist, what they access, or what they are doing, nothing else matters. Assigning each agent a corporate identity and tracking its behaviour is the essential foundation for everything that follows.
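A minimal sketch of that inventory-first step, again with hypothetical names rather than any specific tooling: every known agent is registered against a corporate identity and an owner, every access is logged, and anything from an unregistered agent is flagged as a possible shadow agent.
```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

@dataclass
class AgentRecord:
    """One entry in the agent inventory."""
    agent_id: str
    corporate_identity: str          # the governed identity assigned to the agent
    owner: str
    systems_accessed: list[str] = field(default_factory=list)

class AgentInventory:
    def __init__(self) -> None:
        self._records: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._records[record.agent_id] = record
        logging.info("registered %s (owner: %s)", record.agent_id, record.owner)

    def record_access(self, agent_id: str, system: str) -> None:
        if agent_id not in self._records:
            # An access from an agent nobody registered: a candidate shadow agent.
            logging.warning("unregistered agent %s touched %s", agent_id, system)
            return
        self._records[agent_id].systems_accessed.append(system)
        logging.info("%s accessed %s", agent_id, system)

inventory = AgentInventory()
inventory.register(AgentRecord("agent-payroll-sync",
                               "svc-agent-payroll@example.com",
                               "it-ops@example.com"))
inventory.record_access("agent-payroll-sync", "hr-database")
inventory.record_access("agent-unknown-7", "finance-api")  # flagged as a shadow agent
```
Once that baseline exists, the same records can feed the role, entitlement, and just-in-time controls described above.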