ChannelBuzz.ca

Shadow AI is an identity problem, and your employees already created it



Jack Hirsch, vice president of product at Okta

The rise of AI in the workplace is creating a new kind of risk for organizations: shadow AI. Employees can now spin up AI agents that connect directly to emails, files, and business systems—often without IT oversight. These agents can access sensitive data, and without proper controls, they become prime targets for cyberattacks.

In this episode of the podcast, we’re joined by Jack Hirsch, vice president of product at Okta, to explore what shadow AI is, why it matters for Canadian organizations, and how IT partners can help their customers manage it.

Jack discusses Okta’s latest tools, which provide real-time visibility into AI agents and their permissions. These capabilities make it easier for security teams to discover unmanaged agents, understand their access, and quickly bring them under identity-based controls.

We also touch on regulatory implications, including Canada’s proposed Bill C-8, which heightens expectations around cyber risk accountability, access controls, and transparency. As legislation moves forward, organizations will need to prove they understand not just who has access to sensitive systems—but which AI agents do as well.

For MSPs and IT resellers, this emerging landscape represents both a challenge and an opportunity. Jack shares insights into how partners can position themselves as trusted advisors for clients navigating AI risk, turning a potentially complex problem into a service opportunity.

Tune in to hear why identity management is becoming central to securing the agentic enterprise—and what your customers will need to stay ahead of shadow AI risks.

Transcript

Hello and welcome to the ChannelBuzz.ca podcast, bringing news and information to the Canadian IT channel for the last 16 years. I’m Robert Dutt, editor of ChannelBuzz.ca, and as always, your host for the show.

Okta has announced a new set of capabilities designed to help organizations uncover and manage a fast-growing risk: shadow AI. As AI tools become easier to use, employees are increasingly creating their own AI agents, connecting them to emails, files, SaaS apps, and internal systems to get work done faster. The problem is that many of these agents are created without security oversight, governance, or clear ownership. Once they’re connected to sensitive systems, they can quietly gain broad access to data, making them attractive targets for attackers and a potential liability for organizations.

Okta’s new solution is designed to address that gap. It gives security teams real-time visibility into AI agents across the enterprise, showing which agents exist, what they can access, and what permissions they’ve been granted. Just as importantly, it allows organizations to quickly bring unmanaged or risky agents under identity controls, treating them more like digital employees than anonymous tools.

That visibility matters even more in Canada, where proposed legislation like Bill C-8 is raising expectations around cyber risk accountability, access controls, and transparency. As AI becomes embedded into everyday workflows, organizations will be expected to know not just who has access to what sensitive data, but what machines and agents do as well.

To unpack what shadow AI really means, why identity has become central to managing AI risk, and what all this creates in terms of opportunity for Canadian IT partners, I’m joined today by Jack Hirsch, Vice President of Product at Okta. Let’s dive in.

Robert Dutt: Jack, thanks for taking the time. I appreciate it.

Jack Hirsch: My pleasure. Thank you for having me.

Robert Dutt: It feels like this is a topic a lot of folks in the channel have been through in different flavors in the past. When you say “shadow X,” it certainly brings up memories of transitions past. But just to level set and set the parameters here (I almost said shadow IT), can you give me a quick definition of shadow AI, and why it’s becoming both a security and governance issue?

Jack Hirsch: Sure. Well, look, it’s no secret now that AI is changing the shape of how work gets done in the modern era. You have these non-deterministic entities running around, and fundamentally, they’re exciting, they’re interesting on their own, but where they really light up in value, where you start to see efficiency and effectiveness gains from your carbon-based workforces, is when you start connecting them to tools. They need resource access to be truly productive.

So AI agents need resource access, and that’s when it can start to get scary, and that’s when shadow AI starts to create a ton of risk for modern organizations. We know that the point of authentication is now much stronger with phishing-resistant auth. However, post-auth security is the primary breach vector for the vast majority of cybersecurity incidents now, meaning the session token’s been cut. There’s access out in the ecosystem, and that’s why shadow AI is terrifying.

Unfortunately, the options available to the ecosystem to secure AI and to build it quickly have been not good enough, to put it bluntly. This leaves security leaders with this very, very difficult challenge of moving fast and potentially breaking things and giving away the keys to the kingdom to OpenClaw, or whatever it is that you want to do, or potentially stifling innovation. That’s a really, really difficult spot for security leaders to be in. So yeah, shadow AI is everywhere. The challenges are greater. The stakes have never been higher.

Robert Dutt: Yeah, so that’s sort of the problem space. So when employees spin up AI agents and connect them to emails, to files, to internal data, to systems, whatever it may be, I presume most of the problems emerge from unintended consequences, as is so often the case in technology. But what are some of the common ways that sensitive data ends up exposed without anyone really necessarily realizing it, or is that the nature of the problem?

Jack Hirsch: Well, look, I think there’s sort of the naive answer, and not to say that it’s easy or trivial. I don’t want to trivialize this, but the naive answer is, “Oh, prompt injection, data leakage, data poisoning. Oh yeah, who knows what the LLM will spit out?” But the actual scarier risk is around inadvertent access and the standing credentials that need to be given to AI agents for them to be productive.

If Rob, you and I work at Acme Corp, and we’re working on a project together and we want to spin up an AI agent, whose permissions do we give it? Most of the time now, a security leader is not going to be able to jump in front of every single moving train and slow them. They’ll just say, “Oh yeah, give it a set of static credentials. Give it an API key, but don’t give it Rob’s access. Don’t give it Jack’s access. Give it super user access, and we’ll trust it to do the right thing.”

And so you’re giving this untrained, very influenceable, non-deterministic entity the keys to the kingdom. And that’s really the primary risk vector here. And so it’s all an identity and access management problem. Fundamentally, these are identities that need to be discovered. They need to be controlled. They need to be governed. And their access needs to be managed in the same way that their carbon-based peers, us as humans, need to be governed as well.
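Jack’s framing, that agents are identities with explicitly granted, governed access rather than holders of a shared super-user key, can be sketched in a few lines of Python. All names and scopes here are illustrative, not any particular product’s API:

```python
from dataclasses import dataclass, field


@dataclass
class AgentIdentity:
    """An AI agent modeled as a first-class identity with its own grants."""
    name: str
    owner: str                                 # the human accountable for this agent
    scopes: set = field(default_factory=set)   # explicitly granted permissions

    def can(self, scope: str) -> bool:
        return scope in self.scopes


# The anti-pattern described above: one static "god-mode" credential
# shared by every agent, with no per-agent accountability.
GOD_MODE = {"*"}

# The least-privilege alternative: each agent gets only what its task needs,
# and has an owner who can be asked to justify that access.
report_bot = AgentIdentity("report-bot", owner="rob", scopes={"crm:read"})
```

The point of the sketch is only that once the agent is an identity, the question “should report-bot be able to delete CRM records?” has a checkable, governable answer instead of “well, it holds the shared key.”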

Robert Dutt: So with that framing, it sounds like maybe identity is more important than traditional network or endpoint controls in terms of security in this world, where there are all these agents running around and doing whatever it is, hopefully, we want them to do and potentially what we don’t want them to do.

Jack Hirsch: I think this is where the traditional model of endpoint or network or identity-based detection and response falls flat. You can’t keep up with the incredible volume of AI agent activity out in the ecosystem to detect it all. Even approved platforms are now starting to put AI sprinkles throughout their products. So you’re fighting an uphill battle there.

And so the reason this is truly an identity-centric problem is because, again, all those agents need access to resources inside of organizations. And the way that AI grew, and we saw this with OpenAI and Anthropic and even Google with Gemini, the growth paths were primarily consumer driven. In a consumer world, it’s really easy. I’m literally sitting next to a machine that has a Claude bot spun up in a fully isolated environment, but I’m an individual user in that scenario. If I want to give it access, I can just OAuth myself. It’s super easy. And so the authorization mechanism wasn’t really thought about in an enterprise context.

And then when you get into an enterprise context, you have individuals that want to do exactly the same thing and access corporate resources. So it really is a new type of identity. We can talk about some of the differences between human and AI agent, but it’s fundamentally an identity and access management problem. These are digital identities, non-human identities that need access to resources within an organization.

And you actually see this being recognized by broader standards bodies. For example, Cross App Access is a new standard we’ve been working on, an extension of the OAuth protocol, for two or three years now at this point. We reintroduced it to the ecosystem this past summer, summer of 2025, first to ISVs; the people around the Okta ecosystem had heard about it before.

But then the rest of the ecosystem, the adoption was wild because MCP had become a thing and people were trying to deploy MCP servers and AI agents into their enterprises. And no one, not at the time Anthropic or OpenAI or any of the big model providers, had taken on the challenge of enterprise authorization for AI agents. And so this standard that had been sort of latent and sitting somewhere in an IETF draft for a while got picked up and started gaining a ton of steam.

And just in November, right before Anthropic split off MCP and gave it away to the open ecosystem, it got merged into the MCP repo as the new default enterprise authorization mechanism for MCP. And so this isn’t something that’s Okta owned, it’s just a standard that we developed because we are independent. And as such, we are the sort of standard-bearer for the open security ecosystem. We believe that we need to be the rising tide that lifts all ships. And that’s why we develop open standards like Cross App Access.

So now, really excited, we’ve taken our own engineers and pushed this authorization code out into the open ecosystem so that many applications start picking up this capability, this new OAuth extension.

Robert Dutt: So at a high level, when you talk about the products that you guys are bringing to market, the solutions to address this, at a high level, what kind of new visibility or new insights are you giving organizations that are using these tools that they simply didn’t have before when it comes to discovering AI agents, the privileges they have, and what they’re up to?

Jack Hirsch: Yeah. So, I mean, maybe if I can even blow it up further and say, let’s talk about maybe three steps: discovery, then control, and governance.

Jack Hirsch: So on the discovery side, there are many ways to discover, let’s date ourselves, shadow IT. You can have a browser extension, you can have some sort of endpoint monitoring, you can have network monitoring. You can also check the resources themselves for access. We’re taking a multi-pronged approach to discovery, but we’re doing what we do best, which is integrating into over 8,000 ISVs and checking for resource access. So who’s accessing these resources? Are they carbon-based? Are they digital-based?

And so the first phase of discovery with our ISPM (Identity Security Posture Management) product is being able to see who’s accessing these resources and why. That extended very, very nicely to AI agents. And it doesn’t really matter where the AI agents exist, right? It doesn’t matter if they’re part of a larger platform like Salesforce and Agentforce, or whether they’re homegrown, built by some skunkworks team off to the side. Ultimately, when they get access to the resource, we see it.

And then you get into the control plane. So that’s just the discovery. Within the control plane, we want to meet our customers where they are. And we know that the vast majority of these things are going to be granted access via static credentials, just the god-mode tokens. And for those, we can harden them. We can effectively bring them under management. We can bring those credentials under management. We can observe them. We can rotate them. We can observe for anomalous behavior, et cetera. And so that’s like what you would consider a traditional PAM use case or maybe a modern IGA use case.
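The first control Jack describes, bringing static credentials under management so they can be observed and rotated, reduces at its simplest to something like the sketch below. The vault, the names, and the rotation policy are assumptions for illustration, not Okta’s implementation:

```python
import secrets
import time


class ManagedCredential:
    """A static API key brought under management: tracked, rotatable, observable."""

    def __init__(self, agent: str, max_age_s: float = 86_400):
        self.agent = agent
        self.max_age_s = max_age_s   # rotation policy, e.g. rotate daily
        self.rotate()

    def rotate(self) -> None:
        """Mint a fresh secret and record when it was issued."""
        self.value = secrets.token_urlsafe(32)
        self.issued_at = time.monotonic()

    def is_stale(self) -> bool:
        return time.monotonic() - self.issued_at > self.max_age_s


# A minimal "vault": every agent credential is enumerated, none are unmanaged.
vault = {a: ManagedCredential(a) for a in ("report-bot", "triage-bot")}


def sweep(vault: dict) -> list:
    """Rotate anything past its policy window; return the agents affected."""
    rotated = []
    for cred in vault.values():
        if cred.is_stale():
            cred.rotate()
            rotated.append(cred.agent)
    return rotated
```

A real PAM or IGA product layers anomaly detection and approval workflows on top, but the core loop is the same: enumerate, apply policy, rotate, record.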

But then also with control, we give Cross App Access, which is a new mechanism that extends the amazing innovation that was OAuth and OAuth scopes, basically extending that to say, instead of checking with the end user for access to this resource, we can set policy. Now the IDP can set policy to control access to those resources.
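For readers curious what “extending OAuth” looks like in practice: Cross App Access is specified in IETF drafts, and flows of this kind are loosely modeled on OAuth 2.0 Token Exchange (RFC 8693), where an app presents an existing assertion and the IdP applies policy before minting a token for another resource. The sketch below only shows the general shape of such a request; every URL and parameter value here is illustrative, not the Cross App Access wire format:

```python
from urllib.parse import urlencode

# Shape of an OAuth 2.0 Token Exchange request (RFC 8693): the requesting app
# presents an existing assertion about the user or agent and asks the IdP for
# a token scoped to another resource. Policy is enforced at the IdP, not by
# prompting the end user for consent.
token_request = {
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": "<assertion-from-idp>",                       # placeholder
    "subject_token_type": "urn:ietf:params:oauth:token-type:id_token",
    "resource": "https://calendar.example.com",   # illustrative target app
    "scope": "calendar:read",                     # least-privilege scope
}

# This form-encoded body would be POSTed to the IdP's token endpoint.
body = urlencode(token_request)
```

The significant shift versus plain OAuth is in the comment above: the identity provider, not the individual user clicking “Allow,” decides whether this agent gets this scope against this resource.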

And then to close the loop, there’s governance. I don’t even want to say “standard governance flow,” because governance historically has this GRC compliance lens, but it’s very much a security-forward technology here. When you get to the state where you need to govern these identities and their access, we can run access certifications in the exact same way, whether the identity is human or non-human.

And so every one of those agentic identities gets pulled into Okta’s Universal Directory. All of their access is controlled. All of it is governed. We still gather the same risk signal and risk pattern behavior from the Identity Threat Protection product. I wish I could say that 10 years ago we knew we were building an identity security fabric, this new category of product that covers every identity use case, every resource type, and every user type. That was the strategy, though, even without knowing AI agents would be born in the 2020s. It makes it so that we’re really well positioned to capitalize on this opportunity, and it gives us a very novel approach to securing AI, precisely because we have this unified identity security fabric.

If you have a disparate set of IAM and IGA and PAM tools, a basket of tools that don’t talk to each other, in theory you could stitch it all together, but you end up with higher costs and worse security outcomes. So we actually took a much harder approach to market, many years ago. Again, this predates the rise of AI agents, but we decided we were not going to take an acquisitive strategy where we just bolt on a bunch of things, call them a “platform” in air quotes, and hand you an order form that looks like a drugstore receipt.

And so you’re not buying a list of products that happen to be on the same order form because we want to satisfy a CFO. We’re taking an approach that we want to drive end-to-end identity security outcomes for CISOs and IT leaders. So we’re doing the hard work deeply integrating these products across the fabric so that we can truly secure every identity, every use case, and every resource type.

Robert Dutt: Close to home here in Canada, we have a proposed Bill C-8 on the table. It’s raising expectations around visibility, around access control, accountability, risk, all of these things. I know there are similar ideas out there in terms of government around the world. How does legislation along these lines change the conversation for IT leaders, especially around the topic of shadow AI?

Jack Hirsch: So look, I am such a fan of this type of regulation because it pushes… When we enter highly regulated markets, regardless of where they are, and we can talk about C-8, I think it really does align with our identity security fabric narrative and what we’re angling for. But fundamentally, what we’re talking about is trust.

If I’m not mistaken, C-8 talks about resilience and reliability. Okta has industry leading availability and resilience. We proudly espouse our four nines of availability, but in reality, it’s much higher. And we target much higher. With the launch of our cell in Canada, and we can talk about the nature of that launch, but with the launch of our cell in Canada, we not only get multi-region disaster recovery, but we get Enhanced Disaster Recovery, which is a product that I really wanted to call Instant DR, because it’s a DNS flip, but the lawyers didn’t like that. So it’s Enhanced Disaster Recovery.

And so when you’re talking about resilience and reliability and running critical infrastructure, fundamentally, identity is critical infrastructure. We support governments, financial services, militaries, supply chain logistics with organizations like FedEx, healthcare.

And so maybe bringing it back to C-8, data residency, check, highly invested, especially with de-globalization pressures around the world. Supply chain governance, super, super important for us to maintain our independent posture here and to say, look, it doesn’t matter whether you’re buying from a monolithic platform or an independent provider of identity security. We are invested in making sure that your entire enterprise is secure.

And so just the same way FedRAMP was a standard-bearer and STIGs in the US were standard-bearers, or IRAP was pushing us in the right direction in Australia, or ISMAP in Japan, I think C-8 is a very, very welcome change. I think it highlights the need for robust identity security and it should put identity at the foundation of every security leader’s agenda this year.

Robert Dutt: Well, these pieces of legislation are still in the process and we can look forward. This is likely to see the light of day in some shape or another, but there’s still that sort of sense of maybe we should wait and see. I guess what I’m getting at is what’s the danger or the risk involved in waiting until regulations are finalized, on the books and in place, before starting to take action?

Jack Hirsch: So let’s just say at a personal level, I am not into promoting scare tactics. I know that it is very common in the security space for colors to be red. Our colors are blue. That’s not our vibe at Okta.

And so look, every organization has its own risk barometer. What I can say is that the vast majority of breaches stem from some form of attack on identity. And the implications of a data breach are severe; I think the average time to detection for a data breach is somewhere just shy of 300 days. So you’re talking about millions of dollars in damages and a huge reputational hit.

And there are scenarios, and I will not point to any recent security incidents that might have impacted large swaths of the industry, but not Okta. I’ll just say the reason is that we believe strongly that having a lower risk profile should be easier, should be more elegant. People come to Okta not because you can get it all done by the CLI. Yeah, you can, but it’s elegant. It’s intuitive. It’s easier to use. It de-complexifies the world of identity security.

I’m sitting in front of my notepad here to take notes, and one of our product principles is productizing best practices. And so we want to make it easier for organizations to reduce their risk profile and make the end user experience elegant and memorable when it needs to be, and disappear into the background when it shouldn’t be memorable.

And so with that, look, I would advise everyone go down the rabbit hole. Just look at recent breaches. Look at how widely pervasive these breaches are. Look how easy it is to go after a phish, to buy a phishing kit on the dark web, and see the types of organizations that get hit by these and it’s everyone.

And so whether you’re waiting for legislation to be imposed to drive the standards or you are just looking to have an appropriate barometer of risk for your organization, you shouldn’t have to choose between ease of use and cost and lower risk and greater security. And so I would just say everyone’s going to be on their own journey. I’m not a salesperson. I’m on the product team. But I fundamentally think that identity is one of the pillars of Zero Trust. I believe that it should be. It’s foundational. It is the foundation. If I had nothing else to do, if I were starting my own company today and I wanted to build a security practice for my company to manage our organizational risk, it would start with identity, 110%.

Robert Dutt: We’ve taken sort of a general market-wide view of the technology problem and now of the regulatory side of things. This is a podcast for IT solution providers. So sort of going with that “if I were starting a business today” line that you just started there, for MSPs and resellers, where do you see the biggest opportunity to help customers get ahead of shadow AI, both in terms of reducing customer risk and in terms of new services, new types of services that they can bring to market?

Jack Hirsch: I’ll take it in two parts. One is just that you can’t control what you don’t see. So for VARs and MSPs and operators in the technology ecosystem, I would say look at Okta’s ISPM product. It is amazing what you learn by wiring it up. And it’s not just for Okta as an IDP. It’ll wire into any IDP, it’ll wire into multiple IDPs, and it’ll wire into over 300 SCIM-based apps because it’s wired into the Okta Integration Network, where there’s a large set of SCIM apps that work natively with ISPM. Just see what you can find.
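As a concrete illustration of the resource-side discovery described here, this is roughly what checking a SCIM Users endpoint (RFC 7644) for non-human identities might look like. The sample payload and the use of `userType` to flag agents are assumptions for illustration only, not ISPM’s actual mechanism:

```python
import json

# A sample SCIM ListResponse, shaped the way a /Users endpoint (RFC 7644)
# might return it. In practice this would come from an HTTPS GET against
# each integrated app.
scim_response = json.loads("""
{
  "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
  "totalResults": 3,
  "Resources": [
    {"userName": "rob@acme.example",        "userType": "employee"},
    {"userName": "jack@acme.example",       "userType": "employee"},
    {"userName": "report-bot@acme.example", "userType": "agent"}
  ]
}
""")


def nonhuman_identities(resp: dict) -> list:
    """Flag identities that are not carbon-based: candidates for review."""
    return [u["userName"] for u in resp["Resources"]
            if u.get("userType") != "employee"]
```

Running the filter over every integrated app is the “see what you can find” step: accounts that exist in a resource but map to no managed human are exactly the shadow identities worth bringing under control.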

I optimized my life, my product world, for hugs and high fives. And I’ll never forget, and I’m sure this person knows exactly who they are, a security leader in Australia who ran out of their office after trying ISPM during a merger. They used it to reduce risk as they established a trust relationship between the two organizations, and it basically made this person look like a superstar in front of their C-suite and board, because it became the risk burndown chart for the entire M&A transaction, the technical risk barometer. So I would just say ISPM is an incredible starting point. A+, highly recommend. You can’t control what you can’t see.

And then I think on the second part, of course ISPM will discover AI as well. And then the second part is just, I wouldn’t lose sight of the experience. And so making sure that you’re creating an elegant experience by your choice of products, not only for the admins that you might work directly with or the leadership that might be engaging with you, but also for the end users. And knowing when tools should be elegant, easy to use, easy to configure, and when they should just sort of fade into the background. That’s ultimately what we work on at Okta. It’s our strong conviction from a product standpoint, that it needs to be an absolutely elegant, unmatched user experience for partners, for admins, for end users, and for customers.

Robert Dutt: I think we’ve gone over a lot of the territory that I wanted to go over, but just to kind of bring things home, looking ahead over the balance of 2026 or into the first half of next year, what do you think are going to be the biggest mistakes that organizations might make when it comes to agents and identity? And what can solution providers be doing now to make sure their customers don’t make those mistakes?

Jack Hirsch: This is an easy one. I think there’s sort of two categories of mistakes. One is getting worried because everything is moving so fast, getting that sort of analysis paralysis to say, “I’m going to see where it shakes out. How important is this AI thing?” Or even if you’re an AI bull, waiting to see who the winners and losers are before you establish any sort of program around it. That’s, I think, one big category of things not to do. I would say, go after it immediately. The capabilities you need are already out there. They might be newer. They might feel a little bit less familiar. But again, ultimately, these are identities that need access to your corporate resources. So I think that is one big category.

The other big category is, I would not look at point solutions for this. Anyone that says, “We’re going to secure your AI,” that’s great. But what is an AI? It’s an identity. It can be a resource in some scenarios, right, with agent-to-agent, agents acting as resources, but ultimately they’re just identities. Sorry, that caveat is for the identity nerds out there like myself.

But fundamentally, you need a unified platform that gives you that unified view of core access management, core governance, core privileged access, brings all of those identities, whether it be human or non-human, into a single directory and can discover them, can control them, can govern them. And it shouldn’t matter whether they were built by your users, by third parties, by partners, by your supply chain contractors. That unified identity security fabric will deliver comprehensive security and it should be deeply orchestrated into any technology stack. And those products already exist, and it just so happens that Okta is building a reference implementation.

Robert Dutt: Works out well for you then, doesn’t it?

Jack Hirsch: It does.

Robert Dutt: I appreciate your taking the time, Jack. It’s been an interesting conversation and it’s a fascinating and ever-evolving area.

Jack Hirsch: Thank you very much. All right. Thanks, Rob. And thanks everyone. Appreciate the time.

There you have it, a look at shadow AI through an identity lens with Jack Hirsch from Okta. I’d like to thank Jack for joining us for the show and thank you for listening today. The podcast will be back in your feed tomorrow as we take a look at the launch of Lexful, an AI-first documentation tool for MSPs that boasts, if you can believe it, a robotic channel chief. We’ll find out all about that tomorrow. You’ll want to be sure to catch that, so please subscribe to or follow the podcast in your podcast app of choice. And if it allows you to do so, please consider leaving a rating or review of the show. Until tomorrow, I’m Robert Dutt for ChannelBuzz.ca and I’ll see you in the channel.
