
FIR #500: When Harassment Policies Meet Deepfakes



AI has shifted from being purely a productivity story to something far more uncomfortable. Not because the technology became malicious, but because it’s now being used in ways that expose old behaviors through entirely new mechanics. An article in HR Director Magazine argues that AI-enabled workplace abuse — particularly deepfakes — should be treated as workplace harm, not dismissed as gossip, humor, or something that happens outside of work. When anyone can generate realistic images or audio of a colleague in minutes and circulate them instantly, the targeted person is left trying to disprove something that never happened, even though it feels documented. That flips the burden of proof in ways most organizations aren’t prepared to handle.

What makes this a communication issue — not just an HR or IT issue — is that the harm doesn’t stop with the creator. It spreads through sharing, commentary, laughter, and silence. People watch closely how leaders respond, and what they don’t say can signal tolerance just as loudly as what they do. In this episode, Neville and Shel explore what communicators can do before something happens: helping organizations explicitly name AI-enabled abuse, preparing leaders for that critical first conversation, and reinforcing standards so that, when trust is tested, people already know where the organization stands.

Links from this episode:

  • The Emerging Threat of Workplace AI Abuse
  • The next monthly, long-form episode of FIR will drop on Monday, February 23.

    We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

    Special thanks to Jay Moonah for the opening and closing music.

    You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript:

    Shel Holtz: Hi everybody, and welcome to episode number 500 of For Immediate Release. I’m Shel Holtz.

    Neville Hobson: And I’m Neville Hobson.

    Shel Holtz: And this is episode 500. You would think that that would be some kind of milestone that we would celebrate. For those of you who are relatively new to FIR, this show has been around since 2005. We have not recorded only 500 episodes in that time. We started renumbering the shows when we rebranded it. We started as FIR, then we rebranded to the Hobson and Holtz Report because there were so many other FIR shows. Then, for various reasons, we decided to go back to FIR and we started at zero. But I haven’t checked — if I were to put the episodes we did before that rebranding together with the episodes since then, we’re probably at episode 2020, 2025, something like that.

    Neville Hobson: I would say that’s about right. We also have interviews in there and we used to do things like book reviews. What else did we do? Book reviews, speeches, speeches.

    Shel Holtz: Speeches — when you and I were out giving talks, we’d record them and make them available.

    Neville Hobson: Yeah, boy, those were the days. And we did lives, clip times, you know, so we had quite a little network going there. But 500 is good. So we’re not going to change the numbering, are we? It’s going to confuse people even more, I think.

    Shel Holtz: No, I think we’re going to stick with it the way it is. So what are we talking about on episode 500?

    Neville Hobson: Well, this episode has got a topic in line with our themes and it’s about AI. We can’t escape it, but this is definitely a thought-provoking topic. It’s about AI abuse in the workplace. So over the past year, AI has shifted from being a productivity story to something that’s sometimes much more uncomfortable. Not because the technology itself suddenly became malicious, but because it’s now being used in ways that expose old behaviors through entirely new mechanics.

    An article in HR Director Magazine here in the UK published earlier this month makes the case that AI-enabled abuse, particularly deepfakes, should be treated as workplace harm, not as gossip, humor, or something that happens outside work. And that distinction really matters. We’ll explore this theme right after this message.

    What’s different here isn’t intent. Harassment, coercion, and humiliation aren’t new. What is new is speed, scale, and credibility. Anyone can use AI to generate realistic images or audio in minutes, circulate them instantly, and leave the person targeted trying to disprove something that never happened but feels documented. The article argues that when this happens, organizations need to respond quickly, contain harm, investigate fairly, and set a clear standard that using technology to degrade or coerce colleagues is serious misconduct. Not just to protect the individual involved, but to preserve trust across the organization. Because once people see that this kind of harm can happen without consequences, psychological safety collapses.

    What also struck me reading this, Shel, is that while it’s written for HR leaders, a lot of what determines the outcome doesn’t actually sit in policy or process. It sits in communication. In moments like this, people are watching very closely. They’re listening for what leaders say and just as importantly, what they don’t. Silence, careful wording, or reluctance to name harm can easily be read as uncertainty or worse, tolerance. That puts communicators right in the middle of this issue.

    There are some things communicators can do before anything happens. First, help the organization be explicit about standards. Name AI-enabled abuse clearly so there’s no ambiguity. Second, prepare leaders for that first conversation because tone and language matter long before any investigation starts. And third, reinforce shared expectations early. So when something does go wrong, people already know where the organization stands. This isn’t crisis response, it’s proactive preventative communication. In other words, this isn’t really a story about AI tools, it’s a story about trust — and how organizations communicate when that trust is tested.

    Shel Holtz: I was fascinated by this. I saw the headline and I thought it was about something else altogether because I’ve seen this phrase, “workplace AI abuse,” before, but it was in the context of things like work slop and some other abuses of AI that generally are more focused on the degradation of the information and content that’s flowing around the organization. So when I saw what this was focused on, it really sent up red flags for me. I serve on the HR leadership team of the organization I work for. I’ll be sharing this article with that team this morning.

    But I think there’s a lot to talk about here. First of all, I just loved how this article ended. The last line of it says, “AI has changed the mechanics of misconduct, but it hasn’t changed what employees need from their employer.” And I think that’s exactly right. From a crisis communication standpoint, framing it that way matters because it means we don’t have to reinvent values. We don’t have to reinvent principles. We just need to update the protocols we use to respond when something happens.

    Neville Hobson: Yeah, I agree. And it’s a story that isn’t unique or even new — the role communicators can play in signaling the standards visibly: not just writing them down, but communicating them. And I think that’s the first thing that struck me from reading this. It is interesting that you’re quoting that ending. That struck me too.

    The expectation level must be met. The part about not all of it sitting in process and so forth with HR, but with communication — absolutely true. Yet this isn’t a communication issue per se. This is an organizational issue where communication or the communicator works hand in glove with HR to manage this issue in a way that serves the interest of the organization and the employees. So making those standards visible and explaining what the rules are for this kind of thing — you would think it’s pretty common sense to most people, but is it not true that like many things in organizational life, something like this probably isn’t set down well in many organizations?

    Shel Holtz: It’s probably not set down well even for the kinds of situations that predate AI. Where I work, we go through annual workplace harassment training because we are adamant that that’s not going to happen. It certainly doesn’t cover this stuff yet. I suspect it probably will. But yeah, you’re right. I think organizations generally out there — many of them don’t have explicit policies around harassment and what the response should be.

    I think the most insidious part of how deepfakes are affecting all of this is that they flip the burden of proof. A victim has to prove that something didn’t happen, and in the court of workplace opinion, that’s really hard to do. It creates a different kind of reputational harm.

    Neville Hobson: Yeah.

    Shel Holtz: From traditional harassment, the kind we learn about in our training — you know, with he said, she said type situations — there’s a certain amount of ambiguity and people are trying to weigh what people said and look at their reputations and their credibility and make judgments based on limited information available. With deepfakes, there’s evidence. I mean, it’s fabricated, but it’s evidence. And some people seeing that before they hear it’s a deepfake just might believe it and side with the creator of that thing.

    The article does make a really critical point though, and that’s that it’s rarely about one bad actor. The person who created this had a malicious intent, but people who share it, people who forward it along and comment on it and laugh about it — that spreads the harm and it makes the whole thing more complex and it creates complicity among the employees who are involved in this, even though they may think it’s innocent behavior that just mirrors what they do on public social media. And from a comms perspective, that means the crisis isn’t just about the perpetrator, right? It’s about organizational culture. If people are circulating this content, that tells you something about your workplace that needs to be addressed that’s bigger than that one individual case.

    Neville Hobson: Yeah, I agree. Absolutely. And that’s one of the dynamics the article highlights that I found most interesting — about how harm spreads socially through sharing, commentary, laughter, or quiet disengagement. Communicators need to help prevent normalization — this is not acceptable, not normal. They’re often closest to these informal channels and cultural signals. That gives communicators a unique opportunity, the article points out.

    For example, communicators can challenge the idea that no statement is the safest option when values are being tested. Help leaders understand that internal silence can legitimize behavior just as much as explicit approval and encourage timely, values-anchored communication that says, “this crosses a line,” even if the facts are still being established.

    It is really difficult though. Separately, I’ve read examples where there’s a deepfake of a female employee that is highly inappropriate in the way it presents her. And yet it is so realistic — incredibly realistic — that everyone believes it’s true. And the denials don’t make much difference. And that’s where I think there’s another avenue that communicators in particular need to be involved in. HR certainly would be involved because that’s the relationship issue. But communicators need to help make the statements that this is not real, that it’s still being investigated, that we believe it’s not real. In other words, support the employee unless you’ve got evidence not to, or there’s some reason — legal perhaps — that you can’t say anything more. But challenge people who imply it’s genuine and carry that narrative forward with others in the organization.

    So it’s difficult. It doesn’t mean you’ve got to broadcast a lot of details. It means going back to reinforcing those standards in the organization, repeating what they are before harmful behavior becomes part of, as the article mentions, organizational folklore. It’s a tricky, tricky road to walk down.

    Shel Holtz: And it gets even trickier. There’s another layer of complexity to add to this for HR in particular. And that is an employee sharing one of these deepfakes on a personal text thread or on a personal account on a public social network — sharing it on Instagram, sharing it on Facebook — which might lead someone in the organization to say, “Well, that’s not a workplace issue. That’s something they did on their own private network.” But the deepfake involves a colleague at work, and we have to acknowledge that that becomes a workplace issue.

    Neville Hobson: Yeah, it actually highlights, Shel, that education is lacking if that takes place, I believe. So you’ve got to already have in place policies that explicitly name “AI abuse.” It’s a workplace harm issue, not a technical or a personal one. And it’s neither acceptable nor permitted for this to happen in the workplace. And if it does, the perpetrators will be disciplined and face consequences.

    So that in itself isn’t enough, though. It requires more proactive education to address it — like, for instance, informal communication groups to discuss the issue, not necessarily a particular example, and get everyone involved in discussing why it’s not a good thing. It may well surface opinions — again, it depends on how trusted or open people feel — along the lines of, “I disagree with this. I don’t think it is a workplace issue.” You get a dialogue going. But the company, the employer, has the right people in the communicators to take this forward, I think.

    Shel Holtz: But here’s another communication issue that isn’t really addressed in the article, but where I think communication needs to be involved. The article outlines a framework for addressing this: stabilize, which is support and safety; contain, which is stop the spread; and investigate — and investigate broadly, not just the creator. I mean, who helped spread this thing around? Yeah, that’s pretty good crisis response advice.

    But what strikes me is the fact that containment is mentioned almost as a technical IT issue when it’s really a communication challenge. Because how do you preserve evidence without further circulating harmful content? This requires clear protocols that everybody needs to understand. So communicators should be involved in helping to develop those protocols, but also making sure that they spread through the organization and are aligned with the values and become part of the culture.

    Neville Hobson: Okay, so that kind of brings it round to that first thing I mentioned about what communicators can do before anything happens, and that’s to help the organization be explicit about standards. Name AI-enabled abuse clearly so there’s no ambiguity and set out exactly what the organizational position is on something like this. That will probably mean updating what would be the equivalent of the employee handbook where these kinds of policies and procedures sit, so that no one’s got any doubt of where to find out information about this. And then proactive communication about it. I mean, yes, communicators have lots to address in today’s climate. This is just one other thing. I would argue this is actually quite critical. They need to address this because unaddressed, it’s easy to see where this would gather momentum.

    Shel Holtz: Yeah. So based on the article, you’ve already shared some of your recommendations for communicators. I think that updating the harassment policies with explicit deepfake examples is important. This is the recommendation I’m going to be making where I work. I think managers need to be trained on that first-hour response protocol. Managers, I think, are pretty poorly trained on this type of thing. And generic e-learning isn’t going to take care of it. So I think there needs to be specific training, particularly out in the field or out on the factory floor, where this is, I think, a little more likely to happen among people who are at that level of the org. I don’t think you’re going to see much of this manager to manager or VP to VP. So I think it’s more front line where you’re likely to see this — where somebody gets upset at somebody else and does a deepfake.

    So those managers need to be trained. I think you need to have those evidence-handling procedures established and IT completely on board. So that’s a role for communicators. Reviewing and strengthening the reporting routes — who gets told when something like this happens and how does it get elevated? And then what are the protocols for determining what to do about it? And include this scenario in your crisis response planning. It should be part of that larger package of crises that might emerge that you have identified as possible and make sure that this is one of them.

    Yeah, this article really ought to be required reading for every HR professional, every organizational leader, every communication leader, because, as we’ve been saying, right now I think most organizations aren’t prepared. As the article says, the technology has outpaced our policies, our training, and our cultural norms. We’re in a gap period where harm is happening and institutions are scrambling to catch up. Time to stop scrambling, time to catch up and start doing this work.

    Neville Hobson: Yeah, I would agree. I think the final comment I’d make is kind of the core message that comes out of this whole thing that summarizes all of this. And this is from the employee point of view, it seems to me. So accept that AI has changed how misconduct happens, not what employees need. Fine, we accept that. Employees need confidence that if they are targeted, the organization will do the following: take it seriously, act quickly to contain harm, investigate fairly, and set a clear standard that using technology to degrade or coerce colleagues is serious misconduct. Those four things need to be in place, I believe.

    Shel Holtz: Yeah. And what the consequences are — you always have to remind people that there are consequences for these things. And that’ll be a 30 for this episode of For Immediate Release.

    The post FIR #500: When Harassment Policies Meet Deepfakes appeared first on FIR Podcast Network.
