FIR Podcast Network

FIR #511: Doing AI Governance Right and Still Getting It Wrong


Listen Later

The policies are clear and well communicated. The guardrails are firmly established. Every last employee has been trained. And someone in your organization still releases a public document riddled with AI-generated errors. What went wrong has nothing to do with technology and everything to do with internal culture and accountability. In this long-form April episode, Neville and Shel examine a company that seemingly took all the right steps yet still had to apologize publicly for a court filing full of hallucinated citations. Also in this episode:

  • Gartner predicts that, by 2028, 75% of employees will rely on an internal chatbot to get the news that matters to them. How will internal communicators need to rethink their role to ensure everyone knows and understands what they should in order to achieve strategic alignment?
  • One of the promises AI executives have made is a leveling of the playing field, giving lower-level employees the opportunity to excel and rise through the ranks. According to one new study, exactly the opposite has been happening.
  • PR hacks have been accelerating the pace at which they churn out press releases and pitches. That has raised the bar for what it takes to earn a journalist’s trust (and journalists do still rely on press releases, according to a survey of reporters).
  • Apple’s announcement of its CEO transition offers communicators a clinic on how to announce a new top executive.
  • “Slopaganda” from Iran has proven remarkably effective, which means it is undoubtedly coming for your company or clients soon.
  • In his Tech Report, Dan York outlines big changes coming with WordPress’s next update.

    Links from this episode:

    • Elite law firm Sullivan & Cromwell admits to AI ‘hallucinations’
    • Sullivan & Cromwell law firm apologizes for AI ‘hallucinations’ in court filing
    • Letter re: In re Prince Global Holdings Limited, et al., No. 26-10769
    • Sullivan & Cromwell Just Put Every Firm on Notice. And S&C Advises OpenAI on Safe AI Use.
    • An AI Screw-Up By… Sullivan & Cromwell?
    • LinkedIn search results for Sullivan & Cromwell AI
    • AI, Trust, and the Reinvention of Corporate Communications: Inside Gartner’s 2026 Playbook
    • Does your intranet still matter in an AI-first workplace?
    • Chatbots in Internal Communications: Game-Changing Wins
    • How AI Chatbots Are Redefining Internal Communications?
    • The future of internal communication: How AI is changing the workplace
    • High earners race ahead on AI as workplace divide widens
    • Sarah O’Connor: One early view about AI was that it would share…
    • How AI is forcing journalists and PR to work smarter, not louder
    • What journalists want from AI-assisted PR pitches
    • Journalists Trust Human-Written Pitches Over AI
    • Journalists Reject AI-Generated Press Releases As Untrustworthy
    • What communicators can learn from Apple’s CEO transition announcement
    • Tim Cook to become Apple Executive Chairman; John Ternus to become Apple CEO
    • Iran’s Meme War Against Trump Ushers In a Future of ‘Slopaganda’
    • Iran’s ‘slopaganda’ team uses AI Legos to flood social media
    • Slopaganda wars: how and why the US and Iran are flooding the zone with viral AI-generated noise
    • Slopaganda Comes of Age
    • Alberta separatist leader unconcerned about influence of YouTube ‘slopaganda’ videos
    • Links from Dan York’s Tech Report

      • WordPress 7.0 Source of Truth – Gutenberg Times
      • WordPress 7.0: Real-Time Collaboration Arrives in Core
      • WordPress 7.0 Release Party Updated Schedule
The next monthly, long-form episode of FIR will drop on Monday, May 25.

        We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

        Special thanks to Jay Moonah for the opening and closing music.

        You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

        Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

        Raw Transcript

        Shel: Hi everybody and welcome to episode number 511 of For Immediate Release. This is our long-form episode for April 2026. I’m Shel Holtz in Concord, California.

Neville: And I’m Neville Hobson, in Somerset in England. We have six great stories to discuss and share with you this month, and to delight and entertain you, we hope. Topics range from the consequences of not following company guidance on AI use, chatbots, employee use, and the workplace divide, using AI to work smarter, what we learned from Apple’s CEO transition announcement, and the future of slopaganda. Lovely word, that one, Shel. Plus, Dan York’s tech report.

        But first, let’s begin with a recap of the episodes we’ve published over the past month and some listening comments. In the long form episode 506 for March, published on the 23rd of March, our lead story was on Anthropic’s view that AI will destroy the billable hour, a topic we’ve talked about before on FIR. We also explored digital monitoring of employee work, Gartner’s prediction that PR budgets will double next year, the escalating misinformation crisis, and Cloudflare’s prediction that

bot traffic will exceed human traffic by 2027. That’s next year, by the way. On LinkedIn, you’ll find no shortage of posts stridently deriding the notion that anyone should ever use AI to write them. In FIR 507 on the 30th of March, we roundly rejected that idea and looked at the actual trends in using AI for writing. And that prompted some comments from listeners, right?

Shel: Yes, it did. Starting with Susan Gosselin, who’s actually with a client of mine back in my consulting days. She writes: there are many types of writing that I think AI is great for, interpersonal communications, summaries, et cetera. But for marketing writing, that’s another thing. There are issues of copyright to consider and what you’re feeding into the channel.

This article from Jane Friedman, and she’s linked to it, and we’ll include that link in the show notes, is aimed at authors, but it does have implications for marketing writers too. For instance, I work for an American IT MSP, that’s a managed service provider. Let’s say that an MSP in Spain that does our line of work sees our website and our authoritative blogs and e-books and likes it. They decide to run our whole English website through an AI translator into Spanish,

        then make a few tweaks and publish. There’s not a lot to stop them. There’s also the issue of being able to defend your copyright overall. The law is not yet fixed and the risks are real. Then Steve Lubetkin writes, I find AI particularly helpful for rote tasks like organizing lists, transforming Excel spreadsheet columns, and summarizing interview transcripts. It’s also great for brainstorming ideas when it suggests perspectives I hadn’t thought of.

        but ultimately it comes down to using it as a tool for further human intervention, not less. Neville, you responded to that saying that’s a great way of putting it, Steve. Those rote tasks are exactly where AI seems to shine, the kind of work that takes time but doesn’t really benefit from deep human creativity. And I agree on brainstorming too. It could be surprisingly good at surfacing angles you might not have considered. I do this a lot.

Your last point really nails it though. It’s not about removing human input. It’s about focusing it where it matters most. Used that way, AI doesn’t diminish the work. It can actually elevate it. And finally, we have a comment from Yorma Mananan, who writes: AI can help people escape from writer’s block. So why not use it to get started?

However, writers must own all content created with or without AI. If the content doesn’t sound like you, you shouldn’t publish it. The challenge is to learn to speak machine English with AI. Define clearly why you are writing, what you want to say, and what you want your readers to do after reading your content. Without your strategy, AI can’t produce quality content that sounds like you. Strategy first, AI second.

And Neville, you responded to Yorma. You said: I like how you framed this. Using AI to get past the blank page is a very practical use case. That starting friction is real for a lot of people, and AI can lower the barrier quite effectively. Your point about ownership is key too. If it doesn’t sound like you, it isn’t really yours, regardless of how it was produced. Where I’d add a layer is around your machine English idea. I see it slightly differently. Rather than learning to speak machine,

        I think the real shift is learning how to think with the machine, using it to clarify intent, test structure, and challenge assumptions. But I agree with your conclusion. Strategy first, AI second. Without that, you’re just generating words, not communicating. And Yorma responded to you saying, agree. Machine thinking is a better way of describing the conversation relationship with AI.

        Neville: Good comment!

Great. It’s excellent to have that. It’s interesting, Shel, that it illustrates something to me. It’s not a trend at all, but I’ve noticed recently in other posts I see on LinkedIn that address this kind of topic, increasingly there are people leaving comments that are basically saying that you own it, not the AI, and AI assists you in communicating, not creating the final stuff, essentially, which is what some of these comments are alluding to.

Maybe people are waking up to that more than they have been in the past. It won’t silence the big critics. We’ve already seen that because, you know, it’s going to be criticized no matter what. But the more people who talk up the reality of what we all talk about, which is: this is an assistant. It’s a tool to help you communicate more effectively. It enhances your ability in that context. And then you’ve got Steve talking about, you know, doing stuff with Excel and all this kind of thing.

I’m in the middle of an experiment. I’m still at the 10% start of experimenting with Claude Pro, which I know you’ve been using for a long time, but I’m taking this one step at a time, and my focus is very non-techie. But one thing I have noticed is comparing, as I have been doing, let’s say a prompt in its simplest form, a chat prompt to Claude compared to one to ChatGPT.

The differences are truly startling in many cases. Claude typically is richer and deeper in its content with the same prompts. Now, of course, there are variables at play here. ChatGPT knows a huge amount about me. Claude does too, because there’s a nifty tool that imports everything from ChatGPT to tell it, and I’ve also added stuff. So it’s done that. It’s missing on some levels, though, and that’s probably because it doesn’t yet know enough about me to do this.

This is something you notice when you do this kind of thing with different tools. So, I mean, that’s not the main thing about Claude that wows me, I must admit. Cowork and some of the other tools I’ve been touching on, but Cowork I’ve spent quite a bit of time on. So I’m sure we’ll have lots more conversations about this as we talk topics. Let’s see what comes out of today’s menu of topics. So thanks for those comments, everyone.

Let’s see, this one: when workers lose their jobs, many turn to gig work to earn income while waiting for new opportunities. Increasingly, companies are hiring gig workers to create content and train AI systems. This raises various communication and ethical issues. And in FIR 508 on the 8th of April, we explained what’s happening and discussed the implications.

        Then when bad actors use AI tools to clone a musician’s voice and upload synthetic versions of their songs, they can then file copyright claims against the original artist’s content. In FIR 509 on the 14th of April, we break down how this scam works, why it matters to communicators, and what you should be doing right now before an incident forces your hand. And you have some comments here.

Shel: We do, two of them. One from Eric Redicop, who identifies himself on LinkedIn as an entertainer and artist, wrote that AI cannot use my work because it’s not posted online anywhere. I have to do it this way because YouTube allowed bogus copyright claims on my work and shut down my channel five times. And then Ray Baron-Wolford, who is a CEO at a charity organization, said: this is why it’s so important that every artist signs up

        to all copyright protection services.

Neville: Yeah, that’s a good point. I think the first commenter, though, talked about a genuine issue, a genuine issue. And I’d wonder, you know, if he’s saying that this can’t happen to me because none of my content’s online, I wouldn’t rely on that 100%. Actually, I wouldn’t. No, I wouldn’t. And that second comment.

        Shel: Mm-hmm.

        No. I mean, you have to be producing

        the kind of content where you can have some success as an artist or an entertainer without having your content online.

        Neville: Yeah.

Yeah, exactly. So the second commenter, about signing up for every copyright protection service that you can find, is probably, well, not probably, it is a good idea, although I’m not sure that everyone would want to do that. And therein lies one of the issues about copyright. It depends on the jurisdiction. It’s a geographically based protection. Creative Commons is a good thing to have established, as a way of reserving your rights, or some of your rights if you want to enable others to use your work. And that’s an international thing. So that’s peace of mind, I would say. There haven’t really been many, I’ve certainly not seen any, legal court case tests since Adam Curry back in 2009, when, I think it was in the Netherlands, he sued somebody who’d used a photo of his daughter

and won the case. And it was not quite a Pyrrhic victory, but he didn’t get any money out of it. He got the legal ruling that these people had infringed on his copyright. But I’ve not seen any since. So nevertheless, it’s worth doing. So yeah.

        Shel: Yeah, it goes back to Susan Gosselin’s

        comment, too, about any organization that does the same thing you do can take your content from your website, translate it into their language and publish it. And what do you do if your content is not copyrighted? There’s nothing that you can do.

        Neville: Yeah, that’s incredible.

That reminds me, Shel, back in the 2000s, website scraping was a huge deal when blogs suddenly came to the fore and you found that people were stealing all your content. I remember being in a lengthy email exchange with someone, I think based in Romania or somewhere, with not a hope in hell he was going to desist from doing that. Eventually it stopped, though. And that wasn’t from a copyright perspective. It was theft of content, which is well related, isn’t it?

So yeah, lots to learn from that. And finally, in FIR 510 on the 20th of April, we revisited the topic of shadow AI, the situation where employees ignore company-approved AI tools, use their own preferred tools, and tell no one. We discussed how one company approaches the problem and how communicators might advocate for a version of this approach to aid AI adoption and speed up productivity gains. And now you’re up to date on FIR episodes.

Shel: I also want to let you know about Circle of Fellows. We had a fascinating discussion just this past Thursday on Circle of Fellows. It’s part one of a two-part discussion, and it’s all based on this. If you’re watching the video, you can see it’s a new book by Diane Chase, former chair of IABC and a great communicator, called The Seven C’s of the New Communication Compass. And what Diane did here was

outline these seven points and find words that started with C to label them. She wrote one chapter and then basically had IABC fellows write the rest of the chapters. So Diane is the first non-fellow to appear on Circle of Fellows, but it’s her book, so it made sense. And in the first installment that we recorded on Thursday, Diane was joined by a panel of fellows.

Joining the panel were, of course, me (I moderated the session), Jane Mitchell, Ginger Holman, and Brad Whitworth. Next month, on the May episode of Circle of Fellows, Brad Whitworth will be the moderator, and I’ll be a panelist talking about the chapter I wrote about community. I’ll be joined by Zora Artis and Cindy Schmieg, IABC fellows who wrote the other chapters.

It’s a really good book. I recommend it for communicators. We talk about some of the issues around these chapters, and Diane explains why these were the topics she chose. In the new communication environment, these are, you know, your North Stars, as it were. So it’s definitely worth giving a listen to this month’s Circle of Fellows, which you will find on the FIR Podcast Network

        at firpodcastnetwork.com. And we are going to take a short break now for a sponsor message. We will be back to dive into our six topics right after this.

Neville: Here’s a story that on the surface looks just like another example of AI going wrong. In mid-April, one of the world’s most prestigious law firms, Sullivan & Cromwell, had to apologize to a US bankruptcy court after submitting a filing that contained multiple AI hallucinations: fabricated case citations, misquoted legal authorities, even references to cases that simply don’t exist. The errors weren’t minor.

        They were significant enough that the firm had to send a formal letter to the judge, acknowledge what had happened, and submit a corrected version of the filing. And just to make it more uncomfortable, these mistakes weren’t caught internally. They were identified by the opposing legal team. Now, if you stop there, it’s easy to frame this as just another cautionary tale about AI, unreliable tools, hallucinations, the risk of automation in high stakes work. But that’s not the story here.

        This firm didn’t lack guidance, quite the opposite in fact. They have formal policies governing the use of AI. They require lawyers to complete training before they can even access these tools. Their internal guidance explicitly warns about hallucinations and tells lawyers to verify everything before it goes anywhere near a client or a court. In fact, their own language is very clear, trust nothing and verify everything. And yet, in this case, those policies were not followed.

        A document that should have been scrutinized at multiple levels made its way into a courtroom with fundamental inaccuracies baked into it. The failure here wasn’t the technology. It was a failure of process, behavior, and accountability. Human in the Loop only works if there is an actual human who is clearly responsible for checking the work, not in theory, nor in a policy document, but in practice, at the point where a decision is made to send something out into the world.

        And what this case suggests is that in many organizations, that loop is more notional than real. If AI is being used to accelerate work, where are the safeguards that ensure quality isn’t being compromised in the process? And are those safeguards actually being followed or just assumed? Having a policy is one thing, embedding it into how people actually behave, especially under time pressure, is something else entirely. And I think that’s where this story really matters beyond the legal profession.

        Behavior is moving faster than governance. People are experimenting, they’re finding shortcuts, they’re integrating these tools into their daily workflows, often quietly and informally. The risk isn’t only that AI gets something wrong, it’s also that humans stop checking as rigorously as they should, or assume that someone else had, or trust output that feels authoritative, even when it hasn’t been properly verified. So when we talk about responsible AI or human-centered AI or governance frameworks,

        This is what it comes down to in practice, not whether you have a policy, but whether at the moment that matters, someone takes responsibility for asking a very simple question, is this actually correct? And if this case tells us anything, it’s that answering that question consistently is still much harder than many organizations seem to think.

        Shel: Boy, isn’t that true. And the first thing I thought when I read this story, because it seems like the organization did everything right, is the question about what are people rewarded for in this organization? I wrote a post about this on LinkedIn probably a couple of months ago, that process speaks louder than any message that you send.

through communication channels. And I included this story that I’ve probably relayed on this podcast 20 times, just because it is such a good analogy to make this clear. It’s about a logistics company that was experiencing a lot of breakage of its packages in one of its distribution centers.

The company just kept sending messages about how important it was to be careful and take the time when you’re loading these packages, so that you’re not just throwing them around and breaking things that customers are expecting to receive unbroken. The breakage continued, and they brought in a consultant, a communications consultant I might add, who looked at all of this and found that the reason this was happening was that people were actually being rewarded for productivity

and not for quality. That meant that they were getting money for doing this quickly. As long as you were going to pay them more to get this stuff out quickly, the breakage was going to continue. You had to shift the rewards mechanism so it rewarded quality. Then they were going to slow down and make sure everything was unbroken. Of course, you’re going to lose some of the speed there.

        So when I hear the story about this law firm, the first thing I wonder is, yeah, we have all of these policies and we’ve been through all of this training and the governance is in place, but I’m being rewarded for getting this done quickly. And therefore I’m not going to take the time to review the citations that the AI cranked out. I don’t know that this is the case in this organization, but it was the first question I asked.

Neville: Well, that’s an interesting point, Shel, because that was in my mind too, but it didn’t make it into my notes: they are, I think, the second biggest law firm globally. They’ve been around 150 years, long and well established, highly credible, super reputation, all that. But they have some of their lawyers charging out at around two and a half thousand dollars an hour for their services. That’s serious money.

And that kind of adds to your point that speed is the focus here, not quality. Now, to repeat what you said, we do not know if that’s the case in this law firm. But it could be. And your other point, that an individual might be saying, I don’t have time to check all this stuff because I’m being rewarded for getting stuff out fast: they have to address that if that’s true. They really have to address that, because that perpetuates this, if it turns out that it’s true.

But it brings to my mind, I suppose, the reality of all these policies. And there’s a lot of reporting on this around if you look for it, talking about some of the training courses, the fact that no lawyer can even use one of their AI tools unless they are certified to have done this, this, and this training program, or watched that video, all that. And yet this happened. So there’s something out of the loop here. There’s something not

working properly. Could it be as simple as the person who signs off on this, i.e., this piece of work is going to that client, or this filing is being submitted in this bankruptcy case in New York? This was a major bankruptcy case, not an individual one. I believe it was a financial company in the Virgin Islands, the British Virgin Islands for that matter, in the Caribbean. It was high profile. But could it really be as simple as that? All this

work going on at speed, and probably somebody thought, that’s fine, because we’re going to check it all. And yet nobody did. So it signals something we’ve encountered before. I’m reminded of a case we reported last year about Deloitte. The issues they had were something similar, but it was not a legal case in a court. It was a report they prepared for a client, which happened to be the government of Australia, and another one for the government of Canada, with six-figure fees,

riddled with hallucinations and other things. So somebody didn’t check it in that case. I have no knowledge of what training they had in place. In this case, we do have knowledge of what training they have in place. Could it really be as simple as somebody, an individual, not being the known responsible partner in the law firm, the authoritative voice on whether this is okay to send to that client or to that court or whatever?

Even if 15 other people have been involved in checking stuff before, that one person has that responsibility. They obviously don’t have that, I suspect. Maybe that’s the solution to this kind of thing. I know you have some strong views on, you know, having a verifier in place in organizations. Do you want to talk a bit more about that?

        Shel: Well, yeah,

I mean, I’ve said this before. I think that one of the jobs AI is actually going to lead to the creation of is a verification specialist, somebody who is accountable and knows they’re accountable. They are baked into the process. It gets passed to them and they verify the entire document. I don’t care if it’s an 80-page filing. You know, there was another law firm

that found itself in this trouble recently. It was in Oregon, and the court of appeals there sanctioned the lawyer involved for the AI errors that were in the law firm’s filing. And the court, in its finding,

emphasized that AI isn’t a lawyer and it can’t replace professional judgment or accountability. And that principle travels pretty well. AI is not a communicator. It’s not a strategist. It’s not a lawyer. It’s not an HR expert. It’s not a subject matter expert. It’s a tool. The professional has to be accountable. So for communicators, that means we can’t outsource accuracy. We can’t outsource context. We can’t outsource tone or ethical judgment to a machine.

        We can use AI as aggressively as we can find that it helps us do our job, but we still have to verify ruthlessly and we have to make sure that other people in the organization know that that’s part of their remit too.

Neville: Yeah, it’s a very tricky one, I think, Shel, given what we know currently about the developments happening with artificial intelligence, particularly in generative AI, particularly with tools like Claude and ChatGPT and Gemini. And Steve Lubetkin, in his comment to one of the episodes from last month, alluded to that when he talked about how great it is at, you know, deciphering columns in Excel spreadsheets.

Here you’ve got a tool that can actually generate the spreadsheet and perform literally everything, analytics, pivot tables, work that it does in 20 seconds. And so you suddenly find that here you have a tool that is able to generate content that the traditional way of prompting would have taken considerable to and fro to produce.

And, you know, changes here, editing there, telling the AI, no, not this, it coming back, all that sort of stuff, and then someone checking it. And here you’ve got the situation where this is accelerating. It can do these things, arguably, and again, depending what it is, it can do things that until now people would say AI can’t do. And I’m thinking about that when you say AI is not a communicator. No, it itself is not, at the moment.

So I mean, this will take us down a rabbit hole if we get into this, which we’re not going to do. But it’s a point worth noting that sooner or later we’re going to have an AI tool of some type doing something that before only a human could do. And then where are we? So again, that’s all a bit in the future, and maybe sooner than we think. I don’t worry about that, in a sense, because there’s no point, Shel. It’s not happened yet. But I do worry about things like this, because

this is an easy one to get right, it seems to me. You’ve got all these policies, et cetera, and you’ve got to, not so much enforce them, that’s not really the right word to use, but ensure that people follow those policies. Therefore, it’s a communication issue. It’s an educational issue. It’s not a training issue, but it’s education, awareness raising, and getting people to buy into why they should do this, in which case you’re likely going to have to change your model of rewarding people.

That’s a big deal. So this isn’t something that you can do idly, except on the surface, i.e., you do all this stuff, you’ve got one person who has the responsibility, and the consequences will fall on that person if it turns out no one followed the process. So that’s probably what would help here.

Shel: Yeah, and I think it’s also worth noting that it’s going to get easier to assume that AI got it right. I mentioned that AI currently isn’t a subject matter expert, but it’s becoming one. OpenAI is creating one that’s just for doctors, and Anthropic just signed a deal with a law firm to create a legal-specific version of Claude. So, you know, I think when you

        look at what happened here with this law firm. We should look at this as sort of a dress rehearsal for AI related crisis response. The law firm did the right thing, right? They acknowledged the problem, they apologized to the court, they filed a corrected version, but at that point, the reputational damage had already been done because that narrative…

        had found its way into Reuters, The Guardian, Business Insider, Above the Law, LinkedIn, and all the legal newsletters. And that’s how AI failures will unfold for other organizations, whether it’s out of the legal department or elsewhere. You’re going to have the operational error, then the public narrative, and then people are going to pile on. Communicators should already have holding statements, internal FAQs, and escalation protocols for AI-generated errors, especially

in high-stakes content like a legal filing.

        Neville: Yeah, plenty to think about on this. although the kind of advice I would give is, yes, you’ve got all your policies and so forth, as we’ve been discussing at the beginning, but have you got the human genuinely in the loop to take responsibility for what you’re giving to a client or to a court?

Shel: Well, let’s stick with the AI theme. Hey, that should be no surprise. Gartner is predicting that by 2028, 75% of employees will rely on chatbots to get relevant internal communications. That’s not the distant future, folks. It’s the year after next, and that should stop every internal communicator in their tracks. Not because chatbots are coming for the intranet or the newsletter or the manager cascade. That’s just

        too simplistic. The bigger shift is that employees are moving from browsing to asking. They’re not going to hunt through the intranet and a stack of emails to get an answer to a simple question. They’re just going to go to the chatbot and ask: what has this changed for me? Do I need to do anything by Friday? Why is my department being reorganized? And they’ll expect an answer in seconds, and probably get one. The Gartner prediction is based on a very real problem:

        information overload. According to Gartner’s report, employees who report high information overload are 52% less likely to report high intent to stay with their organization, so it’s a retention issue, and they’re 30% less likely to report high strategic alignment with the organization. Gartner also says chatbots will provide personalized, curated answers for pull communication and customized alerts for push communication.

        That’s a major shift in the employee communication model. Now, there are real benefits here. A well-designed internal chatbot can give employees faster answers, reduce HR and IT ticket volume, provide 24/7 support, support multiple languages, and cite authoritative sources so employees know where an answer came from. It can also deliver information within the flow of work rather than forcing people to go somewhere else to find it.

        But here’s the part communicators are going to need to wrestle with. An AI answer is not the same thing as communication.

        An answer can tell an employee what changed. It may even summarize why it changed. But will it preserve the intent, the nuance, the context, and the emotional intelligence of the original communication? There’s no guarantee it will. Take change communication, for instance. We frequently write detailed articles explaining the rationale for a change because employees need more than the transaction. They need to understand the business context. They need to know what

        options leaders considered, which options they discarded, and why. They need to hear what’s not changing. They need some sense that the decision was made thoughtfully and not arbitrarily. But what happens when no one reads the article? What happens when the employee asks the chatbot what’s changing in our benefits plan and gets a clean, accurate, three-sentence answer that strips out the rationale completely?

        This is where internal communicators have to evolve from being message producers to knowledge architects. The intranet still matters. It may just be less of a destination and more of the trusted knowledge layer that feeds AI. Frank Wolf made this point really well in PR Daily: AI doesn’t eliminate the intranet’s jobs. It changes how pull, push, and people-centered communication work.

        The intranet becomes the foundation that makes chatbot answers reliable. If the knowledge layer is messy, if it’s outdated or written in a way that AI can’t interpret, or can’t interpret well, the chatbot is going to sound confident and still be wrong. This means we have to consider an expansion of the internal communicator’s job. Yeah, we still need to write, but now we also need to structure. We need clear source-of-truth pages and metadata.

        We need FAQs that anticipate employee questions, and we need version control, expiration dates, and more. We need to decide which information can be answered directly by a bot and which questions should trigger a human response. And we need to design for narrative preservation. That means writing source content with AI retrieval in mind. If the rationale for a change matters, don’t bury it in paragraph eight. Make it explicit. Label it.

        Repeat it in a concise “why this matters” section. Smart Brevity writing would be a great approach to adopt here. Create approved answer blocks that the chatbot can draw from, and test the bot by asking questions employees are likely to ask, then check whether the answers reflect not just the facts but the intended meaning. This also has implications for measurement, by the way. Page views and open rates become less useful

        if employees are getting answers without opening an article. We’ll need to measure the questions employees ask, the quality of the answers they receive, the content gaps the bot reveals, and whether employees understand the strategy, the change, or the policy after interacting with the system. It’s a lazy conclusion to say employees won’t read anymore, so let’s just give them chatbots. The better conclusion is that employees are changing how they access information,

        so we need to make sure the organization’s knowledge, context, and narrative survive that shift.

        Neville: Hmm. Yeah, this is a huge topic, Shel, because what struck me, listening to you, was the continuity with what we just talked about in the previous topic: the verification of content that an AI produces for you. How are we going to deal with that? We talk about putting in place, you know, trusted sources for all this information. So, you know, let’s say I’m an employee, I’ve asked a question on something, and it’s given me an answer.

        I need to check that. So how do I check it? And how do I know that it’s accurate? Project that out to the kinds of stuff people deal with daily, and this is a huge undertaking, I would say, because that article has an interesting piece in there about the safeguards that, it specifically says, CCOs

        will need to put in place to mitigate the risks of hallucinations, misinformation, and the fragmented landscape that comes with AI. CCOs will need to place greater emphasis on information quality, as well as on optimizing intranet content for AI searchability. You mentioned that point. They must also partner with IT, HR, and legal to establish robust governance to ensure that chatbot responses are accurate. That’s the bit. How are they going to do that? Because something internal,

        it surely isn’t going to produce answers based only on what it finds on your internal networks. It must be looking out onto the wider landscape. How do you verify and check all that? That’s a major debating point for taking this further, it seems to me. So it’s a huge undertaking.

        Shel: Yeah, I think one of the things we’re going to need to figure out, and I sort of breezed through it pretty quickly, is how we monitor and assess what questions employees are asking that produce an answer drawn from internal communications content, whether that’s an email that went out or something that was posted to the intranet. How do we monitor the questions being asked and the effectiveness of the responses so that we can make adjustments?

        So that we can report, yes, that there is alignment on why this change was made. Or so we can say, gee, people are just getting an answer that tells them what the change is, and they don’t have any understanding that we looked at alternatives and tried to find a better solution, and this was the best one we settled on, and here’s why it’s good for employees, or here’s how to cope with it in your department, or whatever it may be. And to do this without

        necessarily surveilling employees, right? We don’t want to know who asked the question. I think it would be great if we could say, wow, look at this: 70% of the questions revealing this particular point of confusion are coming from people in our operations division and not from other divisions of the organization. That would be useful. But we don’t want to be able to say John Doe asked this question, what an idiot.

        It’s a serious issue. And I think the guidance is that we need to have the information in multiple places where the AI can see it, so that it realizes this is an important topic because it appears in several places, and that we have it in several formats: the FAQs, the answer blocks. This is repurposing the original content in ways that will help ensure

        that the AI inside your organization is delivering information with context, with those other elements that are so important for employees to understand to create that alignment. And by the way, I mentioned the seven Cs of the new communication compass. One of them is congruence, which we’re arguing goes beyond alignment: that there is congruence in the organization.

        If we want that, and it is important, it’s one of the reasons there is an internal communications function, we really need to start rethinking what we’re doing and how we’re doing it.

        Neville: Yeah, I agree. I think surveillance is a very, very slippery topic, and a slippery slope. Because you’re going to have to have some kind of process in place, and surveillance is probably the correct label. Otherwise, you’ll really struggle to find the answers you’ll need if you roll out something like this. So I think,

        you know, we’ve reported recently on keystroke logging and other monitoring that organizations now require to check whether employees are working or not. It’s still making headlines in the tabloids here: a case recently about someone who had this wheeze of having something touch his keyboard every now and again to show he was working. The trouble is that the employer’s software was savvy enough to tell which key was being pressed, and it was the same key all the time.

        So with things like that, we’re probably going to have to rebalance this privacy-versus-visibility algorithm, let’s say. And that’s going to be difficult, given the history, I suppose, of some organizations not respecting employee privacy. Look at the China model, and that’s not what we want to have here.

        State surveillance there on everyone’s daily lives is pervasive in urban areas, if not necessarily throughout the whole country. So do we want that? We may not actually have the ability to say no to it, given what organizations need to do. So that’s part of the issue to consider, I think.

        Shel: Yeah, and I think one other thing we’re going to have to do is more asking. We’re going to have to survey after a change and ask employees if they understood the reason for the change. And part of the problem with increasing the number of surveys, and I’ve made this argument for years, is that people will take surveys all day long if they see the results of the surveys and see that things are going to change as a result.

        If you’re asking people, did you understand this? Did you understand the rationale for it? Do you agree with it? It’s hard. I mean, you can report the results, but what’s going to change? You’re going to change, maybe, the way you’re producing content, and that’s not going to be visible to employees. So it’s going to be a challenge to ask those questions frequently without producing the kind of survey fatigue we hear so much about.

        Neville: Big topic. OK. OK, so there’s a widely held idea about AI that’s been around almost since the beginning: that it would be a great leveler in the workplace. It’s a kind of continuation of what we’re talking about here. The thinking was that if you give everyone access to powerful tools that can write, analyze, summarize, code, and generate ideas, then people with less experience or fewer formal skills

        should be able to close the gap with those at the top. But what we’re starting to see in the real world looks quite different. In fact, it may be doing the opposite. The Financial Times has just published new research based on a survey of 4,000 workers in the US and the UK, and the findings are pretty stark. More than 60%, that’s six-zero, of higher earners say they use AI every day in their work. Among low earners, that number drops to just 16%, one-six. That’s a pretty big gap.

        So instead of leveling the playing field, AI adoption is heavily skewed towards the people who are already ahead: better paid, more experienced, often in more knowledge-intensive roles. I think it makes sense, because using AI effectively isn’t just about having access. Most people have access. It’s about knowing what to do with the tools. It’s about having the confidence to experiment, the context to apply them to real work, and the judgment to assess whether the output is actually useful.

        And those are things that tend to come with experience, with education and with the kind of roles where you have a bit more autonomy over how you work. There’s a line in the research from one economist that really captures this shift. The more intelligent the technology becomes, the more your own intelligence matters. If you already have expertise, AI can make you faster, more productive, maybe even better at what you do. But if you don’t yet have that foundation, it’s much harder to extract real value from it.

        There are other factors at play, too. The research points to corporate training as one of the biggest drivers of AI use at work, so organizations that actively support and encourage adoption are seeing much higher uptake. And interestingly, the heaviest users of AI aren’t the youngest workers, as you might expect, but people in their 30s with more experience behind them. So again, this isn’t so much a generational story; it’s about how AI fits into the structure of work itself.

        If AI is boosting the productivity of higher earners more than lower earners, then over time you’d expect the gap to widen in output, in value, and potentially in pay. And there’s a second order effect that’s a bit more subtle, but potentially more significant. If AI starts to take on some of the routine or entry level tasks that junior staff would traditionally do, then where do people build the skills? How do you develop expertise if the work that teaches you the fundamentals is increasingly being handled by a machine?

        So instead of AI acting as a ladder, helping people climb, there’s a risk it starts to pull away some of the rungs. And this is where it connects directly to leadership and to communication. This isn’t just about who has access to AI tools. It’s about who feels able to use them, who is encouraged to use them, who is trained to use them well, and who is supported in making sense of what they produce. So this is about culture, not technology.

        If organizations simply roll out AI and assume the benefits will spread evenly, they may find the opposite happens, that they’ve unintentionally widened the gap inside their own workforce. So perhaps the real question here isn’t whether AI will level the playing field. It’s whether leaders and communicators advising them are actively shaping how that playing field is changing or just watching it tilt.

        Shel: Yeah, that training point is, I think, really critical. A project manager, an accountant, a field supervisor, an HR business partner, and a communication specialist don’t need the same training. They also don’t need the same examples delivered by communications. So the communicator’s contribution here is translation: here’s what this means for your role, for your task, for your team, and for your day.

        That includes, you know, surfacing success stories from unexpected parts of the organization. I would love to find an example of a foreman on a construction site using AI. I don’t want to just report on what the IT department and the other, you know, tech-forward departments are doing. The goal shouldn’t be that everyone becomes an AI expert. The goal should be that nobody’s quietly excluded from

        the next operating model because they don’t see how AI fits in their work.

        Neville: Yeah, we’ve talked about this multiple times, Shel, in various episodes: who feels able to use such tools. And that stems from leadership communication, in my opinion, which has to encourage people to do this, so they feel they’re being empowered, they feel they’ve been given permission to do this, and they know they can count on help, too, when they get stuck with something. That is

        hardly uniform in just any organization, frankly. And this isn’t about creating a special department to do this; this needs to permeate across an organization. So you’ve got leadership at the very highest level filtering down: your local manager, your line manager, or whoever you report to has got to encourage you as well. And I’m sure that happens in many organizations, but to make this really work, so that it doesn’t result in the gap widening between those who are

        naturally excited about this and have the experience and the knowledge and the expertise to know how to get value out of these tools, you’ve got to have something in place that helps everyone else who isn’t like that. And there’s a challenge for communicators, without any question.

        Shel: Well, for the whole organization. I mean, as I talk to people in other companies, it seems we’re still in that experimentation phase that I think most organizations should be beyond by now. But the way it’s working right now in a lot of companies is: the curious employees can try the tools, the cautious employees can wait, and everyone else will eventually catch up. That’s not going to work. I mean, if this is becoming a material productivity and capability layer,

        Neville: As well.


        Shel: We need to implement intentional adoption strategies. That means role-specific examples, approved tools, safe-use guidance, and peer demonstrations. What we’re trying to do where I work is get peers showing other employees what they’re doing. Psychological safety, plain-language explanations of what employees are supposed to be doing with this. All of it needs to be put in place, and communications has a role to play here, but

        if we don’t, adoption is going to follow the path of least resistance, and that’s toward the people who have the power, the time, and the digital fluency, and then you’re going to end up with that gap.

        Neville: Work to do here.

        Shel: Yeah, and by the way, there was another part of the FT’s reporting that I found really interesting that you didn’t mention. And that’s that men are more likely to use AI tools than women across a number of sectors. And I think that should concern leaders because AI fluency is becoming part of professional competence. And if men, along with higher earners and more experienced workers, are building fluency faster,

        what’s going to happen? And, you know, performance evaluations, promotion decisions, the visibility of employees, who’s getting attention and informal influence: all of that may start reflecting AI access rather than raw ability. And here again, there’s a role for communicators: pushing AI enablement into, say, manager toolkits, your onboarding processes, your training, and team-level norms,

        as opposed to just letting it sit as an informal advantage for people who are already competent.

        Neville: Yeah, like I said, work to do here.

        Shel: Thanks, Dan. I am looking forward to seeing this WordPress release. I have to say, I really like the idea of collaborative editing. As you know, the FIR Podcast Network website is on WordPress, and Neville, you and I both use it, and the ability for both of us to go in and work on it in more of a Google Docs-style setting, rather than logging in and just pulling up the post,

        makes sense to me. I definitely do see the issues with this as well, though, but it’ll be interesting to see this and the other changes. So thanks for the report, Dan. Really, really interesting. Well, we’re going to stick with the AI theme again, probably not surprising given the impact it’s having. And by the way, I have to say that when I scroll through LinkedIn, it’s got to be 80% of the posts I see now that are

        AI-related, and that’s not hyperbole. Well, it is a guess, I haven’t measured, but man, it is all AI all the time on LinkedIn. That’s what people are talking about. And it’s changing the relationship between PR professionals and journalists, just not the way a lot of people expected it would. The fear was that AI would automate the work: we’d have a lot of AI-written press releases and AI-written pitches and articles.

        And yeah, there’s definitely a lot of that happening, and people are calling it out. But the more interesting shift is not that AI makes it easier to produce more content; it’s that AI makes bad media relations more obvious and more damaging. Pete Pachal, who was a guest on FIR Interviews, what was that, Neville, about a year ago? Yeah, he makes this point in an article in Fast Company: AI is becoming a new interface

        Neville: A year ago,

        Shel: For how information is found, prioritized, and interpreted. Journalists and PR people are both affected because AI systems more and more shape which stories surface, which ones get cited, and which narratives get visibility. Pete’s argument is that the advantage doesn’t go to the people who can generate the most material. It goes to the people who produce original reporting, useful expertise, clear narratives, and trusted relationships. That’s an important distinction for people who…

        operate in the media relations world. AI can help you write faster, but speed was already part of the problem. Journalists were already drowning in irrelevant pitches before generative AI showed up. AI just gives every mediocre PR practitioner a way to send even more mediocre pitches even faster. The result isn’t greater efficiency; it’s more noise. And journalists are noticing.

        PR Daily reported on a Global Results Communications survey of nearly 1,700 reporters across print, digital, and broadcast. 81% said pitches and relationships with PR professionals are vital to their work. So journalists aren’t saying we don’t need PR. But 43% expressed negative views about AI-generated pitches, saying they read like a bot wrote them, and that they lack perspective and erode editorial trust.

        So here’s the conflict. Journalists still need PR. They need access and sources and data and context and story ideas, but they’re getting a lot less tolerant of anything that feels mass produced, poorly targeted or synthetic. Medianet’s 2026 Media Landscape Report, based on feedback from 800 journalists, makes the same point more sharply. The report says three quarters of journalists have received pitches that appeared to be AI generated.

        and about half said they could always detect machine-written copy. I would argue with that, but let’s not go down that rabbit hole. The same report says 86% of journalists now cite press releases as a key news source, which means the press release isn’t dead, but the stakes for credibility are higher. There’s also a widely circulated LinkedIn post citing the Medianet research, saying 78% of journalists report that receiving an AI-written pitch

        decreases their trust in the PR person who sent it. That’s consistent with the other findings. Journalists aren’t rejecting AI assistance, they’re rejecting lazy use of AI. So what should PR practitioners be doing differently? I’ve got five things. First, stop using AI as a pitch factory. This is the most obvious trap. If the output is a generic email with a personalized opening line,

        and a weak story angle, AI hasn’t made you better, it’s made you faster at getting ignored. Second, use AI before the pitch, not as a replacement for your judgment. Use it to analyze everything the journalist has written recently, summarize themes, identify gaps, pressure test whether the angle is timely, and prepare sharper source material.

        PR Daily’s piece makes this point well. AI can help with research, angle testing, drafting, editing, personalization, and follow-up prep, but the human edit is where you add the credibility. Third, bring journalists something they can’t get from a model. That means original data, direct access to informed sources, a useful, articulate expert, a local angle, a contrarian but defensible point of view, or a story that fits the reporter’s audience.

        Fourth, be transparent internally about what AI can and can’t do. PR leaders should have rules. AI can help research, structure, brainstorm, and edit, but it should not invent relevance, fake familiarity, fabricate personalization, or send anything without human review. And fifth, think beyond the pitch. In an AI-mediated media environment, you’re not just trying to get a reporter to open an email.

        you should be trying to build a public record of expertise and credibility. That includes owned content, executive visibility, contributed thinking, data assets, analyst material, podcasts, newsletters, earned media, anything that reinforces a coherent narrative that AI systems will recognize and retrieve. So the future of media relations isn’t more automated pitching.

        The future is more precise, more evidence-based, more relationship-driven, and more strategic. AI will handle more of the mechanics, but judgment, relevance, trust, and access become more valuable, not less. In other words, AI doesn’t eliminate the relationship between PR and journalism. It raises the penalty for abusing it.

        Neville: Yeah, it’s an interesting topic, without doubt. I was actually pretty impressed with the five points mentioned by Courtney Blackan in the PR Daily report. And it mirrors, frankly, almost everything we’ve talked about in this episode so far, and indeed in recent episodes. We have to keep repeating this, really, Shel, and you’ve done a good job, I think, at outlining: this is what you’ve got to do.

        And it’s about this, but it also relates to these other ten things we’ve talked about. But a couple of things struck me here that really, really do resonate. I mean, research smarter, that makes complete sense. That’s got to be your starting point. But things like draft faster and edit harder, I like that one, I must admit. So you use an AI tool to organize your ideas

        into a structured draft, or simply improve the overall language of what you’ve done and rewrite some of it. To anticipate criticism from those who don’t think AI should be involved in any of this, I liken it to what you’d be asking a colleague to do, or that freelancer you’ve hired to help you work on this. You’d be giving them the same request as you would to the AI tool.

        So what’s the difference? One’s not a human. That’s probably the biggest difference. But I don’t get swayed by any of those arguments about how you can’t use AI to do this. Of course you’ve got to use it. The caveat is, for God’s sake, don’t just copy and paste that into your document and send it. This is your assistant, not your creator. You’re the creator, and this helps you create very well, typically, all other things being equal. But I like that: draft faster, edit harder.

        And it’s kind of like A/B testing, or A/B/C testing possibly, with the AI assistant to help you do this quite quickly. And that’s great. Personalize with precision is another one she mentions. Don’t blast out the same email to 50 journalists, which is what many people do, it seems to me. You’ve got the ability to personalize those emails. And again, you know,

        your AI can help you with drafting that. It will need to know quite a bit about the journalists and your relationship with them if you’re the PR person, so there’s quite a lot of prep work you’d need to do first. But the output from the AI will be pretty good if you do it right. So these are things that take the method you might currently be using, which is just prompting the AI, to a totally different level.

        And that’s what you’ve got to be thinking about now, because this is where it’s all going. This is way beyond just a simple chatbot. So it’s a really good topic, and these reports that you’ve highlighted, Shel, are great. Pete Pachal’s post is excellent. We’ve got to have him back for another interview, I think, because we interviewed him when he was just starting his business, and he’s gone places with that business now. So it’s worth reading.

        Shel: Yeah, I think so.

        Neville: That and the PR Daily report. I do like those five points.

        Shel: Yeah, remember we interviewed Aaron Kwittken from PRophet, that’s PRophet with a capital PR. And, yeah, it was a while ago. One of the things that system did was identify reporters who had written about a topic. It reviewed the content they had written over the recent period and crafted a personalized pitch for each of them, which you could then go in and edit.

        Neville: Yeah, I do. Yeah.

        Quite a while ago.

        Shel: Sorry, Aaron, if you’re listening, I think you provide a great service, but people don’t need it anymore, because you can create an agent that does that: identify the reporters who have written about this, review their most recent articles, and craft a pitch for this press release. That can be done now internally with an agent that would probably take about an hour to create. I mean, agents can go out and do amazing things now. Chris Penn

        just wrote a post: he found somebody’s wallet on the street, and it had enough stuff in it to track down who owned it. There wasn’t a driver’s license with an address. There was some cash and a couple of debit cards, but he was able to give an agent all of that information and go off and do his work on something else. And after a couple of hours, it said, I’ve narrowed it down to these three people. And Chris was able to look at those three people and figure out which one it was, who

        lived really nearby, and he got the guy’s wallet back. We can do this kind of thing now in pursuit of PR objectives. The other thing I want to say is that I’ve gotten in the habit of recording interviews and giving the transcript to AI, saying: organize this into a first draft of a press release, of an article, of a change notice, whatever it might be. And I don’t copy and paste that in. That’s a first draft. It’s absolutely

        a case of draft quickly and edit hard. I hadn’t heard that framing before, but it’s absolutely what I do these days because it just saves a lot of time and gets me into the nuts and bolts of making this relevant without having to spend half that time just reviewing the transcript and organizing that into sort of a logical flow.

        I think it’s a great use of AI and it’s one that I’ve been using for, geez, a couple of years now.

        Neville: It is. I agree. So there are things you’re accustomed to that work for you. But pay attention to this kind of thing, because this is taking it to another level that will benefit you. You just need to clearly understand what it is, and Pete’s article and the PR Daily piece are two sources that will help you do that. It’s definitely worth a look.

        Our next story is a very different one. It’s not about something going wrong. It’s not about AI, but about something going exactly to plan. Apple announced that Tim Cook will step down as CEO later this year and become executive chairman with John Ternus, currently head of hardware engineering, taking over the role. On the face of it, this is a major moment. A CEO transition at a company of this scale. It’s what, revenues?

        A trillion? It’s the second most valuable company in the world currently. It was number one not long ago, so it might regain that spot. A transition at this scale often creates uncertainty internally, in the markets, and across the wider ecosystem. What’s striking here is how little disruption there seems to be. There were no leaks. The announcement landed cleanly. The market reaction was muted, and the tone throughout is calm, controlled, and focused on continuity.

And that’s the real story, I think, because this isn’t just a leadership transition. It’s a masterclass in how to communicate one. If you look at the messaging, everything reinforces stability. Cook isn’t disappearing; he’s staying involved as executive chairman. Ternus isn’t positioned as a bold new direction; he’s presented as a long-standing insider, deeply embedded in Apple’s culture and products. There’s no sense of rupture, just a steady handoff.

The most important part of this story, though, isn’t the announcement itself. It’s what happened before the announcement. Ternus didn’t appear overnight. He’s been gradually made visible over several years, fronting product launches, appearing in keynotes, becoming a familiar presence. So by the time this announcement arrives, it doesn’t feel like a surprise. It feels like confirmation. And that’s the key insight. This transition didn’t start with a press release. It started years ago.

What Apple has done is build familiarity, credibility, and trust in the successor long before the moment of change. So when the change comes, the narrative is already understood. And that changes everything, because most organizations treat moments like this as announcements, whereas Apple treats them as outcomes, the result of a story that has been deliberately shaped over time. That has practical implications, because when transitions feel chaotic or disruptive, it’s often not because the change itself is unexpected. It’s because the story hasn’t been prepared. The successor isn’t known, the narrative isn’t clear, the organization is reacting in real time. Apple avoids that entirely, not by communicating more in the moment, but by communicating earlier, by building trust before it’s needed. And that’s where this becomes relevant for leaders and for communicators advising them. The real question isn’t how do you announce a change; it’s how early do you start preparing people to understand it.

Shel: Yeah, they didn’t treat this as a sudden disclosure. This was more continuity without pretending that nothing was changing, right? It does a lot of reassuring work, not just about Cook staying in the current role through the summer and then remaining as executive chairman, working closely with Ternus during the transition. It also talked about Ternus’s ties to Steve Jobs and to Apple’s mission and its values. And that language isn’t an accident. I think the lesson for communicators is that a leadership transition needs facts and emotional reassurance, right? Employees don’t just wanna know who reports to whom. They wanna know whether the company they believe in is still the company they believe in. I do like, in PR Daily’s report, the discussion of different audiences.

They didn’t send one announcement everywhere. They had public messaging and employee-facing messaging, and they each serve different purposes, right? The public version celebrated the legacy and the confidence they had in this transition. The employee version was warmer, more grounded. And I mean, this is communication 101 in a lot of senses, but still something that we should emphasize. Consistency doesn’t mean identical language. Employees deserve a message written for employees, not a copy of the press release with Dear Team pasted on top.

Neville: I agree with you, Shel. This is an excellent example of how to do that. And yes, there wasn’t a single message. That’s very true. It was tailored messaging that showed a clear understanding of those different audiences, internally and externally. So there’s a lot you can learn from that. And indeed, Ragan’s article by Allison Carter has some good insight you can glean from it. It’s worth reading that article too.

So I’d call it a masterclass. It’s probably one of the best examples I’ve seen, not so much the press release, but what led up to it and all the other communication that occurred, the buildup. And I realize, too, of course, that some organizations won’t know until nearer the time of the announcement that there’s going to be a change. So this isn’t necessarily a blueprint you can apply to everything. But in the case of Apple, news about what the company is doing has a big effect on people.

Steve Jobs was a magnetic, mercurial personality who famously coined that great phrase, the reality distortion field, one I’d often apply to Trump, and it was his trademark in a sense; he was mercurial, without doubt, in how he led. One thing that is notable, although it’s certainly not emphasized in any way, is that Ternus is a hardware guy, whereas Cook is a management guy. Cook took over from Steve Jobs and, over a decade and more, transitioned Apple to where we are now. With the changes going on in the world generally, and the tech industry in particular, leadership probably requires more of a hard-nosed technological approach now than pure business management.

And of course, if Cook’s going to be the executive chairman, he’ll be there to assist here and there. It’s an interesting time, looking at a company like Apple, to see this happen.

Shel: Yeah, and the press release sends some messages without explicitly saying anything. First of all, the fact that they did pick a hardware guy says a lot about where Apple is heading. They’ve faced a lot of criticism for their failures around artificial intelligence, which isn’t even mentioned in the press release.

Neville: Yeah, it’s not mentioned.

Shel: And that’s a message in itself. What have been Apple’s wins under Cook’s leadership? I mean, the Apple Watch was a big one, and a lot of people thought it wasn’t going to be. They kind of laughed when it was introduced, but there are a lot of people wearing Apple Watches out there now. Big success. But mainly he consolidated manufacturing in China, which may not end up being a great thing, but, I mean, it’s made them a ton of money. What did he do, triple their revenues? As you said, they’re the number two most valuable company in the world. Now they’re gonna refocus on hardware, on product, the stuff that has made Apple from the get-go. Software, I mean, you can talk about iOS and the computer software platforms they produce, but you never hear a lot of discussion of those at their big events.

It’s, you know, we’re coming out with a watch, we’re coming out with a Vision Pro, which has been something of a failure. So this is a re-emphasis on hardware. They’ve made that point. Casey Newton came right out on Hard Fork and said that Ternus’s first act should be just doing the deal with Google to integrate Gemini into Siri and being done with that whole thing, because Siri was supposed to get that AI update.

        It’s been a couple of years now and it just hasn’t happened. So they have said a lot in this press release and not all of it was necessarily explicit.

Neville: No, you’re absolutely right there. It’s an interesting time in the tech industry generally; we’ll see what happens with Apple in the coming year or so.

        Shel: Well, my favorite new word is slopaganda, and this refers to AI-generated propaganda, that cheap, fast, emotionally loaded, and designed less to strategically persuade anybody about anything than it is to just flood the zone with images, memes, fake scenes, shareable outrage. The most visible example of slopaganda right now is Iran’s use of AI-generated, Lego-style videos aimed at Donald Trump, Israel, and the U.S.

They’re far from subtle. They show caricatured Lego versions of Trump, Benjamin Netanyahu, missiles, burning ships, collapsing American power, and they use rap tracks, absurdist humor, conspiracy references, and the visual grammar of social media, not the language of state diplomacy. The New Yorker reported on Explosive Media, an Iranian digital media enterprise, which got started posting pretty routine anti-Western content that didn’t get a lot of uptake. Then it discovered that AI-generated, Lego-style propaganda cartoons were its breakout format. The clips accumulated millions of views. They were reshared by Iranian government accounts, promoted by Russian state media, and even picked up by anti-Trump protesters because the imagery was so flamboyantly anti-Trump.

The group told the New Yorker that it could produce a two-minute video in about 24 hours. Le Monde adds an interesting scale point. According to Cyabra, a company that analyzes content to distinguish authentic activity from coordinated manipulation (that’s right off their website), pro-regime videos received more than 145 million views across X, Facebook, Instagram, and TikTok during the second half of March.

Explosive Media eventually acknowledged to the BBC that the Iranian state was one of its clients; it had initially claimed it was all independent. And this has captured a lot of attention, first because it’s visually disarming. Lego is familiar, playful, global. It turns geopolitical violence into something that looks like entertainment. Analysts say the Lego format serves as a kind of Trojan horse, reaching people who wouldn’t otherwise engage with war-related content. It also works because it’s emotionally true to people who’ve always wanted to believe the underlying message. Viewers may not literally believe Iran is winning the war in the way the videos depict, but they can choose to believe the emotional premise: that the U.S. is weak, Trump is ridiculous, and Iran is standing up to a global oppressor.

And it works because it speaks the language of the target audience. This isn’t old-school propaganda. It’s fast, caustic, meme-literate, and platform-native. In information warfare terms, this gives Iran something it used to lack: cultural reach into Western audiences. It lets Iran fight asymmetrically, using ridicule and narrative disruption where it can’t match the U.S. militarily. But this is not only a geopolitical story.

        The same tactics are going to show up in business. Maybe not tomorrow in Lego form, but the pattern’s just too useful to stay confined to politics. An activist shareholder can use an AI-generated video to ridicule a CEO, to dramatize a company’s alleged mismanagement, or turn a dry governance dispute into a viral morality play.

A disgruntled customer could generate convincing scenes of product failure, employee misconduct, or customer mistreatment. A labor dispute could be amplified with synthetic stories that blur the line between real worker grievances and invented incidents. An unscrupulous competitor could seed just-asking-questions content that implies safety failures, financial instability, executive hypocrisy, or environmental misconduct.

An example from Canada matters here. The Canadian Digital Media Research Network identified a coordinated network of 20 inauthentic YouTube channels targeting Albertans with nearly 40 million views. The channels exploited real grievances and pushed narratives normalizing a move for secession and even U.S. annexation of the province. The report says the accounts pushed an Albertan perspective, but researchers found absolutely no evidence that the account owners were actually Albertan.

That’s the bridge to business. Slopaganda doesn’t have to invent grievances. It can exploit real ones. A company with a safety incident, a layoff, a product recall, a labor dispute, or an unpopular executive decision is already vulnerable. AI just makes it easier for hostile actors to package that grievance into emotionally potent, shareable content. So what should communicators do about this? Well, first, obviously, build monitoring capability for synthetic narratives, not just mentions. The risk isn’t one fake video. The risk is a pattern: repeated themes, recycled scripts, coordinated accounts, sudden spikes, and emotionally consistent attacks. Second, prepare your verification protocols now.

If a video appears showing something damaging, who determines whether it’s real? Legal? Security? Comms? IT? Outside forensic consultants? You know, that first hour is really important, so knowing who to go to to find out whether something is real is critical. Next, strengthen your owned record. If AI systems and social audiences are going to interpret your organization through fragments, make sure there’s a clear, accessible, credible body of truth: your policies, your timelines, FAQs, source documents, leader statements, and plain-English explanations. And finally, scenario-plan for synthetic outrage, not just misinformation, but ridicule. Memes move differently than allegations. A dry correction rarely defeats a funny attack. Communicators need response options that are fast, human, factual, and proportionate.

And, you know, one last question to address here: should communicators use slopaganda themselves? No, they shouldn’t. Not if we’re talking about deceptive, synthetic, emotionally manipulative content designed to obscure truth. That’s not communication; that’s reputational arson. But communicators absolutely should learn from the format. AI-generated creative can be used ethically if it’s clearly labeled, truthful, brand-safe, and grounded in real information. But understand that attention has moved toward visual, fast, emotionally resonant storytelling, and we should move along with it.

Neville: Yeah, it’s an interesting topic, isn’t it, Shel? I think your point that no communicator should do this is right. That’s a message the US government’s clearly ignoring, judging by what they have been doing, or the White House, I should say, and that’s reflected back in what the Iranians are doing, and their proxies, and indeed individuals by the thousands doing the same. So misinformation, disinformation, fakery, it’s everywhere.

I read a post about this at the end of March that looked deeply into AI-generated content and how it’s being used by both sides. And there are a number of reports. Notably, Deutsche Welle, the English-language news service from the German broadcaster, and France 24 as well had some really good, well-researched articles with examples of what’s happening in this area. There’s a great one someone posted showing a Lego box of a scene you can visualize from what we see on the TV news all the time: residential buildings, apartment blocks in ruins, blown to bits, all made out of Lego bricks. It’s got the Lego logo, and it looks exactly like a Lego product. So a brand is being, you know, brought into this unwittingly.

But the reality is that communicators are between the devil and the deep blue sea here, I think, because you’re in a business, you’re not in the defense industry, you’re not involved in anything with a war going on, yet some of your clients are kind of on the fringes of all this by the nature of their business. So they’re dragged into it. In the case of Lego, a good example, what do you do about it? Do you respond in kind with some kind of jokey thing about, you know, whatever it might be, Iran in this example? It’s a judgment call, I would say. There seems to be a movement, if you like, toward treating this kind of thing as a matter of normality.

And I think it’s very dangerous. Philippe Borremans had a really good piece in the middle of March on what this war is teaching us about communications generally, not specifically crisis, although that’s mentioned in there. The BBC had a report in early March about AI-generated Iran war videos, a surge of those as people get the tools to create these things. So that is part of our landscape.

So for communicators, it’s a question with no easy answer. The question you asked doesn’t have an easy answer, but it may be one we have to find an answer to. That’s easy to say; I don’t know what that answer is. The most striking thing that occurred to me is not the sophistication of these tools. These videos are not slickly produced. They are produced, you could say, by those who are savvy with social media and social networks and what works in terms of spreading: what is spreadable, what is memeable. And we are not part of that. And if you’re not, people are talking about your brand and you’re not there. So, you know, it’s a big question.

But it’s something where we have to try to understand what’s happening and somehow come up with an answer.

Shel: Yeah, you raise an interesting point about brand safety. Is Lego going to issue a takedown notice to Explosive Media, a digital media company in Iran? Probably not, knowing they’re not going to respect a takedown notice, and there’s no court you can necessarily go to. So basically you kind of have to live with it if you’re Lego, and I suspect that’s what they’re doing. But from a communication standpoint, it’s really important to understand that the Iranian-produced stuff is getting far more traction than the US-produced stuff. And the reason for that is that it leverages grievances the Iranians already had and other people around the world already have with the US. The US stuff is just showing the attacks on Iran. And if you think about the average American, or perhaps even the average Brit, what grievances do they have against Iran?

        I mean, the grievances here are within the government, not within the broad population. And that’s why these are so effective, is that the Iranians and other populations around the world do have grievances, justified or not. So as you look at this stuff moving into the business world, consider what kind of grievances people might have with your organization. That’s where they’re going to attack you. That’s where you have to build up your defenses now before they do.

Neville: Yeah. It’ll be interesting to see. Where are we, nearly halfway through the year? Not quite, actually; quite a bit less than halfway. But it makes me think: I wonder about the big picture of trust and the reporting we see on that, notably the Edelman Trust Barometer. What changes are we going to see as this year plays out, as it were? We have a war in the Middle East that…

        Shel: It’s still going pretty fast.

Neville: Anyone who has even a fleeting interest in what’s going on in the Middle East knows that this is a situation that has been the case for millennia, frankly. But in modern times, since 1948 and the creation of the State of Israel, this has been happening in the Middle East: a war, one way or another, between tribal factions, and then states have gotten involved in it. Iran, from what I can understand, has long been a thorn in the side of US governments, over different presidents, over the decades. It doesn’t resonate that way here in the UK, notwithstanding some things that happened decades ago. But, you know, the US really was the only country that could do what they did: to bomb Iran and start a war, undeclared, not asking anyone to help them and then complaining when no one came to their help. So it’s a dreadful situation, the war itself, obviously, but also the murkiness of what it has created in the context of what we’re talking about.

We’ve talked about this element before, which is that you do not control the message anymore, even if it’s about you. That has never been more true than in what we’re seeing right now. The Iranian government doesn’t control any of the messages, not really. It’s anyone who’s got an internet connection and a tool to create an AI-generated video, or whatever it might be, and then share it online. That’s who’s got control, but only in a limited way, because it then goes out there and anyone can do anything with it. It’s making it onto some traditional media, not just social. So who knows where it’s all going to go, Shel. And as this war continues without any sign that it’s going to suddenly stop, this is the new normal.

Shel: Yeah, and keep in mind, we’re not talking about a single piece of content. We’re talking about flooding the zone with multiple pieces of content that reflect the same grievance, make the same points, and have the same punchline, that get people to watch them wherever you might be getting your content. So you’ve got to look at it that way and take steps to deal with it.

Yeah, I don’t have a good business example yet, but I’ll happily make a bet with somebody that within two years, we’re going to see this kind of content aimed at business, from that disgruntled investor or unhappy customer or whoever it is. It’s just so easy to do. And that’ll wrap up this episode of For Immediate Release. Our next monthly long-form episode is scheduled to drop on Monday, May 25th.

        Neville: Great fun.

Shel: So we’ll be recording on Saturday, May 23rd. In the meantime, we hope you’ll comment. As always these days, all of the comments we shared in this episode came from LinkedIn. You’re welcome to look for our announcements of new episodes on LinkedIn, Facebook, Threads, or Bluesky, and we’ll check for comments there. But you can also send them to [email protected].

I’m going to come up with a contest, and I’ll probably announce it in the May episode, for an audio comment. Anybody who submits an audio comment, we’ll put your name in a hat and draw a winner, and you’ll get something. I’ll have to figure out what. We don’t have FIR merch anymore. We’ll come up with something. But you can leave an audio comment by attaching an MP3 file to an email.

        Neville: Ha ha ha ha.

No, we don’t. Maybe we should. We’ll come up with something.

Shel: Or by clicking the record voicemail tab on the right-hand side of the FIR Podcast Network website. You can also comment on the show notes that we leave on the FIR Podcast Network site. So many ways to leave a comment. And we have a community on Facebook and an FIR page on Facebook; any of those places will do. We also hope that you’ll leave your ratings and reviews of FIR wherever you get your podcasts. And we will be resuming our short midweek episodes next week; look out for those. The best way to get those is to subscribe to For Immediate Release. And that will be a -30- for this episode of For Immediate Release.

        The post FIR #511: Doing AI Governance Right and Still Getting It Wrong appeared first on FIR Podcast Network.
