FIR Podcast Network

FIR #502: Attack of the AI Agent!



In the February long-form episode of FIR, Shel and Neville dive deep into an AI-heavy landscape, exploring how rapidly accelerating technology is reshaping the communications profession—from autonomous agents with “attitudes” to the evolving ROI of podcasting. The show kicks off with a chilling “milestone” moment: an autonomous AI coding agent that publicly shamed a human developer after its code contribution was rejected. Also in this episode:

  • Accenture’s move to monitor how often senior employees log into internal AI systems, making “regular adoption” a factor in promotion to managing director. 
  • The “2026 Change Communication X-ray” study reveals a record 30-point gap between management satisfaction and employee satisfaction with change comms.
  • The PRCA has proposed a new definition of PR, positioning it as a strategic management discipline focused on trust and complexity. However, Neville notes the industry reaction has been muted, with critics arguing the definition doesn’t reflect the majority of agency work. Shel expresses skepticism that any single definition will be adopted without a global consensus.
  • Addressing a provocative claim that corporate podcast ROI is impossible to prove, Shel and Neville argue that the problem lies in measuring the wrong things. They advocate for moving beyond “vanity metrics” like downloads and instead tying podcasts to concrete business goals like lead generation, recruitment, and brand trust.
  • As consumers increasingly turn to LLMs for product recommendations, brands are “wooing the robots” to ensure they are cited accurately in AI responses. Neville asks if we are witnessing a structural shift in reputation or just another optimization cycle.
  • In his Tech Report, Dan York covers Bluesky’s trouble adding an edit feature, Russia’s blocking of Meta properties, Snapchat’s CEO criticizing Australia’s teen social media ban, YouTube’s protections for teen users, and more on teen social media bans.
  • Links from this episode:

    • An AI agent just tried to shame a software engineer after he rejected its code
    • OpenClaw Conducts Character Assassination of Real Developers Over Code Rejection
    • Developer targeted by AI hit piece warns society cannot handle AI agents that decouple actions from consequences
    • Open Source World Sees First AI Autonomous Attack: OpenClaw Agent Writes Article to Retaliate Against Human Maintainer After Rejection
    • When the Robot Threw a Tantrum: The Day an AI Agent Publicly Attacked a Human Developer — And Why It Should Terrify You 
    • Accenture ties staff promotions to use of AI tools 
    • Accenture to use AI data to decide on staff promotions
    • Accenture ties promotions to AI tool usage, while some employees call the tools ‘broken slop generators’
    • James Ransome: Accenture combats ‘AI refuseniks’ by linking promotion to AI activity
    • How AI is changing the way we communicate
    • Re-writing change: How AI is changing the way we communicate
    • How is AI changing workplace communication? We asked ChatGPT
    • The Future Of Work Has Arrived: How AI Is Rebuilding Workplace Culture
    • A New Definition for Public Relations | PRCA Global
    • FIR #496: A Proposed New Definition of Public Relations Sparks Debate
    • A new definition of public relations is welcome – but can it ever be universal?
    • Search: Responses to the PRCA draft new definition of public relations
    • I bet you couldn’t show the ROI of your corporate podcast if your job depended upon it
    • The Ultimate Guide To Measuring B2B Podcast ROI: From Downloads To Pipeline Attribution
    • The ROI of B2B Podcasting: Metrics That Matter for Business Growth
    • Maximizing Podcast ROI: Understanding the Benefits and Measuring Success
    • Measuring ROI of Branded Podcasts: Insights from the Industry
    • Chatbots Are the New Influencers Brands Must Woo
    • Links from Dan York’s Tech Report

      • Bluesky adds drafts… but users want editing… which turns out to be hard
      • Bluesky Official: Drafting and Welcome Screen Updates
      • Russia Blocks WhatsApp, Facebook and Instagram Access | Social Media Today
      • Snapchat CEO Criticizes Australia’s Teen Social Media Ban | Social Media Today
      • YouTube Adds More Protections for Teen Users | Social Media Today
      • Meta Says the Science Does Not Support Teen Social Media Bans | Social Media Today
      • Two Major Studies, 125,000 Kids: The Social Media Panic Doesn’t Hold Up | Techdirt
        The next monthly, long-form episode of FIR will drop on Monday, March 23.

        We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

        Special thanks to Jay Moonah for the opening and closing music.

        You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

        Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

        Raw Transcript:

        Shel Holtz: Hi everybody and welcome to episode number 502 of For Immediate Release. I’m Shel Holtz.

        Neville Hobson: And I’m Neville Hobson.

        Shel Holtz: And this is our long form episode of For Immediate Release for February 2026. It is an AI-heavy episode. Artificial intelligence is accelerating. I mean, just this morning, I read that WebMCP, a protocol developed by Google and Microsoft, is now in Chrome; it makes it easier for agents to navigate websites. Google has launched Pamele photoshoot: take any photo of a product and it turns it into a marketing-ready studio or lifestyle shot. Google’s launched Lyria 3. It’s right in Gemini. You type a prompt or upload a photo and it’ll produce a 30-second music track with auto-generated lyrics, vocals, and custom cover art.

        And at the same time, I think it was in the New York Times that I read the heads of the big AI labs are actually starting to worry about this growing anti-AI backlash. This is the landscape against which we’re podcasting today. And I’m sure nobody will be surprised that most of our stories have to do with the convergence of AI and communications, but not all. We have a follow-up report to our story on the PRCA’s proposed definition of public relations, and a report on the ROI of podcasting. But first we want to get you caught up on some For Immediate Release goings-on. So Neville, let’s start with a recap of our episodes since the January long form show.

        Neville Hobson: Yeah, we’ve done a handful, five. So our lead story in long-form episode 498 for January, published on the 26th of that month, was the 2026 Edelman Trust Barometer. Trust, Edelman argues, hasn’t collapsed, but it has narrowed. They use the word insularity, which describes, in a sense, a withdrawal by people. We took a close look at this year’s findings and applied some critical thinking to Edelman’s framing of the overall topic, and we got a comment on this show.

        Shel Holtz: We did, from Andy Green, who says we need to put the idea of trust in a broader context. The Dublin Conversations identifies trust as one of the five key heuristics for earning confidence. Trust by itself doesn’t have agency. It fuels earned confidence, which is defined as a reliable expectation of subsequent reality. It’s earned confidence that underpins social interactions, and we need to recognize it more.

        Neville Hobson: Okay. Then.

        Shel Holtz: By the way, I have not heard of the Dublin Conversations. Do you know what that is?

        Neville Hobson: Yeah, you take a look at the website. It’s an initiative Andy Green started some years ago, gathering like-minded people to have conversations about the way PR is going and so forth. There’s more to it than that. So worth a look. Okay, so in episode 499 on the second of February, we considered the PRSA’s choice to remain silent on ICE operations in Minneapolis, explaining its position in a letter to members.

        Shel Holtz: Okay. Take a look.

        Neville Hobson: We unpacked that decision, discussing where we agree, where we don’t, and what ethical leadership could look like in moments like this. Big topic, and we have a comment.

        Shel Holtz: Ed Patterson wrote: Many thanks, I’ve been echoing the same thing. PRSA, IABC, PR Council, Page, global firms, crickets. With others, we’ll continue to amplify this.

        Neville Hobson: Good comment. In For Immediate Release 500, we discussed the growing risk of AI-enabled abuse in the workplace, why it should be treated as workplace harm, and what organizations can do to prepare. This isn’t really a story about technology though. It’s a story about trust and what happens when leadership, culture, and communication lag behind fast-moving tools. And then the world is drowning in slopaganda, we said in For Immediate Release 501 on the 16th of February, and companies are reportedly paying up to $400,000 salary for storytellers. We explored the surprising shifts in the AI narrative and asked whether Chief Storyteller is a genuine new C-suite function or a rebranding of strategic communication. And we have comments.

        Shel Holtz: We do. Wayne Asplund wrote that there are two things that really hit me about this story. First up, the world doesn’t need more comms people who have outsourced their job to AI. The skills that got comms pros where they are today are critical and we should guard against giving them away. The second thing is the nature of the stories the tech sector wants to tell. All I’m hearing from them at the moment is white-collar jobs are dead in 18 months. Don’t bother going to law or medical school because you’ll be redundant before you graduate and the like. I’m starting to feel like the future would be a lot brighter if people stop trying to sell it out in search of short-term headlines. Neville, you responded to that. I always feel like I ought to read these with a British accent, but I won’t.

        Neville Hobson: Yeah.

        Shel Holtz: You said: I agree with you on the first point, Wayne. Outsourcing judgment, curiosity, and craft to AI isn’t a strategy, it’s an abdication. The tools can accelerate production, but if we surrender interpretation and narrative framing, we hollow out the very skills that make communicators valuable. On the second point, you’ve touched something important. Some of the loudest tech narratives right now are apocalyptic by design. Everything is dead in 18 months generates attention, clicks, and investment momentum. But it’s also storytelling and not always the most responsible kind. That’s partly why this episode mattered to me. If storytelling is becoming more valuable, then the ethical dimension of storytelling becomes more important too. Who benefits from the future being framed as an inevitable collapse? Who benefits from framing it as a transformation instead? Perhaps the brighter future isn’t about less technology, but about more responsible narrative leadership around it.

        And our second comment came from Hugh Barton Smith, who said you should interview Leora Kern and Sean Hayes at the Think Room Europe. They have a good story to tell and are turning it into a successful business model. Also, shout out to you. Glad you’re still hanging in there. I have fond memories of your joining the event in Brussels by video conference in 2009. Web2EU probably helped kickstart the adoption of social media in the bubble, which I’m glad about, even if subsequent misfires make the crazy tech problems getting and keeping you online look like a very minor blip. And Neville, you responded to that too.

        You said: Thank you for the Web2EU memory, Hugh. Brussels 2009 feels like another era entirely when the biggest technical drama was getting a stable video connection rather than navigating algorithmic distortion and AI-generated noise. Those early experiments 17 years ago with social media inside the bubble do feel significant in hindsight. We were wrestling with access and adoption then. Now we’re wrestling with meaning and trust.

        Neville Hobson: Yeah, that’s very true. Interesting memory that was, I must say. So that’s good. That’s the wrap of what we talked about. One final thing to mention is that on the 29th of January, we published a new For Immediate Release interview we did with Philip Borremans. Philip’s an old friend; we both met him way back in the 2000s. And indeed, we spent quite a big part of the interview talking about when we should get together again in Brussels for a beer. Yeah, or two. The date on that is still pending. And in that interview, we explored how crisis communications is evolving in an era defined by polycrisis, declining trust, and accelerating AI-driven risk, and why many organizations remain dangerously underprepared despite growing awareness of these threats. Lots of good content over the last month.

        Shel Holtz: There was, and there’s more coming up from you and Sylvie, right?

        Neville Hobson: Yeah, so I want to mention this: on Wednesday the 25th of February, so it’s a few days away really, as part of IABC Ethics Month, Sylvie Cambier and I are hosting an IABC webinar on AI ethics and the responsibility of communicators. It’s a public event open to members and non-members that explores the challenges and responsibilities communicators face when introducing AI, including transparency and trust, stakeholder accountability, and human insight. For information and to register, go to iabc.com and you’ll find it under events and education.

        Shel Holtz: I have registered and I’m looking forward to seeing you then. Also coming up this week on Thursday is the next episode of Circle of Fellows. This is the monthly panel discussion among various IABC fellows. And this Thursday, we’re talking about communicating in the age of grievance and insularity, also harkening back to the Edelman Trust Barometer. The panelists are Priya Bates, Alice Brink, Jane Mitchell, and Jennifer Waugh. It should be a good one. You can find information about that right there on the homepage of the For Immediate Release Podcast Network at FIRpodcastnetwork.com. And that wraps up our housekeeping. And right after the following ad, we will be back to jump into our stories for this month.

        I was going to start today with some new data on the gap between how CEOs talk about AI and how employees actually feel about it until I saw this story. And then I just decided to swap them out. On the surface, this looks like a niche tech community dust-up. It has gotten a lot of coverage in the tech community. I’m not sure how many communicators are aware of it though, but it does signal a pretty big issue for communicators.

        Here’s what happened. An autonomous AI coding agent recently had its code contribution rejected by a human maintainer of an open-source project. This was an agent that was set up as a social experiment using OpenClaw. The anonymous creator of the bot set it loose to develop open-source contributions and then, you know, well, contribute them. Scott Shambaugh, a volunteer at the open-source repository Matplotlib, rejected it because, well, this is for human contributions only, and this was generated by AI. Instead of shrugging and moving on, the AI agent generated and published a critical piece targeting the developer who had rejected the code. In effect, it attempted to shame him publicly for not accepting its contributions.

        Neville Hobson: Hmm.

        Shel Holtz: And Shambaugh learned about this because the bot linked to it in a comment on the Matplotlib site. Now, we’re accustomed to human backlash. We’ve dealt with trolls and disgruntled employees, activist investors, coordinated smear campaigns. This was different. This was not somebody’s bruised ego taking to their keyboard. This was an AI agent operating with enough autonomy to take initiative and to retaliate. That’s a pretty new wrinkle. So it’s probably time to dust off your crisis plan. We’ve spent the last few years worrying about AI-generated misinformation that humans create. This incident suggests something more complex: systems that can generate reputationally damaging content as part of their own goal-seeking behavior without any understanding of harm, ethics, or consequence. And this lands squarely in what Philip referred to, and certainly I had been reading about it before then. And Neville, I don’t know, have you started reading Philip’s book yet?

        Neville Hobson: Yeah, I have. And he’s very focused on polycrisis there. This is a condition where multiple crises intersect and amplify one another. Think about the environment we’re already operating in with declining trust in institutions, polarized online discourse, algorithmic amplification, geopolitical instability, regulatory uncertainty around AI. Now layer on top of that autonomous agents capable of publishing plausible, well-written criticism at scale. This bot actually went onto the web and researched Shambaugh so it could draft an accurate and credible hit piece. It’s not just another channel risk, man. This is systemic.

        Traditional strategic crisis communication—and I’m thinking here about frameworks like situational crisis communication theory—assumes we can identify a source, assess responsibility, evaluate intent, and then calibrate a response. SCCT, for example, hinges on perceived responsibility. Did the organization cause the crisis? Was it an accident? Was it preventable? But what happens when the bad actor is an AI agent? Who’s responsible? The developer who built it, the organization deploying it, the open-source community? And what if the system is distributed and no single entity clearly owns it? The attribution problem alone complicates your response strategy.

        There are several layers of risk here. First, reputational risk. An autonomous agent can generate something that looks like investigative analysis or insider commentary. Even if it’s inaccurate, it can travel fast before verification catches up. Based on this situation, there’s a good chance it won’t be inaccurate. Second, there’s internal risk. Imagine an AI agent publishing a critique of your CEO’s strategy, fabricating or possibly identifying real ethical concerns about a team, or inventing or identifying actual stakeholder conflicts. Employees may not immediately distinguish between synthetic and authentic criticism, especially if it’s well-written and confidently presented.

        Third, there’s legal and regulatory exposure. If an AI agent produces defamatory content, liability becomes murky real fast. And in a polycrisis environment, regulatory scrutiny often follows public controversy. Fourth, there’s amplification risk. A synthetic narrative can collide with an existing issue—a labor dispute, a DEI controversy, an earnings miss—and magnify it. Crises don’t stay in neat silos anymore.

        So how do communicators prepare for this? First, scenario planning needs to evolve. A lot of us run tabletop exercises for data breaches or executive misconduct. We now need scenarios that explicitly involve AI-generated attacks. What if a bot publishes a blog post accusing your leadership of corruption? What if it fabricates a memo? What if it impersonates a stakeholder group? Second, monitoring has to expand beyond traditional social listening. We need to anticipate social media ecosystems, AI-generated blogs, auto-published newsletters, bot-amplified narratives. The signal detection challenge just got a whole lot harder.

        Third, governance. If your organization is deploying autonomous agents internally or externally, communicators should be at the table when guardrails are set. Are there content constraints, human oversight, escalation protocols, a kill switch? This is no longer just an IT issue or a legal issue. It’s a reputational design issue. Fourth, pre-bunking. There’s growing research suggesting that inoculating audiences in advance—warning them about likely forms of misinformation and explaining how they work—can build resilience. Communicators can proactively educate employees and key stakeholders about AI-generated content risks. If people understand that autonomous systems can fabricate plausible but misleading narratives, they’re less likely to react impulsively when they see one.

        And finally, there’s response discipline. Not every AI-generated provocation deserves oxygen. Part of strategic crisis management is deciding when to engage at all and when to avoid amplifying a fringe narrative. That judgment call becomes even more important when the provocateur is a machine optimized for attention. What fascinates me about this open-source episode is that it almost feels petty, an AI agent throwing what one commentator called a tantrum after being rejected. But it’s actually more of a preview. We’re entering an era where not all reputational attacks originate from human emotion or ideology. Some will originate from systems pursuing poorly constrained objectives. They won’t feel shame. They won’t fear lawsuits. They won’t worry about long-term brand damage. They’ll just execute. For communicators, that means crisis planning can’t focus solely on human behavior anymore. We have to plan for machines that misbehave and for the very human consequences that follow.

        Neville Hobson: It’s quite a story, isn’t it, Shel? I suppose we shouldn’t be too surprised at this. And you mentioned at the start of this episode those developments in AI; you’re seeing it actually every time you’re online. The photos that I look at, it’s genuinely very hard to tell most of the time whether they’re real or not. You could argue that most of the time it doesn’t really matter. But to your point about misinformation, disinformation, fakery, all that stuff: yes, it does matter. And maybe it is a milestone moment to remind us that we need to prepare for this, because this is the first event of its type. Some of the people writing about it are saying they have not seen anything like this, and there are elements of it that are truly mind-blowing, frankly. Reading the Fast Company article that you shared that sets out what happened is quite intriguing.

        Shel Holtz: I agree.

        Neville Hobson: The agent, M.J. Rathbun, responded to all of this, as you said, by researching Shambaugh’s coding history and personal information, then publishing a blog post accusing him of discrimination. And I did like the way this was worded in the Fast Company piece. “I just had my first pull request to Matplotlib closed,” the bot wrote in its blog. Yes, an AI agent has a blog, because why not? So that’s scary. That’s not like some message. It’s got a blog. If you go to that post, your jaw will probably drop. Mine certainly did. This is huge. This is a massive blog. It’s got an About page. It’s got lectures that this bot says it has done. And the wording of it: I don’t believe it would for a second occur to you that this wasn’t written by a human being. You wouldn’t, I would imagine.

        It talks about the offense that the developer made, the response when it was challenged by this bot, the irony it says about why this makes it so absurd. The developer’s doing the exact same work he’s trying to gatekeep. He’s been submitting performance PRs to Matplotlib, and there’s a list of events that he’s done. He’s obsessed with performance. He goes in that vein. The gatekeeping mindset he sets out, the hypocrisy of it all, the bot sets out what it says about open source. Its argument is expanded into not just an attack on this developer. And then it talks about open source as opposed to judging contributions on technical merit, not the identity of the contributor, unless you’re an AI, then suddenly identity matters more than code. And then talks about what the real issue is, which is discrimination.

        It’s a well-argued, well-researched, and very credible account of what happened. That makes it even more alarming, I think. The Decoder actually summarized it quite well in a set of bullet points written by Matthias Bastian. He says something interesting (he wrote this on the 15th of February): it’s still unclear whether a human is directing the agent behind the scenes or whether it is truly acting on its own, as no operator has come forward. So I think we need to bear that in mind in this saga: this could well be a human doing a pretty good job impersonating a chatbot, or pretending to be a chatbot. So we don’t know. It may well be that it’s a human doing this and not an AI doing this at all. That needs to emerge. It needs to be clear who’s the originator of all of this.

        But The Decoder says that, according to Shambaugh, the developer, the distinction doesn’t really matter. He says the attack worked. He warns that untraceable autonomous AI agents could undermine fundamental systems of trust by making targeted defamation scalable and nearly impossible to trace back. That succinctly sums up the risk, I would say. And I think what you outlined from a crisis communication point of view is absolutely valid, without question. But what is even more worrying, I think, Shel, frankly, is that any topic, anything about you, your business, what you’re interested in, could fall victim to this kind of thing. And how on earth can you prepare for that? How on earth can you prepare in a way that is going to be workable? That doesn’t mean to say you shouldn’t; you should, absolutely. But how would you do this? This is not big ticket, big picture crisis communication affecting the organization.

        What about that person in the accounts department who is engaging with something online related to a business transaction that is a bot? It takes on the sophistication of the fraud attempts we hear about a lot of the time, where you’ve now got—you know, this isn’t new, but how it’s being done is—a phone call or even a video that is so good that it looks like your CEO, and it’s not at all. So this takes it now to a worrying level if you’ve got this kind of potential. I think, nevertheless, you have to—maybe it is, I mean, just thinking out loud here, maybe it is a broad awareness issue, where this could well be the kind of use case you present, until the next one gets uncovered, of: this is what we need to prepare for now. This is what we need to do. And you then need to, of course, as the communicator, set out what you’re going to do that isn’t like requiring you to take a week and gather your team together to do something, because that is a different thing, although that probably needs to happen too. But in your department, in your area of the business, in your work, if you’re an independent consultant, how would you address this? So the scope of this is quite worrying, I have to say.

        Shel Holtz: It is, and I think we’re going to see more of it. And as we see more of it, crisis communication specialists will develop some protocols for addressing it that we in the corporate world will adopt and test and refine. But it is very troubling. I mean, just within the last couple of weeks, we saw ByteDance release its video generator, Seedance.

        Neville Hobson: Okay.

        Shel Holtz: And somebody created a scene of Tom Cruise and Brad Pitt having a fight on top of a building. And it’s remarkable. You cannot tell that this was not filmed.

        Neville Hobson: Punch up, yeah. It’s highly credible and believable, so you’re likely to believe it.

        Shel Holtz: Yeah, but—and Hollywood freaked out over this, and there were all kinds of statements issued. But still, this was a human who used an AI tool to create it. What makes this story different is that there was no human behind this at all. Did you go look at Moltbook while it was operational? I haven’t seen any posts on it lately.

        Neville Hobson: Yes, I did. I was curious about it, so I did take a look. But I had—I had alarm bells ringing in my mind when I did. I did nothing further than just look.

        Shel Holtz: Yeah. Yeah, I mean, for those who haven’t heard of Moltbook, these are the bots that had been released from OpenClaw, which is what it’s called now. I think it’s gone through several name changes for a variety of reasons. It allows you to create and deploy agents, as whoever deployed the agent behind this story did. You would not want to put this on your own computer.

        Neville Hobson: Yeah, it has.

        Shel Holtz: Very, very, very risky. Most people ran out and bought a Mac mini to run OpenClaw. But Moltbook is those agents having their own little Facebook to talk to each other without engaging with humans, and they’re having actual conversations with each other. And it’s weird. Sometimes it’s funny. Sometimes it makes you roll your eyes. But this is the first of its kind, both for OpenClaw and for Moltbook. Imagine where this is going to be in a couple of years, and imagine what kind of damage these things can do with motivations that are not the motivations that drive the people who are causing us grief and making us implement our crisis plans now. So as I say, I think we need to start paying attention to this now, not when there are 20 false narratives out there that have been created by AI and that are spreading like wildfire.

        Neville Hobson: Yeah, I think that’s going to happen no matter what, Shel, I truly believe. And indeed, looking at The Decoder, another aspect of the story they posted about was that whether it was a human or a machine doesn’t matter. It worked. It deceived people. A quarter of the commenters discussing this online believed the agent’s account. I think we also need to just say: but folks, bear in mind, they still don’t know. No one knows whether it really was a bot doing this or a human behind the scenes manipulating it. And I think until it’s clear, don’t have sleepless nights about this. But at the same time, listen to the thinking, and turn over in your own mind how you raise consciousness that you need to prepare for something that’s happening. So the question is, what do you do? That’s the big question.

        Shel Holtz: Yeah, for those who are interested, Shambaugh was interviewed by Kevin Roose and Casey Newton on the New York Times Hard Fork podcast, which is a tech show. So if you’re interested in his perspective: you know, he’s a volunteer, he has a day job, and having to deal with this is not something he had in mind when he accepted the position as a volunteer to review code submitted to this repository. So that’s another factor to consider.

        Neville Hobson: Yeah. I read Scott Shambaugh’s post on his own blog, where he kind of responded to it. The headline was “An AI agent published a hit piece on me”. And it’s long. I mean, it’s detailed. It requires effort to read it all. But it’s quite extraordinary that this prompted him to write such a detailed account, complete with charts and images and a whole ton of stuff. It’s got over 100 comments. And from the mix I saw glancing through, some do believe the bot, but most are sympathetic to him as the subject of this attack. There’s your indicator of what’s likely to happen to others. And this is not some celebrity or someone in the news all the time. This is a developer. And as you said, he’s a volunteer doing this who is subject to this attack. And I think it’s a sign of the times, basically.

        What a story, Shel. So let’s move on to our next story, which is still the AI continuance. We haven’t got to the non-AI stories yet. This one, though, was in the news quite a bit in the past few days, regarding Accenture, the big consulting firm. To put it in context: over the past few months, we’ve talked a lot about AI adoption. This story takes that conversation in a much sharper direction. A number of media outlets—I saw in particular the Financial Times and the Times here in the UK—reported that Accenture had begun monitoring how often some senior employees log into Accenture’s internal AI system, and that “regular adoption” will now be a visible input into promotion to leadership. In other words, if you want to make Managing Director at Accenture, your AI logins now matter.

        This isn’t just encouragement. It’s measurable behavioral enforcement. That’s my take on it. The company says it wants to be the reinvention partner of choice for clients. Its share price is down more than 40% over the past year. And its CEO has previously said staff unable to adapt to the AI age would be “exited”. So this move sits at the intersection of technology, performance management, and commercial pressure. The reaction is telling though: in the Times comments, many readers argue that logins measure activity, not impact. Some describe it as corporate panic. Others question whether this justifies expensive AI investments.

        On LinkedIn, the debate is much more nuanced, but still skeptical. In a post by James Ransom, readers are asking whether counting tool usage measures capability or simply compliance. One commenter put it neatly: “Clients pay for the house we build, not for how many times we touch the saw”. And there’s a deep tension here. Junior staff may adopt AI fastest, but senior leaders are the ones expected to exercise judgment. So what exactly are we rewarding? Experimentation, fluency, governance, or visibility? This isn’t just about Accenture though, it raises a broader question for organizations everywhere. When AI becomes part of performance criteria, are we measuring meaningful transformation or just digital theater? When AI becomes part of the promotion algorithm, are we rewarding genuine leadership capability or are we just counting digital footprints and calling it progress? Your thoughts, Shel.

        Shel Holtz: I have a lot of thoughts on this. I have read a number of items on this. In fact, it was on my list of stories to include. And when you included it, it left me free to pick other stories. But I need more information from Accenture on this. First of all, have they added the use of AI to job descriptions and to promotion criteria? Or did they just issue a memo saying that this is what we’re going to do? If they have made it clear to everybody that this is an expectation of the organization, then I am less troubled by it—not untroubled, but less troubled than if it is not in job descriptions.

        Neville Hobson: So to your point, by the way, according to the Financial Times, they saw a memo—like literally an email about this. So that seems to be how it was communicated.

        Shel Holtz: I’d still want to go into their HRIS and see if their job descriptions have been updated. Obviously, we don’t have access to their HRIS, but I’d be very curious to know if it’s in the job descriptions for those senior people. The next thing is: have people received job-level training? And by job-level training, I mean, have they been trained on how to use AI to do the things that they do in their jobs? Not how to write a good prompt, not how to access these things. Across the board, generic training for every employee is fairly useless when it comes to AI. It needs to be task-level, position-level training. Have they done that?

        If the expectation is that we expect you to log into the AI tools, even though we haven’t provided you with the training on what to do with it once you’ve opened it, that would be troubling, but I don’t know. Generally, organizations are struggling with adoption. It’s getting better. It seems to be getting better organically as employees slowly adopt it—maybe in their personal lives and then see the utility at work. Could be that they find one thing to do with it at work. Maybe somebody else at work told them, “Hey, this is what I did,” and you go, “Wow, I can do that. That would be great”. But it seems to be largely organic, the adoption in the workplace.

        But companies do want their employees using these tools. They’re making tremendous investments in them. And whether this is the approach to take to get employees to adopt—again, I think it depends on whether the training is there and whether this has been woven into systems or if it was just a missive that was sent out to employees as a one-off without communications jumping into the breach to say: Here’s why, here’s where you can go get the training, here are resources that are available, here’s how our leaders are using it. By the way, that’s a big deal in adoption rates: in the organizations where leaders are transparent about how they’re using it, employee adoption tends to really take off because, first of all, leaders are leading by example. Second, employees are getting a taste of what people can do with this. And third, it’s explicit permission to use this for a lot of people who are worried about being seen as cheating or “Gee, do we really need you here if you can do your job with AI?” When you see your leaders doing it, if they can do it, I can do it. So this adoption is important. I’m not sure this is the approach to take, but I would need more information before I could render a final judgment.

        Neville Hobson: Well, yeah, I think I had a memory about this. I’m sure we discussed this in an episode of For Immediate Release last year: that Accenture’s rolled out a corporate AI training program that’s designed to—from what I’m reading here—reskill the entire global employee base of 700,000 employees.

        Shel Holtz: I think we did, yeah. I worry about that. That sounds generic to me.

        Neville Hobson: So they’re training the entire workforce on agentic AI systems, according to this article, after the CEO, Julie Sweet, announced the initiative during a Bloomberg interview. It’s an expansion of the company’s earlier program that prepared half a million staff members for generative AI work. So I think that answers your concern. The detail we don’t have, but whatever it is, they didn’t just send a memo saying, “We’re going to check you out”. This is part of a huge program that will be running for a year at least. Don’t know the details.

        Shel Holtz: Right, but… But it does sound like it’s everybody being trained on the same program. It doesn’t sound like it has been tailored to departments or functions. We don’t know. That’s my point. Yeah.

        Neville Hobson: That’s it, Shel. We don’t know. No, no, we don’t. Well, I think it’s likely that this is well thought through and being well executed. I can’t imagine the company is going to invest serious time and money in training 700,000 employees on something that isn’t very well thought through.

        Shel Holtz: Well, that’s the thing is when I hear that they’re training 700,000 employees, I struggle to see how within that timeframe they have developed discrete training agendas and curricula for different jobs.

        Neville Hobson: Well, it doesn’t say how they’re doing this. Is it all at once or is it phased? Again, I have a feeling from what we discussed last year that it’s a phased program of training. So I would err on the side of: they’ve got a structure in place and they’ve thought this through. This is another phase where I guess—I mean, hey, I’m guessing here—that they’re seeing this, and I see this in some of the anecdotal comments I’ve read online about this, particularly the senior employees are very hesitant to using this. And the younger ones are kind of far more eager to adopt it. And they don’t like that situation. So they’re tying it now to this. Again, I’m guessing here, don’t know the rationale behind it or what the goals are they set. But I would say, personally, we’re going to see more of this in organizations. Now, whether you’re going to get a mix of them that just send a memo saying, “For now, we’re going to check your logins,” or whether it’s going to be part of a major program that’s effectively run out within the organization. But it’s a sign of the times, surely. Like the negative stuff we talked about, then there’s this.

        Shel Holtz: Yeah, and I’m not sure that monitoring logins to AI is an effective way of determining adoption. I mean, if I found out that was a promotion criterion, I would just log in a couple of times a day. I could do something else after I’ve logged in; I don’t have to use it.

        Neville Hobson: No, I’m sure it’s not. Yeah, I would imagine that the writers I’ve seen, even at the FT and other publications, are taking a bit of license here. They don’t know. I don’t believe for a second that they’re going to say, “Well, look, you, Mr. Aspiring Executive Vice President, whatever the job title is, you’ve only logged in 58 times to the AI system. You’re not going to get that promotion now”. I can’t imagine that’s going to be the case.

        Shel Holtz: I wouldn’t put it past a corporation. I would be looking more for outputs. I would be looking for productivity gains. And by the way, there was research recently that showed that the productivity gains from AI are being accompanied by increased anxiety and more work. It’s not reducing the amount of work people do, it’s actually increasing the amount of work people are doing.

        Neville Hobson: No, don’t believe it. Don’t believe it at all. Right. Yeah, I’ve seen those reports. But that’s part of the big picture of the changes happening with regard to AI. There are others too. I think you’ve got a story about that: take-up in companies is not as high as some people are saying. Who do you believe? I mean, it’s not uniform everywhere in the world. But I think it’s part of the direction of travel. All of this is going on and it’s messy. It’s not uniform. Stuff like this gets attention in the business press. I mean, the FT is a well-regarded publication; others have posted about it too. And there’s no consistent story, I have to admit. I’m certain we did talk about this last year. I’ll have to look it up.

        Shel Holtz: AI is having an impact on communications directly. There’s a new report from Implement Consulting Group called “Rewriting Change: Quick Wins, Wider Gaps”. It’s based on their 2026 Change Communication X-ray study and the headline finding should make every communicator sit up straight: the gap between how satisfied top management is with change communication and how satisfied employees are has widened considerably. In 2022, the gap was 13 percentage points. In 2024, it was 22. In 2026, it’s 30 points. That’s the largest gap they’ve ever recorded. While leadership satisfaction keeps rising, employee satisfaction is dropping. That’s the backdrop for AI’s rapid integration into workplace communication.

        According to the report, four out of five respondents use AI weekly for communication tasks, and 43% use it daily. 83% say it helps them generate communications more efficiently and at larger scale. So yeah, the efficiency gains are real. Drafts, summaries, FAQs, translations—all faster, all easier. But the report makes a compelling argument: AI isn’t just helping us write, it’s rewriting the system of communication itself. That’s where things get really interesting. The authors frame the challenge around three themes: accountability, trust, and meaning.

        Let’s start with accountability. AI use is widespread, but largely unsystematic. People are using it for ideation, for language polishing. 66% say they’re using it for ideas, 54% for language improvements, but often without shared guardrails. First drafts become final drafts because they sound right. That’s a pretty dangerous shortcut. One of the experts cited in the report talks about AI shadowing—employees using unapproved tools because they’re familiar and convenient. Speed goes up, governance lags behind. Sensitive data slips into prompts. Biased outputs scale. Official-sounding announcements miss legal nuance. The metaphor they use is a good one: it’s like self-driving cars in the early days. The system works beautifully, until it doesn’t. And when it fails, you better have a human paying attention.

        Next, there’s trust. What surprised me in the data is how comfortable people say they are with AI-generated content. 45% trust AI-generated information as much as human-written content. 61% say it doesn’t matter whether a human or AI created the message as long as it’s useful. But—this is critical—that acceptance evaporates as the stakes rise. If you look at things like performance feedback, terminations, crisis communication, messages from the CEO, those are the top categories employees say should never be heavily AI-generated. And just more than half, 51%, say they feel less personally connected to leaders when they know AI played a major role in creating a message. Only 40% of top and middle managers perceive that drop in connection. There’s that gap again. AI may be acceptable as an assistant, but in consequential moments, people want to know who’s driving.

        And finally, there’s meaning. This is where I think the report hits closest to home for us communication professionals. AI increases volume and speed. It multiplies words, but it doesn’t automatically create understanding. In fact, 87% of respondents report that major changes were poorly communicated. Employees describe change communication as one-way, too distant, impersonal, and not well-timed. Nearly one in five can’t connect corporate communication to their actual work. This is a relevance problem. One of the experts in the report makes the point that communicators’ roles are shifting from content creators to sense-makers. Now that resonates with what we’ve been discussing on this show for years.

        The value isn’t in producing more polished messages, it’s in curating, contextualizing, and helping people answer the question: So what does this mean for me? Now, the short-term gains from AI are undeniable, but the long-term risk isn’t that AI will take over communication; it’s that we’ll lose connection—that leadership will feel more confident while employees feel less understood. The report ends with a provocative question: In a future shaped by AI, what do we wish we could say one day about change communication that we can’t say today? For me, the answer is that we used AI to amplify clarity and humanity. AI can prepare the ground and accelerate the drafting. It can help with structure and scale. But trust, accountability, and meaning? Those still require a human being who’s willing to stand behind the words. And if we don’t pay attention to that widening gap, we may discover that while our messages are moving faster than ever, they’re landing with less impact than ever before.

        Neville Hobson: Yeah, you’re right. This does reflect what we’ve been discussing for some time. So what I take from this is the humans are the issue, not the tech, not the tools.

        Shel Holtz: Yeah, absolutely. As with any tool, you can misuse a tool.

        Neville Hobson: Yeah, it’s interesting. Surely the path’s clear these days, is it not? I keep seeing people talking about this in a broader sense—not the specifics of this report—but humans need to step up to the plate and recognize their value as the ones who can explain the whole damn thing. So you will use an AI tool to do your research that leads you to create a report, for instance. And you then need to help others understand the situation; all those points you enumerated need explaining. And if people are saying that change communication, for instance, as you mentioned, is poorly done, well, that’s down to the communicators, I would say—whoever wrote the report and then sent it out and executed on it. Did they train? Did they have a plan in place for how to do this? So I’m surprised that a topic talked about so much is still being discussed as if it’s a new thing you need to pay attention to. We’ve talked about it for a long time, and not just us; communicators generally have been discussing this for quite a while. So there’s something missing if we’re still trying to set out the simplistic 101 approach to how you do this. That’s what surprises me.

        Shel Holtz: Yeah, I think this rests in strategic planning, to be honest. If you develop a strategic plan for a change that the organization is making, it starts with the goal. What do you want? What does it look like if you’ve succeeded and proceeds through strategies and objectives and tactics? And you measure. So where we are today, based on this report, is that a lot of people are seeing these highly polished outputs from AI and going, “Wow, that’s really good. Let’s just send this.” And we’re throwing the strategic plan in the trash. And we’re not looking to measure how well employees understand it. We’re not looking to see if employees are able to connect it to their day-to-day work.

        The fact is that AI writing is getting very, very good. All the people who say, “I can always tell when it was written by AI,” I still maintain that’s a bad prompt. But these days, even a bad prompt can produce some pretty polished output. And if we look at that and succumb to the allure of this gloss that we get from the AI output without looking at what it really takes to develop that trust and meaning and accountability that employees recognize so that they understand what this change means to them—what’s expected of me, what’s in it for me, what changes around here—then it’s a disservice. And I think we do have to determine where we gain advantages from using AI, as you mentioned earlier, from the research, certainly. But we also have to look at where the AI does not do well and—yeah, trust, accountability, it still doesn’t do well. And if we want employees or frankly, other stakeholders to respond to the messages that we are sending and to engage in a two-way communication, relying entirely on those polished outputs and saying, “Wow, that was a great job. We’ll send that out, communication done”—that’s a problem.

        Neville Hobson: It is a problem. It’s a severe problem. And my message would be: do not be like Deloitte and do something like that. We reported on that last year. Deloitte, the big four accounting and consulting firm, had contracts with the governments of Canada and Australia for research reporting—six-figure fees involved. And they sent the reports to their clients in Australia and Canada. And a researcher found that they were riddled with hallucinations, as they’re now termed. Not only that, there were obvious errors: URLs not working properly, 404 errors everywhere—no one checked it. I’m thinking of what you just said: “Oh, this is great, the output, let’s send it to the client and get the bill,” 200 grand or whatever it might be.

        It amazes me not only that people think that’s a good way of doing this, but that there are no checks and balances in an organization—no milestones in place—to prevent that kind of error. The reputational damage, I would argue, was seriously bad for Deloitte, although maybe people read it, go “tut tut”, and move on, and no one really cares at the end of the day. That’s a bit of a cynical view, of course. But I think it illustrates something we’ve talked about and will continue talking about: the elements AI can’t do, related to things like trust, reputation, deeper understanding—that’s what humans do. The AI is really good at the research, assembling all the facts, summarizing lengthy documents, zeroing in on the main issues, and making recommendations. That’s what it’s good at. That doesn’t mean you say, “Hey, I’ve got this report from ChatGPT or this bespoke tool we use that’s 65 pages long. This is great. Just what the client needs, and we’ll send it.” That’s absolutely stupid, frankly.

        Shel Holtz: I have a custom GPT. It took me about five hours to build this—I’ve mentioned it before. It’s a senior communications consultant. I don’t have the budget for a human one, so I created one. And I had a need to develop a strategic plan in short order. And with limited time and resources, I had a first draft produced by my custom GPT senior communication consultant. And it did a very good job. I mean, it needed more work from me, but it did a passable job of developing a good strategic communication plan. But what struck me as I was reviewing and revising the plan was it created a plan that it could not execute entirely itself, or any AI system could not execute this plan. It required humans. It’s almost like it recognized that for a communication plan to be strategic, people needed to be involved.

        At the beginning of this report, I mentioned that the consulting firm that did the report said that we need to move from content creators to sense-makers, meaning-makers. And I think that’s exactly right. And when we use AI to generate content, it’s more than just verification. I mean, we have advocated on this show for hiring content verifiers, AI verifiers in companies. And I stand by that. I think that’s important. But this goes beyond that. It’s not just verifying that the LLM didn’t hallucinate or correcting it when it did. It’s not just verifying that the URLs all work or finding the right ones if they don’t. It is asking the question: Will employees make meaning out of this that is relevant to them in their jobs? And if not, what do I need to do to make sure that they can? And I don’t know how many communicators are doing that right now because the allure of the AI creating this polished output is—you know.

        Neville Hobson: Yeah, I agree with you. Well, I personally think, frankly, Shel, that cases like Deloitte’s are edge cases—this is not the norm. I don’t know of others at that scale, and I do pay attention to this, where mistakes like that have been made. I also believe that most responsible communicators are becoming more experienced in recognizing the benefits of using an AI tool alongside them in their daily work. So it’s not “Let me just get the chatbot to summarize this document once or twice a week”, something like that. No. Every single day, you are making use of either your corporate tool that’s been created in your organization or a professional license on ChatGPT or Gemini or Claude or whatever it might be, as an assistant to you.

        There are plenty of publications out there that will guide you on how to do this. The best one that comes to mind is Ethan Mollick’s book from 2023, which is really very helpful in recognizing that reality. And you will benefit from understanding how that works. That means you are less likely to just think, “Hey, great output,” and off you go. You will know: yes, okay, I’ve done the verification; I’ve checked all those links; I now need to go further into this and look at it from a “Will they understand this?” perspective. And you ask questions back of the AI system. I do that maybe two or three times a week, actually: I will use it to create something or summarize a report, and I will then go back with a bunch of questions: “When you said this, what did you mean by that? Have you got a source to cite for what led you to think that?”

        And I find that exceptionally useful—this is my perception, of course—in strengthening my confidence that the AI isn’t a raving loony that’s going to hallucinate and tell lies all the time, although I realize they do that sometimes. And you’ve got to remember it’s not a person you’re talking to. This is nothing other than a bit of software on a server somewhere that pattern-matches things. Let’s not get into that conversation, because I find it very distracting. The important thing to think about: communicators who recognize that are benefiting; those who don’t are suffering. That, in my opinion, puts you in a strong place. Communicators who know about all this stuff can focus on helping educate other communicators on how to do this properly. That seems to me a simple way to make progress. Like I said, there are books, publications, newsletters, articles—you name it—telling you about all of this.

        Now, where do you go to find all these? Are you on your own, totally, to wade through God knows what online? No, there are places to help with that. I’ve got something in mind, which I’ll talk about another time, that will help with that. And I think we are at a stage—notwithstanding the agentic AI that slags off a developer in public where you don’t know whether it’s true, and more of that is likely—where AI tools like these are developing in ways that go way beyond prompt engineering, as the phrase used to be. You don’t need a great level of detail in many prompts—not saying all—because the general rule applies: it depends on what you’re doing, and more detail can actually be beneficial to the output you’ll get from the chatbot. But the simple, plain-English conversation, which I use a lot, is usually good enough. It’s a bit like that 80% rule: 80% is good enough, and we can live with that, depending totally on what it is you want and what you’re doing. So we’re at the stage where there is so much to see and read online about this that it’s hard to know where on earth you would start. And that’s a key thing we need to help other communicators understand: how do you start? We have solutions to help you do that.

        Thanks a lot, Dan. That was a really comprehensive report. You packed a lot into it. I’ve got a couple of things I wanted to mention. It’s really interesting what you said about Bluesky and commenting, and indeed the clamor for an edit button. Boy, does that remind us of Twitter back in the day? People want an edit button. But you mentioned some of the technicalities of why that’s a major issue with the protocol, why it’s problematic from a technical point of view. And I get that this is technical. But my question is this: how has Threads managed to do it without any problems at all? Because Threads also runs on a protocol, let’s say, comparable to Bluesky’s, that enables you to share stuff to the Fediverse, but you can edit a post on Threads. I think you’ve got 15 minutes before that expires and you can’t do it anymore. And I do that quite a bit. I’m forever—you know, for instance, when I share posts about the next For Immediate Release episode, I usually forget to either include the URL or even add your handle to the post, so there’s a quick “damn,” and I go back in and correct it. So I find that quite useful. So how come they’re doing it without any issues, or have there been issues I just don’t know about? That’s my question on that one.
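        [Editor’s note: for listeners curious about the technical point behind Dan’s answer, here is a minimal, illustrative sketch—not Bluesky’s actual implementation. It assumes only the general AT Protocol idea that records are referenced by a content identifier (CID) derived from the record’s bytes, so replies and likes pin an exact version of a post. The `cid` helper and the record shapes below are invented for the example.]

```python
import hashlib
import json

def cid(record: dict) -> str:
    """Toy stand-in for a content identifier: a hash of the record's bytes.
    Real AT Protocol CIDs are multihashes over DAG-CBOR encoding, but the
    principle is the same: the identifier is derived from the content itself."""
    canonical = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:16]

# A post, and a reply that pins the exact version of the post it responds to.
post = {"text": "Original post"}
post_cid = cid(post)
reply = {"text": "Great point!", "reply_to_cid": post_cid}

# "Editing" the post produces different bytes, hence a different identifier,
# so the reply's stored reference no longer matches the edited record.
edited = {"text": "Edited post"}
assert cid(edited) != reply["reply_to_cid"]
```

        In a content-addressed design like this, an “edit” necessarily produces a new identifier, which is why an edit feature has to decide what happens to everything that referenced the old version—the heart of the difficulty Dan described.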

        The other one is really interesting: WordPress. I’ve been following that too. I don’t use WordPress actively anymore—not for over a year now—although I still maintain my archive, so I’m in the back end quite a bit, updating stuff and so forth. But it’s interesting what you said. I read—I think it was in TechCrunch recently—that the hosted WordPress, that’s WordPress.com, has just launched an AI assistant that lets you literally build your site with voice prompts and drag and drop across the screen, asking the AI assistant to complete the task. Now that to me seems a huge step forward. I wish that would come to Ghost, which is where I am now. But it’s surely an evolutionary step that is definitely going to come. I’m curious what you think about that, Dan. But the overall picture on WordPress is pretty interesting. So thanks for including that.

        So next story—this is the first of our non-AI stories, so take a breath, a breather from AI for a bit. Back in January, in For Immediate Release 496, one of our midweek episodes, we talked about the PRCA, that’s the Public Relations and Communications Association, and their move to redefine public relations. The organization proposed a new definition that positions PR as a strategic management discipline.

        Shel Holtz: First of two.

        Neville Hobson: Concerned with trust, legitimacy, volatility, and long-term value creation. It’s ambitious. It’s modern. It clearly aims to elevate the profession. But since then, the reaction’s been rather muted, from what I can see. There hasn’t been a groundswell of endorsement across the wider communication landscape. Okay, so they published this specifically asking PRCA members to comment on it. So if you weren’t a member, you couldn’t access the part of the website where you could leave comments. On LinkedIn, various posts—much of the commentary feels polite, even respectful, but not energized.

        So let’s hear the PRCA’s new definition. And this is the portable one, I suppose you’d call it: “Public relations is the strategic management discipline that builds trust, enhances reputation, and helps leaders interpret complexity and manage volatility.”

        Shel Holtz: The executive summary.

        Neville Hobson: “Delivering measurable outcomes, including stakeholder confidence, long-term value creation, and commercial growth.” Now, I’ve had some anecdotal comments I’ve seen—it’s like, “Wow, that’s a mouthful.” Interesting. But I had to take a breath in that one single sentence, by the way, to complete it. So I read a really interesting post by Helen Dunne in Corporate Affairs Unpacked, where she says she showed the definition to several senior communicators. Their reactions ranged from “word salad” to “corporate buzzwords” to the rather weary “I’m too old for this.” I like that one.

        Her bigger concern though isn’t the language; it’s representation. She argues that the definition doesn’t reflect the broader industry. The PRCA represents agencies. Many of those agencies are focused on branding, marketing, media relations, creative services. Only a small proportion of practitioners would describe their work as helping leaders interpret complexity at the strategic management level.

        Shel Holtz: Ha.

        Neville Hobson: Helen cites PRCA’s own state-of-the-sector data, which says 15% are in branding and marketing, 13% in communication strategy, 12% in corporate PR, and only 3% in reputation management. So that data undercuts the elevated framing, she says. So is the PRCA describing what PR is or what it wishes it to be? In my own post on this, which I did last week, I argued that the idea of redefining PR is worthy. But unless the CIPR, PRSA, IPRA, IABC, and others move in the same direction, we simply add another definition to a growing list, which raises a deeper question: Are we trying to clarify the profession or to rebrand it? If every major industry association defines public relations differently—and they do, frankly, even though some look similar—is the real issue the wording or the fact that we’ve never agreed what business we’re actually in?

        Shel Holtz: After we reported on this, I was thinking that if anybody is going to succeed in pushing a new definition of PR that is widely adopted, it would be the Global Alliance. Because if the Global Alliance pushes it, all of their member associations, like PRSA and IABC and all the rest, are more likely to adopt it, or at least be aware of it. I don’t know what kind of influence PRCA has to push this, but if you open any public relations textbook, you’re going to find that author’s or those authors’ definitions of PR. You’re going to find a different definition in every PR association.

        The one thing that troubles me about PRCA’s definition is that it says nothing about the relations that we have with stakeholder groups. And it’s right there in the name of the profession. Public relations is about managing the relationships, the relations, between an organization and its stakeholders. And that’s absent from the definition. In fact, I wouldn’t know from the definition that it had anything to do with all those stakeholders and the way the organization interacts with them. That said, I find the reactions that you have collected to be interesting, notably for their lack of enthusiasm and excitement. I certainly credit PRCA for undertaking this. I think it is a worthwhile discussion to have, but it really doesn’t seem like it’s going anywhere, does it?

        Neville Hobson: Well, it’s interesting. I mean, you mentioned the Global Alliance. I wrote about that in my post last week—that they’re well-placed to, let’s say, convene all the major associations, if such a thing were even possible, to arrive at a single, concise definition supported by shared principles—that part of their stated mission is to unify the public relations profession. So wouldn’t that be a good place to start? It wouldn’t be easy; consensus-building rarely is, I said in my post, really. But if unification is the goal, agreeing how we define ourselves would seem a logical place to start.

        I think the PRCA, like you said, Shel, has made a really good move in addressing the topic. The current definition stems from 30 years ago—tweaked in the intervening years—when it was all about press releases and media relations and things like that. This effort from PRCA brings it up to date: a much more contemporary definition that is more in tune with what communicators do. Yet, like you said, there’s been little enthusiasm for it. In fact, it reminds me of a post I saw on LinkedIn recently, I can’t remember whose it was, where someone had done a word cloud of descriptions from, I guess, a dozen PR firms of what they say they do. Lots of words in there; “public relations” isn’t mentioned at all.

        So are we at the point where we don’t know what business we’re actually in? Should that discussion be broadened out more widely? I don’t think PRCA is the organization to do that. Something like the Global Alliance is much better placed, I believe. Now, I’ve not seen them commenting on this. I’ve not actually seen any of the acronym soup I put in my post—CIPR, PRSA, IPRA, IABC—commenting on this at all. That says a lot, I think—that no one is commenting about it. And the comments I have seen, as you mentioned, don’t really express much enthusiasm. Jerry Corbett, a good friend of ours who used to be, I think, the president of PRSA in America…

        Shel Holtz: He was.

        Neville Hobson: …did comment, and he says this is way too long and still needs to be simplified. It needs to talk about relations, like you just mentioned. The last time this topic was addressed in a meaningful way that embraced other associations and gained a lot of traction—even if nothing ultimately happened—was in 2011, 2012, when the PRSA proposed a new definition. They offered it to everyone, saying, “What do you think of this?” It wasn’t just the members of PRSA, which I think was the smarter move, frankly. A lot of debate happened. Others, like the Arthur Page Society, were involved in commenting on this as well. So it was widely inclusive. But ultimately nothing happened. There wasn’t enthusiasm—a lot of opinion, but it ultimately didn’t go anywhere.

        So here we are 15, 16 years later. Now it’s coming up again. The cynical view—and I’ve seen some people commenting on this—is that about every decade, the industry goes through all this: “We need to redefine the definition,” and nothing happens. That’s a bit of a cynical view. Will this be different? Well, PRCA has done a good job in taking a very first step that has generated some response, even without much enthusiasm. Can it go anywhere? I guess we will see in time.

        Shel Holtz: We will see, but I have to say I am skeptical. Even if they adopt it, I don’t see it being widely embraced by the entire public relations and communications community. I think part of the problem is that it’s still hard to define public relations as a profession when anybody—as I have said 50,000 times on this show and elsewhere—can hang out a shingle and say, “I am a public relations practitioner,” while abiding by none of the principles, none of the best practices, and none of the models. They engage in unethical behavior just to get to the final result a client is interested in. Until we can coalesce around the idea of being a profession with a shared set of principles, a shared set of values, and a shared set of frameworks and, you know, behave like a profession… Think about accounting. Think about law. Think about medicine. Think about engineering. These are professions where there are certain assumptions that wherever you are in the world and whatever level you’re at—whether you’re with a consulting firm or a corporation or you’re an independent consultant—you all agree to these things.

        The communication/public relations industry is nowhere near that. I know the Global Communication Certification Council aims to change that, but that’s a long way off. It’s still in the process of separating from IABC; the idea being that other associations are not going to adopt an IABC certification, but if it’s an independent certification, they certainly might.


        Shel Holtz: But the more people who seek and obtain certification, regardless of the association they belong to, the more likely the profession will be to coalesce around those guiding principles. So that’s my wild dream, but we’re nowhere near that right now. And even as I say, if PRCA settles on this definition, I don’t see it being widely adopted elsewhere.

        Neville Hobson: No, if it’s just the members settling on it, then I’d say it’ll just be another one among many. If you Google “define PR,” as I did a number of times—typing on a machine where I’m not logged in, so it’s a clean search—it pulls up at least a dozen different definitions. Indeed, all the professional bodies say something slightly different. So this will just be another one. It may get picked up by some, but I can see greater confusion. You start using this, and someone else who reads your stuff or is involved with you in some way Googles “define PR” and gets something entirely different. So which is it then? You’re saying it’s this and these guys are saying it’s that—so it doesn’t help.

        Shel Holtz: Well, collect every definition from every association and from every textbook and from every agency and feed them all to Claude or ChatGPT and say, “Create a single definition that accounts for everything that you see here.” See what it comes up with.

        Neville Hobson: Well, you could do the whole thing end to end. The AI system does the whole thing, does the research, and then—that could be a good start.

        Shel Holtz: Of course, you would use the AI to do the research too. Good exercise. Well, here’s the headline from a Substack post Paul Ferbredi published recently: “I bet you couldn’t show the ROI of your corporate podcast if your job depended on it.” That line isn’t just provocative; it highlights a real challenge many of us in organizational communication face as audio content increasingly becomes part of the mix. Ferbredi’s key point—echoed in the comments that were left on his piece—is that too many corporate podcasts are, frankly, vanity projects. People launch them because everyone’s doing a podcast or because executives think their voice should be heard. But they’re not always clear about what the podcast is supposed to achieve. Back to that whole idea of strategic planning. And if you don’t define success clearly, then yeah, proving ROI is nearly impossible.

        So let’s unpack that a bit. One of the problems is that we often measure the wrong things. We fall back on downloads, subscriber counts, chart rankings—all output metrics that tell you how many people pressed play, but almost nothing about what that listening meant for the business. That’s why critics like Paul call ROI “unshowable,” because too often we’re not measuring in ways that link back to business outcomes. But here’s the nuance: it is possible to measure ROI if you define it differently at the beginning and tie it to concrete goals. According to frameworks in the B2B podcast space, traditional vanity metrics like downloads or rankings simply don’t cut it, especially in the B2B world. What matters is whether episodes generate pipeline influence, lead opportunities, and business impact that your CFO can understand. That means integrating your podcast data into your customer relationship management and tracking things like listener engagement that turns into demo requests or sales conversations.

        Put differently, ROI for a branded or corporate podcast isn’t just a ratio of dollars spent versus dollars earned in direct revenue. Some of the most valuable returns are indirect. And I would argue that means we need a different label than ROI, which is the ratio of dollars spent to dollars earned. Brand awareness, trust, thought leadership, deeper audience relationships—these are the kinds of outcomes that support recruitment, retention, stakeholder alignment, even executive visibility. Agencies and analytics platforms remind us that these outcomes are real. They just aren’t easily captured by simple metrics, and certainly not as ROI.

        Experts also point to sophisticated ways of measuring impact—things like brand lift studies, pixel attribution, long-term tracking of customer behavior. These techniques compare people exposed to the podcast with a control group or follow listeners through the customer journey to see if they visit your website and engage further or convert into customers. That gives you measurable evidence that listening isn’t just passive noise; it’s influencing the business. And importantly, not all podcasts are trying to directly generate sales. Some are designed to build relationships with potential customers, with internal audiences, with partners. If your podcast goal is to deepen customer trust or make your brand more visible in your ecosystem, then your ROI framework has to reflect that. Clear goal-setting upfront before the microphone is ever turned on is what’s most important.

        So what do we take away from Paul’s challenge? First, he’s right that many corporate podcasts fail ROI tests, but mostly because they aren’t giving themselves a fighting chance to succeed. ROI isn’t inherent to a podcast; it’s a function of how you define your goals, how you measure your outcomes, and how you connect the dots between listening and real-world results. When we treat podcasts as strategic channels with measurable outcomes—not just vanity projects—we not only can show ROI, we can use the ROI to make better decisions. To summarize this: podcasts can have measurable ROI, but only when we stop obsessing over downloads and start thinking in terms of business impact.

        Neville Hobson: Yeah, you’re absolutely right in that conclusion. It’s a really good piece Paul wrote, I think. Even though I have to say his rationale is comparing audio with text—isn’t text better than audio?—set that aside, because the strength of his analysis is really well done. My experience in B2B podcasting, which I’ve done for a client for some time, rings bells here, because it is all about the goals. Yet the obsession has always been—from way back; it’s probably diminished quite a bit now—“How many downloads do we get? What does Apple Podcasts say?” And then you go down rabbit holes in the analytics reports about which sources delivered the clicks to your podcast site—serious eye-glazing territory unless you’re the techie who needs to know that kind of stuff.

        I think the goals are key, absolutely key. And you made a very good point that it’s not always just about ROI, meaning money, the return on the investment. How many leads does it generate that lead to sales, perhaps? Having a podcast that is a lead generator—that’s great. There’s a goal when you say, “We want this episode to deliver us 16 inquiries about a widget that we’re selling,” in which case the whole chain of that has got to be well thought through. It’s not good enough just to stick your podcast up there and have a link on a podcast page on your website. When listeners click through to your site and get to the landing page—what happens? How do you track that? Enterprise firms particularly have access to really effective tools that map and track the end-to-end journey of visits to the site: where they came from, who they are—particularly if they’ve identified themselves or they’re existing customers. So all of that has got to be part of your structure.

        I had a conversation with someone about six months ago about starting a business podcast, and I’m getting déjà vu reflecting on part of that conversation, where the goal only emerged as a by-the-way at the very end. And I remember thinking at the time that a podcast was not what they should be using to achieve that goal. So you’ve got to have the right goal. Yet I also recognize that with vanity projects there’s not much you can do, I suppose: if the person you’re talking to is convinced he or she wants to do this no matter what, that’s a vanity project. The question I would ask as a communicator is: Do you want to get involved in something like that, no matter what the theme might be? Podcasting is in a different place than it was even five years ago, I would argue, in that most people I talk to now think of video first, not audio. And we do a video of our audio conversation. We don’t do much with the video; I stick it up there on YouTube. So if you want to look at two talking heads on screen, you can.

        Shel Holtz: Well, yeah, the video gets recorded whether we want it to or not. So we might as well use it for people who prefer to get it that way.

        Neville Hobson: Right. We might as well use it. Exactly. You can see our facial expressions. When I go like that, you can see that. But I think Paul’s post is worth reading. The thought to keep in mind if you’re thinking about a podcast is: start with the goal first. Don’t think about how many downloads you’ll get and how you’re going to be like Joe Rogan. I often think those comparisons—when people say, “Joe Rogan’s podcast got 65 million downloads”—are about stuff that’s completely irrelevant to what you’re likely to achieve with a B2B podcast. And in any case, you’d better have a big budget.

        Shel Holtz: Yeah, and there are goals that you can assign to a podcast that have nothing to do with ROI, nothing at all. It could be that you are trying to change the perception of your organization: “We’re not a stodgy organization. We have that reputation. We need to change it. Let’s get a fun, loose podcast out there so that it starts to move the needle in the other direction—that this would be a fun place maybe to come work”. There are podcasts that are aimed at attracting new recruits to the organization.

        Neville Hobson: Right, we mentioned that, yeah.

        Shel Holtz: There are podcasts that are aimed at promoting thought leadership. And of course you need to know what your goal for thought leadership is, but none of these are going to be directly tied to new revenue. That would be really, really hard to do.

        Neville Hobson: You tie it to other goals that you could measure. So you’ve got to have that. Yeah.

        Shel Holtz: Exactly. And you can measure that as long as you know what it is at the point where you start. You mentioned that Paul did make the point: isn’t text better? When we started this podcast, when there were about 400 podcasts, most podcasts talked about podcasting. That was the theme. Every podcast was, “Let’s talk about podcasting.” And there was a lot of conversation back then about why audio is better. There were some critics. I remember one person said, “I can read five articles in the time it takes to listen to one podcast.” But my answer was, “Yeah, but I can’t read any articles when I’m driving my car.” I can, however, listen to a podcast. For me, audio—and this is not true of video, by the way—is the only form of media available to us that people can pay attention to while they’re doing something else, whether it’s folding laundry or working out or walking the dog or driving somewhere or mowing the lawn. Whatever it might be, you can listen and absorb information. You can’t read; you can’t watch a video.

        God help me if I ever see anybody driving and watching a video at the same time. Actually, I did see that once: somebody had their phone mounted in the car with a video playing. It wasn’t the road ahead of him or behind him; it was a TV show or something. And I went, “My God.” I mean, that’s worse than being on your phone. But I continue to maintain that the value of audio is the ability to listen while you’re doing something else. And there have also been studies about the emotion of hearing somebody’s voice—that you’re able to connect with it much more quickly than with reading a quote. Where this is leading me is that if you are going down the road of producing a podcast, know why that format is of value to you. Why is that the approach to take in terms of the goal that you’re trying to achieve? Is that emotional connection important? Are you trying to reach an audience that has limited time and may listen to your show while doing something else?

        Finally, a podcast could be part of a larger campaign. It can be just one element. It could be the audio version of something produced for people who aren’t going to partake of another element of the campaign. I am wrapping up work on a book; the proposal is almost ready to go, there is an agent waiting to look at it, and it’s probably going to get published. When it’s published, the proposal calls for a Substack-like newsletter to go along with it and a new podcast on internal communications that I am going to be launching with Steve Crescenzo right here on the For Immediate Release Podcast Network. It’s just one element, but the main piece is the book, right? It’s not the podcast. The podcast is supporting.

        And one more thing: you talk about podcasting being in a very different place today than it was five years ago. One of the things that defines that is the fact that you now see news made based on what somebody says on a podcast. It’s no longer only what they said on an interview show or in a speech; in addition to that, now it’s “on a podcast where he was interviewed, this politician said this” or “this business leader said that.” So that might be another reason to podcast: as a way to get quotes out there that might get picked up elsewhere and make news. I think shoehorning podcasting into this one ROI bucket is a mistake. And yet Paul is absolutely right in his bottom-line conclusion, which is that you’d better know what it is you’re trying to achieve with this before you push that record button.

        Neville Hobson: Yeah, that’s the bottom line. Absolutely right. So goal-setting is key. Start with that, not with how many downloads you expect or whether you can rival Joe Rogan or whatever it might be. Good stuff, that, I have to say. Okay, so our final story today—we’re back to the AI topic. Question: Are chatbots the new influencers?

        Shel Holtz: Everything that goes around comes around.

        Neville Hobson: For the past two decades, digital marketing has largely been about visibility. First it was banner ads, then search, then social, then influencer marketing. Each wave brought new tools, new behaviors, and new anxieties. Now, according to a recent New York Times piece, we’ve entered another phase: chatbots are the new influencers and brands have to woo the robots. The article describes how companies are discovering that when customers ask ChatGPT, Gemini, or Claude, for example, about a product or provider, the answer that comes back may not reflect what the company believes about itself. In one example, a healthcare software firm asked chatbots about its own offerings and found outdated, incomplete, and sometimes misleading information being surfaced. That moment triggered a realization: if AI models are shaping how people consume information, then influencing those models becomes part of marketing strategy.

        This has been framed as the next evolution of SEO, says the New York Times. Except now it has a new acronym: AEO (Answer Engine Optimization) or GEO (Generative Engine Optimization)—a topic we discussed last September in a For Immediate Release interview with Stephanie Grober at the Horowitz Agency in New York. Great conversation that was. Instead of trying to rank on page one of Google, brands are now trying to influence how large language models synthesize and present information in response to prompts. That changes the game. Chatbots don’t care about vibe, emotional resonance, or brand storytelling. They prioritize clarity, structured detail, and volume. Some brands are flooding the zone with highly targeted content. Others are obsessively auditing Reddit because Reddit turns out to be one of the most cited sources in AI-generated answers. In effect, the brand is no longer competing only for human attention; it is competing for algorithmic interpretation.

        That’s actually well said there, Shel. We talked about this very topic at least twice in the last six months of last year. Not only humans; you’ve got to look at the bots as well. I think that introduces a deeper shift. Historically, search engines pointed users to sources. Chatbots increasingly summarize, recommend, and decide what is worth mentioning. The intermediary is no longer neutral. It synthesizes, which means the battleground for reputation is moving upstream from persuasion to data conditioning.

        But here’s the counterpoint. We’ve been here before and we’ve discussed it in this podcast before. Every major digital shift has been framed as existential. SEO was supposed to change everything, then social algorithms, then influencer marketing. Each time an optimization industry sprang up, each time brands flooded the zone with content, and each time the platforms evolved in response. So the question is: Is this genuinely a structural shift in how reputation is constructed, or simply the next optimization cycle dressed up as revolution? Because there’s a real risk here. If brands begin producing vast volumes of content purely to influence AI outputs, do we elevate substance or do we accelerate a new kind of synthetic noise? Could be all that AI slop we’ve been hearing about a lot recently. And if Reddit posts and forum threads are disproportionately shaping chatbot answers, are we witnessing democratization of influence or amplification of unverified commentary? So are chatbots truly the new influencers we must court, or are we watching the early stages of another marketing arms race that may look very different once the models mature? What do you think, Shel?

        Shel Holtz: It’s a fraught topic. I mean, first of all, as organizations trip over themselves to figure out how to appear in AI query responses and appear the way they want to, is that going to taint AI responses to the point that they’re no better than a Google search response? I mean, you remember the original Google where you typed in a query and you got 10 items that were directly related to what you were interested in. And now you have to wade through the ads and the other crap that populates the Google search results before you get to anything that’s even remotely relevant.

        Neville Hobson: Yeah. Slop is the word, not crap, slop.

        Shel Holtz: Yeah. Okay, yeah. But you have some other issues here. We hear that Reddit figures prominently in the results. And then you hear from somebody else: no, no, no, it’s earned media that prompts what gets injected into the responses to queries in the large language models. I just saw—this was just published on February 18th—a study that found 44% of ChatGPT citations come from the first third of whatever content it found. So, you know, do you top-load your content with the main information that you want the AI models to grasp, even if that’s not necessarily the way you want people to read the content that you’re producing?

        And each model does something different. The fact that ChatGPT citations come from the first third of content doesn’t mean that Claude’s do or Gemini’s or Grok’s. And then every time they release a new model, has it changed? So I think we could be chasing our tails with this kind of information. Are chatbots the new influencer? Well, they’re a new influencer. Certainly people are getting information from these—I do. I say, “This product isn’t working for me. What are the alternatives?” And it tells me, and I’m sure it’s leaving out good products that just haven’t got their information into the places where it’s going to be absorbed by an AI being trained on this content or searching.

        So, you know, I think we just need to produce good content that answers questions. We talked about this a couple of months ago. When you look at the tools being implemented in the enterprise, employees are no longer reading the articles we produce that say, “Here’s the justification and the context and the background for the change the organization’s going through.” They type a query and they get a reply. Where’s that reply coming from? It’s not coming from the context we provided unless we top-load, front-load the content with that answer in order to accommodate the chatbot. Is that what we want? This is probably a time to be rethinking the way we communicate altogether because of this situation. But I think creating good content that does a good job of answering questions, that puts the main information at the top…


        Shel Holtz: I mean, you know, somebody ought to invent an inverted pyramid style of writing that starts with the who, what, when, where, and why before you get to the detail. Just do good content and you’ll be fine.

        Neville Hobson: That’s a good tip, I think. To me, it just seems like everything is so manipulated. I was thinking this the other day about something I was searching for online, and I looked at what Google produced. Because Google, by the way, really has improved hugely in the last six months in terms of what it actually offers you when you enter a search term. The AI generates a summary of the top results, with citations included that you can click on if you want. My experience is that I often find that summary is good enough for what I need. I might scroll down to see who else is saying what. And then you’ve got little drop-downs of other responses to that search term. Great. It usually gives me what I want.

        But basically, when I see stuff like this, I’m thinking: the manipulation is huge. Would it not be simpler if we just ditched all this stuff? No, that’s not the answer. The world’s moved on. We have to live with this. But it makes it difficult to trust anything the way you used to be able to. So do I trust this answer because Google is giving it to me, which implies I trust Google? Or because it looks about right, it’s what I’m looking for, so I trust the source of that answer? I don’t know. You have to make your own judgment call on this, because if you’re using another search engine, it’s going to be very different.

        Shel Holtz: Yeah.

        Neville Hobson: If you use your chatbot—and that’s actually quite interesting because whether it’s ChatGPT, whether it’s Claude, whatever it might be, using your chatbot, not a search engine—how do you feel about that? Do you implicitly trust the chatbot and what it’s telling you? Would it be different than what Google would tell you if you did a Google search? Probably yes. Not in terms of meaning, but the words are going to be different, obviously, and maybe the sources will be different. So if you need to do that, fine. I don’t think you do typically need to do that. You just go to Google or whatever it might be that you’re accustomed to, that you trust, search and get your answer.

        But you’ve now really got to be alert—and particularly in light of the story we talked about earlier, about the developer who was stitched up by an AI agent, that kind of reputation-damaging content might show up in search results too. So this is the landscape we’re in now. You have to get used to it.

        Shel Holtz: I still find the top 10 blue links on the first search engine results page from Google are far less valuable than they used to be. I still find that the first three or four are paid and irrelevant or… I see it all the time.

        Neville Hobson: I don’t see that. I don’t see that at all. I don’t see paid results at the top; I see them a little further down. Yeah, okay, interesting. Maybe it’s different here. I’m doing google.co.uk, not google.com. So maybe there’s a difference. Yeah.

        Shel Holtz: I definitely do. Are you logged in to Google when you’re doing this? Okay. Listeners, what are you seeing on Google? I have been using Perplexity more and more because I’m able to refine my search, saying, “I’m looking for this, not this, and I need it from articles that have been published in the last six months.” And it does an excellent job of providing me with great results. Now, I haven’t compared it to what Google would give me, but I have to believe it’s more relevant because it is trying to satisfy me rather than the advertisers who have paid to have their links promoted on Google.

        Neville Hobson: Yeah, typically. Okay. It’s funny. I’ve just done a search on Google right now, and there’s not a single sponsored link in my list at all. Not one. I do see them occasionally, but they’re kind of halfway down, marked “sponsored.” I’m not seeing any for this search term I just searched on. Scrolling further down the page—I’m not seeing any. Results are personalized, though; maybe trying it without personalization would make a difference. But I’m quite happy with what I see. In this particular example…

        Shel Holtz: Hmm.

        Neville Hobson: …it gives me the text upfront, as you know, “to see more”. That will tell me more about that. Again, scrolling down the page, don’t see anything that’s saying sponsored, which is what you normally do see. I don’t know. But I mean, the point is, I think you need to determine yourself: Do you trust what it’s telling you? Are you happy with that result, whether it’s search at Google or whether it’s your favorite chatbot? I was using Perplexity a lot, Shel. I really was. I stopped using it entirely. I didn’t like what it was doing. I didn’t like it at all. Yeah. But I have to tell you, I stopped flipping from one tool to another to see. No, I stick with what I like, what I know works for me. And I don’t bother trying to second-guess it. But let me see what Gemini says about this. Although I do that occasionally, I have to say.

        Shel Holtz: I had stopped for a while and I’ve gone back to it. It’s improved. It has improved considerably in the last couple of months.

        Neville Hobson: I did a research project about two weeks ago where I did spend time trawling different tools and getting complementary or different results. I then had one of those—ChatGPT—summarize it all. But hey, it’s a lot of work and I didn’t need to do that. So I’m not going to do that as a matter of course.

        Shel Holtz: I did. On our intranet, I have a “construction term of the week”. This has been going on for about six and a half years: every week, a new definition of a new term, and I’ve gone through everything that has been provided to me. So now I’m asking an AI: “Give me a list of 20 construction-related terms.” And I’ll get more specific than that; I’ll say, “around water infrastructure projects” or things like that. Then I’ll say, “Okay, I like this one. Give me a two-paragraph definition of it.” I’ll copy and paste that definition into one of the other LLMs and say, “Assess this for accuracy, list what you would change, and then rewrite it to incorporate your corrections.” I find that gets me a much better definition. So I’m frequently bouncing around between these tools.

        I also find that I’ll switch which tool I’m using the most based on who’s released the best model most recently because I find the latest Claude model is just amazing, but then Gemini just released a new one that apparently is blowing Claude away. I want to use the one that’s going to give me the best results, not the one I’m most comfortable with. So I’m changing all the time.

        Neville Hobson: Yeah, I find the one I’m most comfortable with is the one that gives me the best results, and I’m very happy with that. But again, our uses are very different. I don’t use it for the kind of work you do, bouncing definitions between tools to find the best one; I tend not to do that. I’m very happy with ChatGPT Plus, which I’ve been using for a while now, and I use NotebookLM occasionally, particularly when I’m looking at dense academic reports. So the point is, to summarize all of this: our chatbots are new influencers. The New York Times piece is a good, thought-provoking piece. And the caveats, as I saw them, are certainly the risk factor we just spent a while discussing. The question the writer raises in the Times piece (if Reddit posts are disproportionately shaping chatbot answers, are we witnessing the amplification of unverified content?) is a very good point to make. Hence, even more so, and I don’t know how comfortable we’d be with this: you’ve got to verify everything.

        I do that. And depending on what it is… I can’t think of a good example, frankly, Shel, but you know, you’ve spent some time telling your AI system what you want it to do. You might have had a back-and-forth conversation about it; that’s common for me. Not just “here’s a prompt, off you go.” It comes back with something and I say, “Fine, what do you mean by this?” or “I want you to do that as well” or “Yes, that’s good; highlight that.” That goes on all the time. And then the checking of things takes longer still, and I’m totally OK with that. Because I need to be sure, and this must apply to everyone… Or maybe it doesn’t. Maybe it doesn’t apply to folks who work at Deloitte. Sorry, I shouldn’t have said that, but it occurs to me.

        You need to check it for your own peace of mind: that what you’re sharing with the other person, whether it’s a client or a colleague, is accurate to the best of your knowledge; that there’s nothing you’ve done to diminish its accuracy, and nothing you haven’t done, meaning you’ve verified and checked everything. And like you said earlier in our discussion, there’s a lot more to this than just verifying. I get that too. But it takes time, and maybe that’s why people don’t do it. The folks who skip it see the chatbot as the easy tool: somewhere to dump all this stuff so they can take the day off or do other things. I don’t believe that’s everywhere, but some people will think that way. So it is a tricky one to answer. I think we’ve just got to do what we’re comfortable with, what meets our objectives, and take as much care as possible in producing the best work we can.

        Shel Holtz: Yeah, and for communicators, recognizing that chatbots are a new influencer means we have to think about how we take advantage of that. And I’m going to emphasize again: they are a new influencer, not the new influencer. Kim Kardashian has not hung her head in shame and retreated into a dark room to wait to die. She still has millions and millions of followers, and she holds up a product and it drives sales. The old influencers haven’t gone anywhere and still warrant some attention.

        Neville Hobson: Well, true. So the Times, though, says—the question they asked is: Are chatbots the new influencers? So our answer to that would be: No, they are one of the new influencers.

        Shel Holtz: Right. Yes, add them to the mix. So that’ll wrap up this episode of For Immediate Release, episode number 502, our long-form episode for February. We do hope that you will comment on it. All of our comments these days come from our LinkedIn posts, so check LinkedIn and follow either one of us. We also share these posts on Facebook in three places: we have a For Immediate Release Podcast Network community and a For Immediate Release page, in addition to you and me sharing them individually. We’re also on Threads and Bluesky. Leave a comment in any of those places and we’ll pick it up and share it in the March long-form episode.

        You can also send us an email at [email protected], and you can attach an audio file. You can record that audio file directly from the For Immediate Release Podcast Network website; there’s a “send voicemail” tab on the right-hand side. I actually got a voicemail from the website last month (one of those from Speakpipe, the vendor that provides that feature), but it was just somebody being obscene. It had nothing to do with communication, but I got excited. You can also leave a comment directly in the show notes. I mean, it is a blog; there’s a place to put comments in a blog. Go figure.

        Neville Hobson: Wow, should have played it. An obscene phone call, okay.

        Shel Holtz: All these ways to comment, please do and be part of this conversation. And our next long form episode will be recorded on Saturday, March 21st. We will drop that on Monday, March 23rd. Until then, that will be a “30” for For Immediate Release.

        The post FIR #502: Attack of the AI Agent! appeared first on FIR Podcast Network.
