In this monthly long-form episode for March, Neville and Shel tackle a trio of interconnected themes reshaping the communications profession in the age of AI. The conversation opens with Anthropic’s top lawyer declaring that AI will destroy the billable hour. That thread leads naturally into JP Morgan’s controversial use of digital monitoring to verify junior bankers’ working hours, where Shel and Neville question whether surveillance technology can substitute for genuine managerial trust and engagement.
The episode also examines Gartner’s widely circulated prediction that PR budgets will double by 2027 as AI search engines favor earned media. Shel delivers a detailed report on the escalating misinformation crisis, citing a 900% surge in global deepfake incidents and new research from the C2PA on content provenance standards. The episode closes with a discussion of Cloudflare CEO Matthew Prince’s prediction that bot traffic will exceed human traffic by 2027, and a sobering peer-reviewed study on how social bots hijack organizational messaging — research reported by Bob Pickard, who has experienced bot-driven attacks firsthand.
Dan York also contributes a tech report on the state of the Fediverse and Mastodon, as well as on AI developments for WordPress.
Links from this episode:
Links from Dan York’s Tech Report:
The next monthly, long-form episode of FIR will drop on Monday, April 27.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Neville: Hi everyone, and welcome to the For Immediate Release podcast, long-form episode for March 2026. I’m Neville Hobson.
Shel: And I’m Shel Holtz.
Neville: As ever, we have six great stories to discuss and share with you, and we hope you’ll gain insight and enjoyment from our discussion. Perhaps you’ll want to share a comment with us once you’ve had a listen. We’d like that.
Our topics this month range from AI and the end of the billable hour to Gartner’s predictions about PR budgets to monitoring work in the age of AI to newsrooms battling AI-generated misinformation and more, including Dan York’s tech reports. Before we get into our discussion, let’s begin with a recap of the episodes we’ve published over the past month and some of the comments we’ve received on them.
In episode 502 for February, published on the 23rd of that month, we explored how rapidly accelerating technology is reshaping the communication profession from autonomous agents with attitudes to the evolving ROI of podcasting. We led with a chilling milestone moment, an autonomous AI coding agent that publicly shamed a human developer after he rejected its code contribution.
A leader can build goodwill for days and lose it in seconds. In FIR 503 on the 2nd of March, we reported on the president of the IOC, that’s the International Olympic Committee, who had no answers to reporters’ questions and suggested on camera that someone on her communications team should be fired. We’ve got comment on this, haven’t we, Shel?
Shel: Boy, do we have comments on this one. This attracted a good number of them, starting with Kevin Anselmo, who used to have a podcast on the FIR Podcast Network. It was on higher education communication. He says, having previously worked in communications for two different international sport federations, I found this story quite amusing. One of my first PR roles was working at the 2000 Sydney Olympic Games. I was working on the sport federation side, not the IOC.
Neville: Yep, you did.
Shel: But I know that working at such events is exhilarating and exhausting as you have to deal with a myriad of different issues. I can imagine that toward the end of the Olympics, the PR team fell short of delivering a robust brief. But nevertheless, in answer to your question, even if the PR people were abysmal, the fault is on Coventry for the way she handled the situation. A simple, we will have to look into this and get back to you response would have worked.
Instead, by handling it the way she did, she drew unnecessary attention to the questions she and the team weren’t prepared to answer, as you and Neville shared. I guess in the process of this mishap, I learned that Germany was in the running for the 2036 Olympics, which I wasn’t aware of. We also heard from Monique Zitnick, who said, really enjoyed your discussion on this. Certainly a puzzling situation that has surely ended in broken trust on both sides.
Shel: Mike Klein said, another ignominious IOC leader in the mold of Brundage and Samaranch. Neville, you replied. You said that’s an interesting comparison. Mike, Avery Brundage and Juan Antonio Samaranch both left very complicated legacies, particularly around politics and governance in the Olympic movement. What struck me about this episode wasn’t so much ideology or policy. It was leadership under pressure.
Coventry had actually received a fair amount of praise for how she handled some difficult moments during the games, which makes the press conference moment even more interesting from a communication perspective. It’s a reminder that reputation capital can be fragile. A single public moment can reshape the narrative very quickly. Mike replied, yes, leadership under pressure, but also the kind of people the IOC has chosen for leadership over the years.
Coventry has a complicated history over her involvement with her native Zimbabwe’s recent regimes as well. Sylvia Camby said, Neville, watching Coventry’s press conference took me back to the time I spent doing comms for an international association. It reminded me of how inward-looking organizations like the IOC can be. So totally focused on their internal member politics with leaders too lazy or too overconfident to bother to educate themselves about current affairs.
Also, they often have a distorted idea of what the press is interested in. They often think they can dictate their agenda. As you and Shel mentioned on the podcast, the questions were entirely predictable. You replied, Neville, that’s a really insightful observation, Sylvia. Organizations like the IOC can become quite inward facing, particularly when so much of their energy is spent navigating internal governance and member politics. That can create a kind of blind spot about how issues look from the outside.
Sylvia said, and I was thinking, I’m proud of Germany for being so sensitive about the significance of that date and for opposing the 2036 bid. They are much better at reading the spirit of the time than Coventry. As an aside, my father’s cousin competed in the 1936 Olympics in Berlin as a gymnast. She passed away last year at the age of 104.
She often spoke to me of the atmosphere surrounding the Olympics at the time, a heaviness and a sense of unspeakable doom. So yes, 2036 is a date that Berlin should definitely avoid. And you replied to that, Neville. People can go find that one in the comments.
Neville: That’s a good one. There are some great points of view, perspectives there. So thanks to everyone who commented. Are companies using AI as a convenient explanation for layoffs? That was a question we asked in FIR 504 on the 10th of March when we discussed AI washing, when organizations blame workforce cuts on AI, even when the reality is more complicated. It’s a difficult ethical space for communicators. And we have comment on this too, don’t we?
Shel: Three short ones. First from Monique, who commented that she was looking forward to listening to the episode because she’s been having a lot of conversations on this over the last month. Jacqueline Trzezinski said, I’m glad you’re delving into this. The same thought came to my mind when I saw the Block layoff announcement, especially as it was held up by some on LinkedIn as an example of how valuable transparency is during layoffs.
And Jesper Anderson said, I find it fascinating how quickly the world turns upside down. 18 to 24 months ago, companies were accused of letting people go because of AI and not admitting that this was the true reason.
Neville: Good perspective, Jesper, that one. Is social media still social? In FIR 505 on the 17th of March, we explored Hootsuite’s 2026 Social Media Trends Report, addressing social search, AI versus authenticity and more. Plus a darker question: what if AI starts to dominate the conversation? And we have comment, don’t we?
Shel: Yes, from Zara Ramoutoho Akbar, and I sure hope I pronounced that right, apologies if I didn’t. She said, yes, it feels like socials are shifting from a channel to a trust system. And in that world, I would say that the employee and peer voices matter more than brand output. Are you seeing organizations lean into that yet or still treating social as a broadcast channel? And since Zara asked the question, Neville, what do you think? Are you seeing this change?
Neville: No, I’m not, to be honest, but maybe it’s taking its time. There is something afoot without any doubt. And I think it’s something that we should expect. And that darker question is a valid one to put forward, let’s say. And we’ll keep our eyes and ears open, I think.
Shel: Yeah, I haven’t seen it much either, but I do think that there are organizations that are talking about it. So as you say, we may see this start to change in the months ahead. We have one more comment from Dolores Holtz. No relation. I for one certainly rely on people whom I trust more than any name or brand.
Neville: Yeah, I agree. Fair enough.
Shel: I think that covers our previous episodes up to this one.
Neville: Yeah, good, good comments all over from all those episodes. And thanks everyone for listening and adding your comments to that conversation. It’s really terrific.
Shel: Yeah, keep those coming and ask us questions because that was great from Zara. Also up on the FIR Podcast Network right now is the latest Circle of Fellows. It was a good conversation on the communication issues and challenges in this age of grievance, with people isolating into tribes.
Shel: This was Priya Bates, Alice Brink, Jane Mitchell, and Jennifer Wah were the panelists on this Circle of Fellows. As I say, it was really a terrific conversation. The next one is coming up on March 26th, Thursday at noon Eastern time. It’s on crisis communications and especially this idea of the polycrisis, which we heard about from our friend, Philippe Borremans.
The panelists for that Circle of Fellows will be Ned Lundquist, Robin McCaslin, George McGrath, and Carolyn Sapriel. Should be a good crisis-focused conversation. And of course, if you can’t make it at noon on Thursday, it will be available as a Circle of Fellows podcast and the video will be up on the FIR Podcast Network.
Neville: While we’re talking about IABC, let me briefly mention that Sylvia Camby and I hosted a webinar for IABC as part of IABC Ethics Month in February about ethics and AI. We’re actually going to…
Shel: I attended and it was terrific. I was there. It was a great webinar.
Neville: Well, thanks, Shel. That’s great. And we’ve actually had a nice review from someone, which was very pleasing. We’re going to repeat this, specifically for IABC members in the Asia-Pacific region. So if you’re in Australia, India, China, Japan, and maybe right out into the Pacific area, this one’s for you. It’s members only.
The event is AI Ethics and the Responsibility of Communicators. It explores the challenges and responsibilities communicators face when introducing AI, including transparency and trust, stakeholder accountability, and human oversight. It’s on Wednesday, the 15th of April at 6 PM Sydney time. That’s AEST, as I discovered, Australian Eastern Standard Time. You’re no longer on daylight saving time in Australia, whereas we are by the time we do this. So 6 PM in Sydney, or 8 AM UTC. That’s Coordinated Universal Time, or GMT if you’re used to that one. For me, I’m in the UK, so it translates into 9 AM UK time. But 6 PM in Sydney and that sort of time zone area is the important bit. So we look forward to seeing you there.
Shel: 1 AM Pacific time, so I won’t be participating in this one.
Neville: If you’re up, you could join. OK. So IABC will be letting members know about where to go and register, et cetera, I’m sure in the coming days. So just mark your diary in the meantime. Wednesday, 15th of April, 6 PM Sydney time. And let’s get on with things. But first, there’s this.
Shel: I won’t be.
Neville: Right, let’s start with a statement that will make a lot of people in professional services sit up a bit. Anthropic’s top lawyer Jeff Blick says AI is going to destroy the billable hour. That’s of interest to you if you’re a consultant in particular. Blick argues that AI is removing the need for what he calls tedious but lucrative work, the kind of work that firms have historically billed by the hour. And that matters because the billable hour isn’t just a pricing model.
It’s the foundation of how entire professions have operated for decades. But here’s the tension he highlights. Clients want problems solved quickly and efficiently, while the billable hour rewards the opposite: more time, more revenue. AI sharpens that contradiction because now tasks that once took days or weeks can be done in minutes. And that raises a very simple, very uncomfortable question for clients: if the work takes less time, why am I still paying for all those hours?
It’s something I’ve been thinking about quite a lot myself recently. I wrote about this in Strategic Magazine a few months ago, where I argued that AI isn’t killing consultants, but it’s killing the logic of the billable hour. Because the model has always had flaws: it rewards activity over impact. It prices effort rather than outcomes. And as soon as technology compresses effort, the model starts to look outdated. What’s changing now is not just efficiency, it’s expectations.
Clients aren’t necessarily looking to pay less. They’re looking for clarity, predictability, and above all, value that reflects results, not time spent. So we’re starting to see a shift from billing hours to pricing outcomes, from selling labor to selling judgment. And that sounds straightforward, but it opens up some deeper questions. If AI removes the entry-level repetitive work, how do people develop the judgment that clients are now paying for?
If you move away from time-based billing, how do you actually define and defend value? And perhaps most importantly, are firms really ready to let go of a model that has defined their economics for generations? I think what this really points to is a shift in what clients are buying: not time, but judgment; not effort, but outcomes. And the firms that recognize that early will have a very different advantage from those that don’t.
Shel: Well, if AI drives the end of the billable hour, all I will be able to say is it’s about time and thank God something did it. I have never been a fan of billable hours in communication consulting. I can see it in other lines of work. I mean, plumbers bill by the hour, electricians, people who work with their hands tend to bill by the hour, although interestingly, auto mechanics often do not. It’s the labor required to do this particular thing is worth this amount of money. And then there are the parts that you have to pay for.
But the question is, if the model of billable hours goes away in the public relations and communication industry, what do we replace it with? And I know we have talked about this in the past, but it has been a while.
But I remember when I worked — we have both operated in the billable hour environment. And when I was at Mercer, Mark Schuman was also at Mercer. I think he was in their Houston office and he came up and met with the comms consultants in Los Angeles. And he was talking about the value add. And I objected to this. I said, I have a billable hour based on my value and what it takes to cover overhead and make a profit. I think my billable hour when I left Alexander and Alexander was something like $385 an hour. And that should cover everything. Why are we adding something and just calling it value add?
And what Mark said was, if I have an idea in the shower and it took me 30 seconds for that idea to spark and yet it informs the entire engagement with the client and solves a problem and is based on my decades of experience and everything that I have learned — is that really worth only the 15 cents that that 30 seconds would be valued at under the billable rate? That’s ridiculous. The more I thought about it, the more I thought he’s right. That is ridiculous. So why aren’t we billing based on the value of the project?
Now, you can say here’s how many hours it’s going to take to complete that project and use that as a basis to come up with a price to give a client. Or you can look at other things. I think I mentioned on a show several years ago that Craig Jolly and I proposed a communications program for Coca-Cola for a department that was eliminated before we could come to a final agreement because they had actually agreed to this.
And what we were going to be paid for our effort was absolutely nothing. We were not going to bill them for hours. We were not going to bill them for the value of the project, but they were going to track the outcomes of the work that we did. And they were going to pay us 5% of the savings that accrued as a result of what we did and 5% of the profits that accrued based on what we did. And we had a formula for that. We would have made a fortune over, I think, the three years that we were going to get compensated after this project was complete.
There are other models out there that people can consider, but you’re right. I’m wondering when the clients are going to start saying, this is what I paid last time. Haven’t you started using AI? Why isn’t the drudge work that is part of this project taking less time and costing me less? I think we’re going to hear that from clients. So you better start thinking about the new models.
Neville: Yeah, it’s a sea change. It’s quite a significant change in structure to move from the billable hour. And one reason I believe nothing’s happened is there is definitely no groundswell of desire to change this from the people in organizations who would likely suffer most if it did change, or even from those who wouldn’t.
And there are lots. I’m not picking out anyone in particular, but there are lots of people who just don’t like change. And we’ve been doing this for years. It works. Our whole business is based on this. And it’s probably going to take a major client of a major consulting firm to say, hang on a second, we have a question for you about how you’re charging us. I’ve seen lots of chats about this, Shel, and I’m sure you have. And yet nothing’s happened.
So I wrote a lengthy analysis on my own blog not long ago, and that hardly got any attention at all. The story in Strategic I wrote was quite heavily researched, but I’ve not really seen much, if any, real traction on that other than some folks who said to me, hey, nice article you wrote in Strategic. I’d rather hear them say, I didn’t like it, here’s why, or I’ve got a better idea, or whatever. Get a conversation going about it.
One thing I think should stimulate a discussion is, and this could be something we’ve got to force on people: look at it from the point of view of the client, not the consultant. And by the way, all these other examples you gave, like plumbers and all that, are absolutely right. So this discussion is specifically about professional services and consulting, not auto mechanics and plumbers and stuff like that.
So think about this: clients aren’t buying less, they’re buying differently. That’s the thing. I’ve had conversations with people — I have to admit, I struggled, truly seriously struggled, to get the conversation to continue with any energy on why we should make this kind of change. So clients aren’t buying less, they’re buying differently. And one thing I wrote in the Strategic piece was about what their expectations are from the people who advise them, the consultants they work with. Today, they expect advisors who: one, use AI to scan signals and surface insights; two, bring sharper data-informed recommendations; and three, help avoid ethical, legal, and reputation missteps. Three major things they expect from people. AI has a role in all of them.
I think we need to move away and we can take the initiative on this to change the conversation with clients to this as opposed to, well, draft that report for the clients and AI can do all the research and so forth. When clients ask, why am I paying for all this time? You could pitch that to them in the sense that this is the value of the briefing we give to the AI. I think that is a demolishable argument over time. Clients are like you and me, they’re people, they’re not stupid. They’re looking at this themselves, many of them.
That said, there are many clients, particularly the more you get to the enterprise level and those kind of consulting firms at that level, who really don’t have much desire to rock the boat at all with all of this. It’s very entrenched, it’s ingrained. Everyone’s making money and it’s all wonderful and business gets done. And it’s going to need something to make a major shift here.
So I think we should take the initiative as communicators to do this. And it could be someone in a consulting firm — like you, I worked for Mercer and I remember back in the early ’90s, not discussions about changing the business model, but the value add. So maybe this is a Mercer thing at that time, perhaps. We need to have that conversation now. And we need someone at a senior level with an influential voice to raise this internally in their organization and run some internal webinars or seminars or get-togethers to talk about why we need to change the business model and why the billable hour has to end as the basis for business. But it’s a big task, I would say.
Shel: One of the truths about the public relations industry is that it takes pain for the industry to change. I mean, we’ve seen this. We’ve been doing this show for 21 years and we’ve seen it with a number of major technologies that have come along that the PR industry has been very, very, very slow to adopt. And what ultimately got them to adopt the web and social media was seeing work taken away from them by boutiques who were offering those services. And as soon as they saw money left on the table, they said, we’d better figure this out because this is something that we should be doing. They figured it out and now they’re using it regularly.
You’re absolutely right that we in the industry have experience and insights that allow us to do things like create the appropriate prompt to get the right result for a public relations issue or campaign or what have you. And it goes far beyond the prompt. It goes into creating documents that become foundational to a project within one of the LLMs. It even gets into agents now. What if we set up an agent on behalf of a client that is out there looking for competitive information on a regular basis? And it took, let’s say, 15 hours to create this agent so that it was producing the kind of daily or hourly reports that we’re looking for. And those become a big part of the project. It’s operating while we sleep. We can’t charge for that. Certainly it’s not going to be on an hourly basis.
So a formula has to emerge for these types of things that allows agencies to be compensated in a way that keeps the lights on, provides the salaries to the consultants who work there, and earns a reasonable profit without having to bill hours because it just makes less and less sense. And as I say, I didn’t think it made sense back in the ’80s when I was working for Mercer, my first consulting gig.
You remember maintaining your time sheet in 10-minute increments? Oh my God. Who’s going to pay me for that? Who do I bill for the time that I spend maintaining a time sheet in 10-minute increments? I mean, come on.
Neville: Don’t remind me, please. I tried to get away with entering time in the timesheet for the time I had to spend on doing the timesheet. They didn’t let me get away with that. No.
Shel: They didn’t buy that. My brother’s an attorney, and when he was working for a law firm — he’s corporate side now — but he remembered if he took a pencil out of the supply cabinet, he had to bill that to a client. So I mean, the time that he was spending billing things to clients was time that he wasn’t spending on client work. There are countless reasons why the billable hour needs to die. I don’t mind the consultant having a billable hour rate as a base for calculating something, but it shouldn’t be the be-all and end-all of what the client is billed. There needs to be a formula where you say this is what the project is going to cost. And if the project moves out of the scope that you agreed to, then you go back to the client and say, we’re outside the scope. We’re going to have to charge more for that. Here’s what we’re going to charge. You okay with that before we start moving on this stuff that you’ve requested that is out of scope?
Neville: Yeah, no, we need to get some movement going on this topic, I think. And maybe that’s something — thinking about IABC, you know, some kind of talk on this topic needs to happen.
Shel: Yeah. Or, you know how Ann Handley sold the T-shirt that said Justice for the Em Dash? I bought one. We need T-shirts that say Kill the Billable Hour with the FIR logo on it. Would anybody buy that? Let us know. We’ll pursue it. I’ll find out where Ann had her shirts made.
Neville: Yeah, I like that idea. I like it. Excellent.
Shel: If you work in public relations, you’ve probably seen the prediction that’s making the rounds right now. It sounds too good to be true. Gartner, the analyst firm whose pronouncements tend to get circulated in agency pitch decks for years, has declared that by next year, 2027, the mass adoption of artificial intelligence and large language models as a replacement for traditional search will drive a doubling of PR and earned media budgets.
Now, what would drive this surge in PR spending, you ask? Well, AI answer engines overwhelmingly favor non-paid sources. More than 95% of links referenced in AI-generated answers come from earned, shared, and organic owned content, with 27% originating directly from earned media. So if AI is where people increasingly go for information — and by the way, the data on that is striking; ChatGPT saw traffic surge 608% year over year between the first half of 2024 and the first half of 2025, while traditional search giants Google and Bing both slipped — well, then earned media becomes the engine of discoverability. And that, the argument goes, means organizations will pour money into PR to stay visible.
Now, I want to be honest about the source here, because Stuart Bruce, someone whose thinking you and I have always admired and respected, Neville — Stuart has pointed out that this prediction originated in a blog post published by Gartner as part of a lead generation campaign promoting a webinar for chief communication officers, and that while it carries the authority of the Gartner brand, it lacks the evidence normally associated with their research publications.
Frank Strong over at the Sword and the Script notes similarly that the prediction feels rushed. 2027 is barely more than eight months away and the path from “AI favors earned media” to “budgets actually double” is pretty far from certain. But I’m cautiously optimistic because the underlying logic is sound.
If AI systems favor credible third-party sources and PR is the function best equipped to generate that kind of coverage, well then yeah, our work becomes more strategically important. But a Gartner webinar promo is not a Gartner research report, and we should resist the temptation to tout this prediction as if it were settled fact.
Here’s what I actually want to talk about though. Let’s say the prediction is right. Let’s say the prediction is half right. Let’s just say budgets grow substantially. What happens to that money? Because there’s a pattern in this industry that I think we need to name directly. When good fortune arrives — a new platform, a new capability, a shift in the media landscape — agencies have historically been better at capturing the upside than at reinvesting in the profession. More revenue has meant more of the same: more accounts, more billable hours, more senior hires, not more rethinking.
And right now, in the age of AI, there are two investments that I think agencies have an obligation to make if this windfall arrives. The first is genuinely rethinking the agency model in light of AI — not just adding a chatbot to the workflow, but asking the hard questions about what services still require human judgment, where AI can amplify capacity, and how to build new offerings around answer engine optimization. And by the way, a new billing model.
Stuart Bruce notes that Gartner explicitly rejects the efforts of SEO and marketing companies to pivot into this space, recognizing that answer engine optimization requires communication-specific skills to balance stakeholder trust and platform requirements. That’s an opening for PR, but only if agencies actually build those capabilities rather than outsourcing them to MarTech vendors.
The second investment, and this one matters a lot to me, is in rebuilding entry-level pathways into the profession. AI has already been eroding the grunt work that used to serve as the training ground for new communicators. As one analysis put it, the traditional deal of entry-level work — trading rote labor for mentorship — that’s dying. The learning curve is being automated, leaving early-career professionals stranded between AI agents and senior incumbents.
If PR budgets double, agencies will have the resources to do something about this. They could create structured apprenticeship programs. They could invest in training that teaches new communicators not just to use AI tools, but to supervise and interrogate them. They could build the next generation of practitioners rather than simply eliminating the entry points.
What I fear, and what I think is entirely possible, is that agencies will look at this budget doubling as a margin opportunity rather than a reinvestment opportunity. More revenue, leaner teams, higher profits. And five years from now, we’ll be asking where the next generation of PR professionals are going to come from.
So yeah, the Gartner prediction may well be right. AI does appear to favor the kind of credible third-party earned coverage that PR generates. And that’s genuinely good news for the profession. But good news is only useful if you do something smart with it. Neville, you’ve been watching the agency landscape in the UK and Europe for a long time. When you see a prediction like this, do you believe it? And what’s your read on whether the industry will rise to the moment or just cash the check?
Neville: I must admit, I did say when I saw the article, I don’t believe it. British TV viewers might recognize that phrase from a comedy show 20 years ago. I did follow a lot of what people were saying, and all I saw was bubble, bubble, bubble, hype. What struck me was what was missing: this was a marketing claim, as you mentioned, and Stuart Bruce wrote about that, and others have too, just pointing out this was a blog post from Gartner. There’s no data to back up any of it. There’s nothing cited. There’s nothing you could trust to prove it or to give you confidence in repeating it. Yet that’s what everyone has been doing, repeating this as fact.
The particular phrase that was repeated by Gartner and then mass repeated: by 2027, mass adoption of public LLMs as a replacement for traditional search will drive a 2x increase in PR and earned media budgets. But there’s no evidence behind that. Yet what we saw was mass repetition all over, LinkedIn in particular.
I did read an article worth reading by Stephen Waddington, published on the 16th of March on his blog, about this topic. And he’s critical. I think his starting line is “when industry optimism outruns the evidence,” and that’s where we’re at with this. I’ve seen sensible voices (you, Stuart, and others) saying that if this is true, then this is what it could mean, this is what could happen. But like a lot of things we see, the maybe, perhaps, could, etc. gets brushed under the carpet, and before you know it, this is what’s going to be happening.
So I’ve not seen a huge amount of conversation about this, to be honest, except when this first appeared. That said, today I saw two posts on LinkedIn from people repeating this who obviously just came across the Gartner piece and they’ve reposted it.
Shel: The long tail lives.
Neville: Exactly. So Stephen goes into — he makes a point in his post about GEO, and I think that’s actually contextually good. He’s saying Gartner’s observation may ultimately prove correct. But the path from the insight to a doubling of budgets is far from certain. He says, GEO remains highly contested. I’ve seen others saying that too. The mechanics of how AI models select, weight, and attribute sources are still evolving. This is an era where budgets are being directed to support discovery work.
So what needs to happen instead, he says, is a call to action, I suppose, to communicators. When you see this claim being made, please challenge the argument. And if we aren’t set to see a boom in public relations work, some of that investment will need to be diverted to ensure the sustainability of earned media. And that, to me, is a very sensible point to make.
All of this is, in fact, why I didn’t post about this on my blog. When I saw it, I was attracted to it, thinking this could be an interesting topic to stimulate some attention. Then I read it and started seeing others like Stuart saying, wait a minute. So I thought, no, I’m not going to join a hype bandwagon here without some further research. It didn’t appear compelling enough to me to spend the time on it. Let’s see what emerges further from this, if anything. But like you said, Shel, if this turns out to be true, then happy days.
Shel: Yeah, I doubt it myself. I think what we’re going to see is an incremental increase in PR spending as a result of this. That’s because we’re not going to see some mass revelation across all industries at the same time that, my God, we need to invest more in earned media so that we’re visible in search results that are now happening on LLMs instead of search engines. This is going to be gradual.
One company is going to pick up on it, then another. But what I have seen ongoing, regularly, are new reports, new studies, new research coming out. It all validates that LLMs are in fact generating their search results based largely on earned media. And I think as people wake up to that and realize that if we want to be present in those results — it’s like showing up on the first page of Google search results — we want to be in the answer when somebody asks a question where our expertise, our thought leadership is relevant. Then you need to bolster your earned media.
One of the things that worries me though about this bolstering of earned media is how many more press release pitches am I going to get? How many more press releases that have nothing to do with me or what I do are going to show up in my inbox? You’re going to see reporters pitched way more than they’re being pitched now. And there may be some blowback from this as a result of that. It’s like, hey, PR industry, back off — too much. So there’s also that to consider.
Neville: Yeah, I agree. So don’t believe everything you read online is a simple thing here, and take time to pay close attention to what people are saying about this before you repeat anything. Just be clear in your mind.
Shel: Yeah, I was also going to say that I think owned media, the stuff that you produce on your own website — I think a renewed emphasis on that. So you’re producing really interesting stuff that people start looking at. That counts, too. That’s one of the categories of media that was included in this research. So you don’t have to rely on earned media all that much if you can do a great job of producing that content.
Neville: Good tip. OK, so earlier we talked about how work is priced. That was our piece about the billable hour. Now let’s consider how work is measured, because there’s another story that feels connected but from a different angle. The Financial Times reported that JP Morgan has started using technology to check whether the hours junior bankers say they work actually match their digital activity — things like keystrokes, meetings, and video calls. The bank says this is about well-being, about awareness, not enforcement, about making sure people aren’t overworked. And on the surface, that sounds reasonable.
But when you look a bit closer, it raises some uncomfortable questions. What’s really happening here is a shift from reported work to observed work. Not what you say you did, but what the system can verify. And that’s where the reaction gets interesting.
If you look at the comments on the FT’s post about this, there’s a very clear pattern. Some people see this as logical, almost inevitable. In a data-driven industry, of course you measure activity more precisely. But a lot of the reaction is skeptical, even uneasy. You see comments like, “this really screams we trust our employees.” “This is a classic case of measuring what’s easy instead of what matters.” “Big Brother is watching you.”
And then there’s a more nuanced point that comes up repeatedly. Does this actually improve anything, or does it just change behavior? Because if people know they’re being measured on activity, they optimize for activity. More keystrokes, more visible presence, more signals that look like work — but not necessarily better outcomes.
And that connects directly to the earlier discussion about billing. If AI is automating more of the actual work — the analysis, the modeling, the drafting — then what exactly are we measuring here? Time, activity, presence, or value?
There’s also a deeper cultural question. Investment banking has long had a reputation for extreme hours. JP Morgan has already tried to address that, capping weeks at 80 hours, for example. 80-hour weeks. The days of 40-hour weeks are a distant memory, obviously. But if people were underreporting hours to stay on deals, then the issue isn’t just measurement — it’s incentives, it’s culture. Technology can surface that, but it doesn’t resolve it.
So this opens up some bigger questions. Are we moving towards a world where all knowledge work is continuously monitored and verified? Does that improve trust or undermine it? And if both pricing and measurement are shifting at the same time, what does a fair day’s work even mean anymore?
Shel: Absolutely. One of the things we keep hearing about AI is that organizations are going to have to rethink things like workflows. And we’re talking about organizations that are not going to look at all in five years the way they do today because of AI. Do people really think it’s still going to take somebody 40 hours to do what took them 40 hours before, if all of that grunt work is being taken over by AI?
On the other hand, I have seen that AI has increased the number of hours people are spending on their jobs. There’s some very recently released data on that, that they are more stressed now with AI in the picture. And if you’re putting in more hours, is this really an issue?
I’m also always struck by, as you mentioned in the report, the lack of trust, the signal of the lack of trust that this sends. I’ve always felt that the availability of these tools that allow this kind of monitoring raises the question of, you know, just because you can, should you? And yeah, I don’t think that you should. I think there are better ways to determine whether your people are working, and looking at their outputs is the best of those. Have they delivered what you expected them to deliver?
Because when you destroy the trust that you might have had, or perhaps you never had trust in your organization in the first place, if you have new hires who come in and find that they are being monitored in this way, they’re just inclined to find ways to cheat. I saved an article in my link blog not too long ago from the HR Digest about key jamming.
The point of this was that if you have employees who are doing this, you have a bigger issue. But if you haven’t heard of key jamming: these are cheap, easily available devices that remote workers place on their keyboards to press a key continuously. To the monitoring software, it looks like the keyboard is active and the employee is working those hours, when they could be off doing whatever they want.
I imagine some keystroke monitoring software has been updated to address this, checking that people are typing real words or real numbers and not just striking the same key repetitively. But then employees will figure out the next thing, or the companies that sell these products will figure out the next way to make it appear that the employee is working.
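For the technically curious, here is roughly how that kind of check could work. A trivial heuristic for catching a jammed key is to measure the entropy of a keystroke sample: a single repeated key carries almost no information, while real typing does. This is a hypothetical sketch, not how any particular monitoring product actually works; the function names and the 1.5-bit threshold are illustrative assumptions.

```python
import math
from collections import Counter

def shannon_entropy(keys: str) -> float:
    """Bits per symbol in a keystroke sample; a single jammed key yields 0."""
    counts = Counter(keys)
    total = len(keys)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_jammed(keys: str, threshold: float = 1.5) -> bool:
    """Flag samples whose entropy sits far below normal typing.

    English text measures roughly 3-4 bits per character at the
    single-symbol level; a stuck key sits near zero. The threshold
    here is an illustrative assumption, not a vendor's real setting.
    """
    return len(keys) > 0 and shannon_entropy(keys) < threshold

print(looks_jammed("aaaaaaaaaaaaaaaa"))               # a jammed key
print(looks_jammed("the quarterly report is ready"))  # real typing
```

Of course, as Shel notes, this just moves the arms race along: the obvious counter is a device that replays realistic-looking key sequences instead of one stuck key.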
Better to build trust so that the employees will want to produce great work for the organization that they love working for than to destroy trust and implement these kinds of monitoring tools.
Neville: So it’s interesting. JP Morgan is quite resolute in their defense of this, because as they say, they’re doing this to help junior employees not overwork. There was a case here where an intern at the Bank of America died in 2013, which the coroner said was linked to long working hours. And the anecdotal stuff has emerged constantly since then on people who are totally wrecked emotionally because of the hours they’ve got to work.
To be fair to JP Morgan, they’ve responded to that at scale across the organization. The trouble is that nearly every comment I see on this is extremely skeptical about their true motive. So they’ve got a credibility problem to explain this well. Their prepared statement says this is about awareness, not enforcement; it’s designed to support transparency and well-being and to encourage open conversations about workload. They’re going to roll it out much more widely across the organization.
The estimate is based on employees’ weekly digital footprint, including video calls, desktop keystrokes, and scheduled meetings. So people being people, and the thrust of part of the article is what some of these junior employees are doing to kind of be counted and get the checkbox that you’re doing okay to enable them to spend time on the deals that they’re trying to close. Whereas if they did this to the letter and reduced the hours, they wouldn’t be able to close the deal. So I get that. So they’ll find ways to work around this.
And I think, is this inevitably what we could expect to see in every organization? Or surely the organization should approach this in a way that presents something to the employees that doesn’t encourage workarounds to get around these kinds of things. I don’t know. My sense is that we’re going to see a huge amount more of this kind of thing in service industry firms in particular, starting with banks, I suspect.
Shel: I hope not. I mean, let’s take them at their word. Let’s say that this is their solution of having Big Brother looking over employees’ shoulders for the employees’ benefit. Like I said, let’s take them at their word. They don’t want employees overworking because they don’t want them dropping dead at their desks. Great. That’s a great thing.
You do that by having well-trained managers who understand that their role is to set expectations and to display the kind of caring for the members of their teams that leads them to make sure that they’re not overworking. Where I work, we are working really hard in communications, in HR, and at the executive levels to develop this culture of managing where managers are checking in on employees to make sure they’re okay. We’re training managers on watching for signs of mental wellness distress among employees and then reaching out to them to say, hey, let’s take care of this, right?
It sounds to me like JP Morgan would rather implement a Big Brother program than to have engaging managers, one of the pillars of employee engagement, I might add. Why do people leave organizations? 50%, according to some research, leave because of their boss. And you know, if you have this churn among your junior people, maybe that’s because you’re doing a piss-poor job of training your managers to be really good managers. And if you did that, you wouldn’t need to erode the trust of your employee base by implementing Big Brother systems.
Neville: That makes total sense. I agree with you. But I’m wondering, maybe there’s something structurally amiss here. So for instance, the FT says in 2024, JP Morgan appointed a senior banker to oversee the well-being of junior staff. JP Morgan has since curtailed weekend work and also capped the working week for younger employees at 80 hours, typically based on self-reported numbers. That’s key, that last bit.
This process has proved imperfect as some junior bankers misreport the hours they work. One issue is they declare fewer hours than they have actually spent to avoid being pulled from existing deals or to ensure they can still be added to new ones. So I would say, if we kind of know this kind of behavior is going on, what are we going to do to address it and try and bring them around to our thinking? But that requires structural change in the organization as to how you do all this.
Shel: I have an answer. If AI is saving you money, use that money to hire more junior people so that nobody has to put in that kind of time. So staffing should increase as a result of the use of AI, not decrease, says I.
Neville: Are you listening, JP Morgan? Well, yeah, no, that’s a fair comment. Just reading a bit more of the FT piece, it focuses on workplace surveillance technologies generally. So it’s not necessarily AI doing this, although AI must be in there somewhere.
Shel: No, no, I understand. But if we’re using AI in the organization and it’s lowering costs because the rote work is being done by the AI, those savings could go to the additional staff. So nobody has to put in 80 hours.
Neville: Yeah. Well, I think it’s a problem across the sector because the FT quotes Goldman Sachs, for instance: junior bankers on occasion have been pulled aside and told to rest when its internal electronic monitoring was triggered. Get that. That’s how they’re watching all the time.
I think the comment someone made on the FT’s piece about, you know, we’re going to see more of this — I think we will. It is clearly not perfect. I’m reminded a little of some of the stuff I paid a lot of attention to a couple of years ago about surveillance in China and the surveillance society in China, where you are monitored constantly all the time by the state. And it doesn’t necessarily mean central government, but the local way you live — the town, the city — monitors everything you do: what you spend your money on, what time you get up, what time you get on the train to go to work, how you clock in, you swipe your card — all that.
That’s something as part of their society and structure. We are probably heading that way, I would argue, in Western countries, notably in Europe, some European countries. I don’t know about the States, Shel, to be honest. I don’t really know whether this is likely to be kind of prevalent anytime soon. I wouldn’t be surprised if it is, particularly if it’s going to be done covertly as opposed to openly and transparently, which I think is likely in America.
Shel: Well, mass surveillance has definitely been in the news in the US lately with Anthropic pushing back on the Pentagon’s insistence that they be able to use Claude for that.
Neville: Yeah, I mean, we’ve got experiments going on here which make the headlines now and again, although no one seems to be unduly concerned, which is the police in some jurisdictions are trialing more facial recognition technology that is now far superior to what’s been done before, that scans people as a matter of course in any public place. That, I would say, is an inevitability. We’re going to see that.
So what does that mean for organizations? That’s a broad avenue to go down, the discussion on that wide topic. But in an organization, it surely does become understandable, if not acceptable, that you’re monitored when you show up at the office to work. And by the way, showing up at the office is still a thing for many organizations, even though I’m now seeing in all the newspapers here that, because of the war in Iran and the price of oil shooting up, there’s talk that one way to help reduce energy usage is to work from home, drive less, and drive slower.
So that kind of talk is now starting to permeate public discourse. So I wonder what difference that will make to any of this, because if we’re to see more and more people want to work at home, that’s reversing. Are we going to see a backlash from employers who demand people come to the office? I mean, these are just questions. I don’t have answers for those, but it’s part of the picture. We are facing this kind of change that has good points, I can see quite clearly, but it’s alarming the state we’re at with all of this.
Shel: Yeah, just for a point of interest, yesterday I watched a video on YouTube. It was Senator Bernie Sanders talking to Claude. This is on YouTube. I’ll share the link in the show notes. He’s asking Claude questions about what AI can do in terms of this kind of surveillance, its monitoring of people. And Claude is very, very candid in its answers to Senator Sanders. It’s about 11 minutes. I think it’s really worth watching because it surfaces a lot of these issues, and as a society, I think we have to decide whether this is something we want in the workplace or in general.
Neville: I agree. That’s interesting.
Shel: Well, thank you, Dan. Great report. I have to admit that I have been neglecting my Mastodon instance. It’s called Mastocomm, C-O-M-M, for communications. I set it up when I figured that it was an easy thing to do and a great way to learn about how to establish an instance in the Fediverse. And I haven’t been taking care of it lately. And Dan, your report has inspired me to go back. I’ve been away so long, it wanted me to log in.
But it’s still there. It’s still up and running, which means I still have money coming out of my checking account every month to pay the fee to the service I use to host it. So as long as I’m spending the money, I might as well manage that. So thanks for the reminder, Dan.
Neville: Yeah, good report on that. I’ve not listened to your audio yet. But thinking about Mastodon, I don’t go directly to Mastodon. I haven’t been there this year. What I do is every time I post on Threads, it posts to the Fediverse. And so I do it that way. It’s cheating a bit because I’m not actually engaging with anyone there at all. But I get quite a steady stream of engagement back, people who like and so forth. And I do occasionally do the same myself via Threads. So it’s a lazy approach to doing it. But I’m okay with that because I’m present via Threads and that works well. And it’s a useful way of keeping in touch. If Threads is more likely to be your primary engagement channel rather than Mastodon, that’ll work quite well.
Shel: If anybody’s interested in joining the Fediverse and being part of a Mastodon instance that is focused on communication, join me: mastocomm.org. I’ll look for you there.
Shel: A professor at Syracuse University’s Newhouse School recently made a point that deserves to be heard beyond the J-school world. Jason Davis, who specializes in detecting disinformation, said the challenge today isn’t really about spotting fakes anymore. The AI tools are so good now that there just isn’t much that we can catch. To break the misinformation amplification cycle, people need to apply critical thinking before they decide to pass something on.
Now that connects to something I’ve been watching closely, because the misinformation problem has moved well beyond being a journalism problem. It’s a business problem now, and that means it’s a communication problem. The scale is significant. Deepfake incidents tracked globally surged from about 500,000 cases in 2023 to over 8 million last year, a sixteenfold increase in just two years. A recent executive survey found eight in 10 executives are concerned about AI-driven misinformation impacting their brand. Yet many admit their companies aren’t fully ready to detect or respond.
A University of Melbourne/KPMG global study of 48,000 people across 47 countries found 87% want stronger laws to combat AI-generated misinformation. And a survey found that fewer than four in 10 Americans say that they can confidently spot AI-generated content, and 88% say it’s harder now than a year ago to tell what’s real online.
So who’s fighting back and how? Sophisticated newsrooms — think the New York Times, Bellingcat, investigative outlets worldwide — are now using multi-layered verification: a combination of reverse image search, metadata analysis, and geolocation cross-referencing to authenticate content. Reporters are using AI itself as a detection tool, analyzing thousands of posts to detect bot behavior by identifying patterns in timing, repetition, and network activity.
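To make the timing-and-repetition idea concrete: one common heuristic flags accounts whose posting intervals are unnaturally regular, since a simple scheduler posts at near-constant cadence while human activity is bursty. The following is a minimal illustrative sketch, not taken from any newsroom’s actual tooling; the coefficient-of-variation threshold is an assumption.

```python
import statistics

def interpost_intervals(timestamps: list[float]) -> list[float]:
    """Seconds between consecutive posts, oldest first."""
    ts = sorted(timestamps)
    return [b - a for a, b in zip(ts, ts[1:])]

def looks_automated(timestamps: list[float], cv_threshold: float = 0.1) -> bool:
    """Flag accounts whose posting cadence is suspiciously regular.

    Human posting is bursty (high variance in gaps); a scheduler posts
    at near-constant intervals, so the coefficient of variation
    (stdev / mean) collapses toward zero. Threshold is illustrative.
    """
    gaps = interpost_intervals(timestamps)
    if len(gaps) < 5:
        return False  # not enough evidence to judge
    mean = statistics.mean(gaps)
    if mean == 0:
        return True  # many posts at the exact same instant
    return statistics.stdev(gaps) / mean < cv_threshold

# A scheduler posting every 600 seconds exactly vs. a bursty human
bot = [i * 600.0 for i in range(10)]
human = [0, 40, 55, 3600, 3700, 9000, 9050, 20000, 20100, 50000]
print(looks_automated(bot), looks_automated(human))
```

Real detection systems combine many signals (network structure, content repetition, account age), but timing regularity alone already separates the two toy accounts above.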
Beyond individual newsrooms, the Coalition for Content Provenance and Authenticity, that’s the C2PA, is building broader infrastructure. They’re backed by Adobe, Microsoft, the BBC, Google, Meta, OpenAI, and others. With that backing, they’ve developed an open technical standard that functions like a nutrition label for digital content, establishing its origin and edit history. The U.S. Cybersecurity and Infrastructure Security Agency endorsed this approach in January last year. Adoption is still limited, but the standard exists and it’s worth watching.
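The C2PA “nutrition label,” stripped to its essentials, is a cryptographic binding between the content bytes and a signed record of origin and edit history. Real C2PA manifests are signed CBOR structures with X.509 certificates, embedded in the file itself; this toy sketch only illustrates why any post-signing manipulation becomes detectable. All names here are illustrative.

```python
import hashlib

def content_hash(data: bytes) -> str:
    """SHA-256 fingerprint of the content bytes."""
    return hashlib.sha256(data).hexdigest()

def make_manifest(data: bytes, origin: str, edits: list[str]) -> dict:
    """Toy 'nutrition label': binds origin and edit history to the bytes.

    A real C2PA manifest is a signed structure embedded in the asset;
    this unsigned dict only demonstrates the binding idea.
    """
    return {"origin": origin, "edits": edits, "hash": content_hash(data)}

def verify(data: bytes, manifest: dict) -> bool:
    """Any change to the bytes after labeling breaks the recorded hash."""
    return content_hash(data) == manifest["hash"]

image = b"...pixel data..."
m = make_manifest(image, "BBC News camera 42", ["crop", "resize"])
print(verify(image, m))                # untouched content
print(verify(image + b"tampered", m))  # manipulated content
```

The signature layer (omitted here) is what stops an attacker from simply regenerating the label after tampering, which is why the standard leans on certificate authorities.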
There’s also a striking research finding from a field experiment with readers of the German newspaper Süddeutsche Zeitung. Exposure to AI-driven misinformation reduced overall trust in news, but actually increased engagement with highly trusted sources. As synthetic content proliferates, credibility becomes scarcer, and as a result, becomes more valuable.
That finding has direct implications for us in organizational comms. A deepfake of your CEO, a fabricated press release, a manipulated earnings statement — these are no longer theoretical. A hacked news tweet in 2013 briefly erased $136 billion from the S&P 500. The tools to do something far more sophisticated are now consumer grade.
Deepfake fraud attempts grew by 3,000% in 2023, and humans detected manipulated media only 24.5% of the time. So practically: monitor for impersonation of your executives and brand. This belongs in your communications infrastructure. It’s not just an IT thing. Establish a verify-first culture inside your organization. Have pre-drafted response templates ready for the scenario where fake content goes viral under your or your organization’s name.
And invest in your organization’s credibility before a crisis arrives, because that research finding tells us audiences under information stress return to the sources they already trust. The newsrooms dealing with this are systematic. They document their processes and when they can’t definitively authenticate something, they say so. That’s the standard every comms team should hold itself to.
Neville, I know you’re watching all of this from across the Atlantic where the EU AI Act is pushing content labeling into requirements under law by August 2026. Are organizations taking this seriously? And is this regulatory pressure in Europe making any difference?
Neville: To your last point, I don’t think it’s making waves-type difference. Awareness is rising. I’m seeing more people talking about this topic online across Europe, here in the UK too. But I think it requires far more and more effective communication to bring the messaging home to people about this huge topic. So it’s early days.
We’ve got debate continuing here in this country about online safety and all these other issues that kind of obscure some of the important details such as this, for instance, that does require further debate. Things that I pay attention to certainly are the broad debates about all of this, but seeing what people are doing. You mentioned some examples in your introduction about some media broadcasters in particular, what they’re doing to verify the veracity of content. I saw an excellent article the other day about what Wikipedia is doing in this area, because there’s a place that’s at high risk of misinformation and disinformation.
But there’s no uniformity from what I’ve seen, certainly. There’s lots of homebrew solutions people are suggesting. There’s lots of good solutions some respected organizations are suggesting that you do, but there’s not a big groundswell of action on this yet, it seems to me. So I’d be interested myself even to hear what listeners in the UK and across EU countries have to say about what they’re seeing in this area. But I don’t see a huge amount of conversation going on about this.
Shel: And I’d really appreciate, listeners, if you’re in organizations that are doing anything to identify misinformation and to catch it before it’s used or even redistributed — what are you doing? How are you going about that? Is there any infrastructure for this that’s being implemented? I’d really like to know because I think this is going to become a bigger problem faster than most people are aware of.
Neville: Yeah, I mean, one thing I am seeing talked about, and it caught my attention quite dramatically, is the amount of fake news in a broad sense, and misinformation particularly about the war in Iran: the use of video that is simply fake. I’m also seeing genuine video that has to be explicitly highlighted as not being fake.
The reality though is that like most things you encounter online, how do you really know? And what do you do if you see something you think, I’m going to share that with my network? What do you need to do before you do that? Most sensible people will take those precautionary steps, the most fundamental of which: how do you trust what you’ve seen? Is the source credible? Is it a reliable source? If it’s a media property, or even before that, who else is talking about this?
So these are things that I do as a matter of course now on almost everything I encounter online, particularly if I’m thinking of sharing it. I’ve yet to be caught out by not doing that. I make it a point, and partly it’s affected by the fact I’m doing less of that than I was before a couple of years ago, far less. I don’t post a lot on social networks, except stuff that I think is really interesting to share with people who follow me, or just because I feel like I want to share this because I think it’s interesting.
And that works. No other heavy message behind any of this stuff. But I do carry out due diligence. And I think I do it reasonably well because I’ve yet to be caught out. Now, of course, someone listening to this might say, well, let’s test him out on something then. OK, fine.
Shel: Now that we’ve heard you say this…
Neville: So, right. Go for it and do that. Let’s see how we go. But I think this is the status of where we’re at. The changes that are happening because of world events, and the fact that these so-called bad actors are increasing in number. We have events taking place in the world now, note what’s going on in the Middle East, that lend themselves to more of this. You’ve got to really do your due diligence on things that you might not have felt you needed to before.
Shel: Yeah, and I think due diligence needs to go beyond the tools that can detect a deepfake. You’ve got to remember that people were sharing content that was disinformation before there was AI. So you run your algorithm, you put a video through a tool and it says, yep, this is real video, it’s not AI generated — but it’s claimed that that video is showing something from the Iran war when in fact the video was shot years ago during, say, the Iraq war, and somebody just grabbed that video clip and made the claim that this is from the current conflict. This happens all the time. It still happens today. It’s not from this weather event. That’s from that weather event five years ago.
So we have to be diligent, not just rely on the tools, and we have to come up with some solutions. I remember years ago, when we reported it here and blockchain was still a topic of conversation in digital circles, Ike Pigott had recommended a tool. I don’t remember exactly how it worked, but as you shot video, it was recorded into the blockchain, which would verify its authenticity. That became a way for people to see that it was genuine video, not manipulated somehow and not a deepfake; it was actually shot on a video camera and uploaded as a blockchain record in real time. So there are potential solutions out there. We need to get serious about implementing them in this profession.
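The general idea behind that kind of tool can be sketched as a simple hash chain: each chunk of footage is hashed as it is captured, and every ledger entry commits to the previous one, so replacing, reordering, or editing any chunk after the fact breaks every subsequent link. This is an illustrative reconstruction of the concept, not the specific tool Ike Pigott recommended.

```python
import hashlib
import json

def chain_record(chunk: bytes, prev_hash: str, ts: float) -> dict:
    """One ledger entry per video chunk, linked to the previous entry."""
    body = {"ts": ts,
            "chunk_hash": hashlib.sha256(chunk).hexdigest(),
            "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def record_stream(chunks: list[bytes]) -> list[dict]:
    """Append-only ledger built as the footage is captured."""
    ledger, prev = [], "genesis"
    for i, chunk in enumerate(chunks):
        entry = chain_record(chunk, prev, float(i))
        ledger.append(entry)
        prev = entry["hash"]
    return ledger

def verify_stream(chunks: list[bytes], ledger: list[dict]) -> bool:
    """Re-derive every link; any edited or swapped chunk breaks the chain."""
    prev = "genesis"
    for chunk, entry in zip(chunks, ledger):
        expected = chain_record(chunk, prev, entry["ts"])
        if expected["hash"] != entry["hash"]:
            return False
        prev = entry["hash"]
    return len(chunks) == len(ledger)

frames = [b"frame-0", b"frame-1", b"frame-2"]
ledger = record_stream(frames)
print(verify_stream(frames, ledger))                             # genuine
print(verify_stream([b"frame-0", b"FAKE", b"frame-2"], ledger))  # tampered
```

A public blockchain adds one more property this sketch lacks: the ledger itself can’t be quietly rewritten, because it’s replicated and timestamped by third parties.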
Neville: Yeah, that’s a good example of the blockchain one, although that was pretty niche. That was pretty out on the edge, as it were. There were lots of things like that that just didn’t survive and disappeared. Things change, things evolve, and people are trying new things. I don’t mean bad guys, but in a good way. So let’s see how that goes. But you need to keep vigilant on all this.
And by the way, when I mentioned misinformation, I wasn’t thinking of deepfakes and that kind of thing. It’s more the fundamental stuff that crosses your screen every day or your newsfeed or whatever it might be, saying something that someone says something or someone has done something and it’s interesting and fine. Don’t trust it until you verify it. So if it’s on the BBC or CNN or any other broadcaster, you know, Süddeutsche Zeitung newspaper, the one you mentioned earlier, Shel — that’s a good bet that it’s OK.
But you know what? Some media recently have been caught out with fakes. So it still pays to do your own due diligence, particularly if that content is something you’re going to use in a way that could embarrass you if it turned out to be fake or simply wrong. So it’s worth doing. Most people think that they don’t have time to do that. You have to make the time. This is part of your future.
And AI has a role here. Arguably, you could say, well, I need to do this myself. You don’t, really. Your favorite chatbot, if you trust it and it knows enough about you, can do the searching and find the sources. You then check them. It can check them too, but you still have to do that part; the AI just makes it easier. There’s no magic bullet or shortcut here. And it’s worth it: you learn a lot doing this. I’ve learned a huge amount from doing all this myself, and it’s been very useful.
Neville: So there we are. OK, let’s talk about bot traffic. In an interview at South by Southwest, literally a week or so back, with TechCrunch, Cloudflare CEO Matthew Prince said that by 2027 — so as you pointed out earlier, we’re eight months away basically — bot traffic will exceed human traffic on the internet. That’s not entirely new in principle. Bots have always been part of the web. But what he’s describing is a change in scale and function.
Now think about this: Cloudflare — I don’t have the exact number, but don’t they manage like 30% of all the traffic on the web that goes through some of their servers somewhere? They do caching. They do all sorts of interesting things with people’s data. I use it on my blogs. I’m sure we use it on the FIR network. I mean, it’s part of the plumbing of the internet now. And you might remember a month or so back, Cloudflare was all over the news because they were hit by a distributed denial-of-service attack or some such that took large chunks of the internet offline because people like Amazon and some of those big properties use Cloudflare too. So it’s quite something.
Anyway, historically bot traffic has been relatively stable, around 20%, largely driven by search engine crawlers. What’s changed is the impact of generative AI, said Prince. His point is that AI agents behave fundamentally differently from human users. A person researching a purchase might visit a handful of sites. An AI agent performing the same task might visit thousands of sites. This is not incremental growth. It’s a multiplier effect — not just more traffic, but a different kind of traffic.
That has consequences at three levels: infrastructure, economics, and behavior. First, infrastructure. If AI agents generate orders of magnitude more requests than humans, then the web becomes a system that increasingly serves machine activity. Prince talks about the need for new infrastructure, including ephemeral sandboxes where agents can execute tasks without overwhelming the broader network.
Second, economics. The commercial web has been built around human attention: visits, impressions, and clicks. If a growing share of traffic is non-human, that model doesn’t just weaken — it becomes misaligned with how the web is actually used.
Third, behavior. Prince characterizes this as a platform shift comparable to the move from desktop to mobile. If that’s right, then the way information is discovered, consumed, and acted upon changes fundamentally — and not necessarily by humans.
That raises a set of implications that go beyond infrastructure. If machines are increasingly intermediating access to information, then visibility is no longer just about being found by people. It’s about being processed, selected, and used by systems. This links back to the earlier themes. We talked about how AI changes what work is worth. We followed that with how AI changes what and how work is measured. Here, it’s changing the environment in which both of those things happen.
So this is less about traffic and more about control — who or what is actually navigating the web. Which leads to some important questions. If AI agents are doing more of the searching, what does it mean to be visible online? If traffic no longer equates to human attention, how do organizations think about value? And if this is indeed a platform shift, what replaces the current models that underpin the web?
Shel: These are interesting questions, and I think that this is ultimately more a matter of evolution, just like the web was, even the internet before we had the graphical interface of the web. It’s a shift in what’s doing what. But at the end of the day, all of those bots have been deployed by whom? I mean, I have agents out there. These are just set up on Claude and on ChatGPT that are going out and doing searches and coming back and giving me reports. Me, I’m a human, last time I checked.
And I’m using the results of the work that those bots do. So these agents are proxies for the humans who need something done with this information, whether it’s delivering a report or creating a spreadsheet or what have you.
These are human-deployed bots. I mean, ultimately in every case, a bot has been deployed by somebody for some purpose. And I think having your content out there for those bots to find so that those results are delivered back to the human and you’re visible there — all it’s doing is reducing the need for the human to sit there for hours doing the searching and just having the AI go out and do the searching for them and delivering back results. But those results are still being used by people.
So this doesn’t concern me all that much, unless there’s something going on here that I’m not aware of with agents suddenly creating themselves to go off and engage in activities that have no human behind them, in which case we’re in the realm of science fiction. And I don’t think we’re there yet.
Neville: Well, that could be the case, although I think there are signs that we might be heading in that direction. Thinking about what we talked about in the last episode on that darker place that you cited, Ethan Mollick talking about what happens if it all gets taken over by an AI — that question applies here as well. You’ve got the AI agent instructing other AI agents. And I read someone talking about that very topic in quite a compelling way that this is already happening. So that wouldn’t surprise me one bit at all. So we’ve got to think of that too.
Shel: Yeah, now we’re talking about two different things, right? I mean, we’re talking about bots and agents here as an umbrella topic. But the fact that bots have been deployed to search and report back is one thing. Bots that are creating content is another, which is actually the topic of my next report.
Neville: Got it. Yeah, you’re absolutely right. We were talking about bots. So they are deployed by humans to achieve certain things. I guess I could project that out and say what happens in a darker place where the bots are deployed by AI agents unbeknownst to the human. I mean, I’m not Skynetting here, by the way. This is just projecting the thought out. And I welcome these kinds of discussions on “what if” when we see what’s happening now. It immediately makes you think, yeah, but what if? So this is part of how we generate good conversation about this kind of topic.
But it is interesting. I think of the way Matthew Prince framed it: someone searching an online retail outlet might run a couple of dozen searches, but an AI instructing a bot generates thousands of searches in a short period of time. And you suddenly see, wow, the scale of this is absolutely phenomenal. And that’s really, I think, part of what Prince is arguing: when bot traffic overtakes human traffic, we are confronting scale of a different order of magnitude, driven by the system itself.
Is he ringing alarm bells here? I’m not sure whether he is or not, but he’s looking at the need for a new kind of infrastructure to take care of this. And I think that’s actually a good avenue to explore.
Shel: Probably. I mean, Google has always used bots to go out and scour the web — called them spiders back in the day. But they only sent out the one and it found everything, those millions and millions of sites. And all that information resides on Google’s servers. So when you’re doing a search, it’s not going out onto the web, right? It’s looking in its own data centers and giving you those results. And those spiders, those bots, are always out there, always running, but just the one from Google.
Now with AI, you’re asking it to go out in real time and scour the web. So yeah, it’s sending out thousands in order to do essentially the same work that Google did. And then it brings you back the result in that narrative output that you get. So that’s why we’re seeing so many more bots out there. Is this a problem? I’m not an engineer, so I don’t know.
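The contrast Shel describes can be made concrete with a back-of-envelope sketch. This is a hypothetical illustration with invented numbers, not Cloudflare’s or Google’s actual figures: a classic search engine amortizes its crawling across every query it answers from the index, while agents that fetch pages live multiply requests per query.

```python
# Hypothetical sketch of the traffic multiplier: a search engine answers
# queries from a prebuilt index, so live crawler fetches are amortized
# across all queries; an AI agent fetches pages live for each request.
# All numbers below are invented for illustration.

def index_model_fetches(recrawl_pages_per_day: int) -> int:
    """Live web fetches under the index model: only the crawler touches the web."""
    return recrawl_pages_per_day

def agent_model_fetches(queries_per_day: int, pages_per_query: int) -> int:
    """Live web fetches under the agent model: every query triggers its own."""
    return queries_per_day * pages_per_query

daily_queries = 1_000_000
index_fetches = index_model_fetches(recrawl_pages_per_day=50_000)        # 50_000
agent_fetches = agent_model_fetches(daily_queries, pages_per_query=200)  # 200_000_000
multiplier = agent_fetches / index_fetches                               # 4000.0
```

Under these made-up numbers, the same million queries generate four thousand times more live web requests, which is the “different kind of traffic” Prince is pointing at.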
Neville: No, I don’t know either. I’m not sure it is a problem. But I’m cognizant, paying attention to what Prince is saying, that none of this is incremental growth — it’s a multiplier effect. And could it be that we’re at risk of everything grinding to a halt? Is that what he’s saying?
The consequences I listed — infrastructure, economics, and behavior — make sense, and they are connected. The generating of orders of magnitude more requests than humans are capable of doing is partly the thing. And I can see that. The web is then a system that increasingly serves machine activity, which is how he’s making that connection. He talks about the need for new infrastructure, including sandboxes where agents can execute tasks without overwhelming the broader network. That makes a lot of sense.
Shel: Yeah, I like that. Nothing wrong with that.
Neville: I use sandboxes myself, so I understand conceptually what that means. The economics about it all, where the behavior is now totally different. Visits, impressions, clicks — that’s what humans did, or still do largely. But as he argues, if you’ve got a growing share of this, increasingly more non-human traffic according to Prince, that model doesn’t just weaken — it becomes misaligned with how the web is actually used today.
OK, does that mean we need to change that? Well, yes, it does. How do we do that? Well, that’s part of the bigger debate. On the behavioral side, he’s likening this to the move from desktop to mobile. If he’s right, then the way all of this is discovered, consumed, and acted upon changes, not necessarily by the humans but by the AI. Is this a bad thing? I don’t know. Maybe he’s just raising a hand of caution and ringing the bell. Maybe that’s it. But it certainly is provocative, what he’s suggesting.
Shel: Yeah, certainly there’s absolutely going to be more bot traffic on the internet. That’s inescapable with all of this. Maybe the LLMs, the labs, find ways to confine the searches so they’re searching relevant sites to reduce that traffic. I don’t know.
Neville: Yeah. So let’s hear your connected piece about this, then, assuming that humans are not at the heart of all of this.
Shel: Sure. And you mentioned Ethan Mollick earlier. I mentioned this in an earlier episode a couple of weeks ago, I think. But he said that when he posts something, he can tell that about 70% of the comments that are left on his posts have been generated by bots. And it’s weakened the value of LinkedIn to him, which is discovering smart people with intelligent thoughts and perspectives. And 70% of that is now being generated by bots.
So we have bots that are now creating content. So you talked about bot traffic — stay with that theme, but focus more on the content. A new peer-reviewed study just published in the Journal of Public Relations should be required reading for anyone responsible for managing an organization’s reputation and messaging. The paper is titled “Social Bots as Agenda Builders: Evaluating the Impact of Algorithmic Amplification on Organizational Messaging.” And it came to my attention by way of Bob Pickard, one of Canada’s most respected PR practitioners and someone whose commentary on this research carries special weight. More on that in a minute.
The research, led by Philip Arceneaux at Miami University, along with colleagues from the University of Arizona, University of Texas, and University of Florida, is the first study in public relations scholarship to empirically measure how social bots interfere with organizational messaging. The authors note they found no prior PR research addressing this specifically, which is remarkable given how long the threat has been visible.
The study analyzed nearly 900,000 tweets generated during Ohio’s 2022 midterm elections. What the researchers found was that social bots successfully influenced the agenda formation process, most heavily in negative tone and most notably among the election campaigns. Bot messaging was most effective at influencing attribute salience — that is, how issues were framed and characterized — driving primarily negative sentiment. The bots were the strongest influencers of campaign agendas with measurable downstream influence on press and public discourse.
Here’s the distinction that Pickard zeros in on in his commentary. And I think it’s the most important insight in the entire body of research. The bots didn’t control what was discussed. They controlled the tone in which it was discussed. And as Pickard writes, that may be a more dangerous lever. Your organization puts out a carefully crafted message. The bots don’t need to invent a counter-narrative. They just need to inject enough negativity around yours that the frame gets corrupted before it can set.
A primary strategy social bots adopt is the creation of information disorder — information ecosystems filled with suspicion and distrust that erode public confidence. And as Pickard observes, this has a direct downstream effect on communications decisions. Distorted inputs produce distorted decisions. If your social listening is picking up manufactured sentiment — bot-driven negativity masquerading as genuine stakeholder concern — you may be prioritizing the wrong issues, reacting to the wrong pressures, and in some cases, misreading your stakeholders entirely. Some of what looks like groundswell may just be a bot farm.
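To illustrate the distorted-inputs point, here is a minimal sketch of how bot-inflated sentiment can skew a social-listening readout. The account fields, thresholds, and numbers are all invented for the example, not drawn from the study; real bot detection is far more involved than these crude heuristics.

```python
# Hypothetical sketch: how bot-inflated negativity can distort a
# social-listening readout. Account fields and thresholds are invented
# for illustration only.

def looks_automated(account: dict) -> bool:
    # Crude heuristic: a very new account posting at machine-like volume.
    return account["age_days"] < 30 and account["posts_per_day"] > 100

def negative_share(posts: list) -> float:
    """Fraction of posts in the sample tagged with negative sentiment."""
    negatives = sum(1 for p in posts if p["sentiment"] == "negative")
    return negatives / len(posts)

# A made-up sample: 60 negative posts from bot-like accounts,
# 10 negative and 30 positive posts from established accounts.
posts = (
    [{"sentiment": "negative", "author": {"age_days": 5, "posts_per_day": 400}}] * 60
    + [{"sentiment": "negative", "author": {"age_days": 900, "posts_per_day": 3}}] * 10
    + [{"sentiment": "positive", "author": {"age_days": 900, "posts_per_day": 3}}] * 30
)

raw = negative_share(posts)                      # 0.7
humans_only = [p for p in posts if not looks_automated(p["author"])]
filtered = negative_share(humans_only)           # 0.25
```

In this made-up sample, raw listening reports 70 percent negative sentiment, but once the crudely flagged automated accounts are filtered out, genuine negativity is 25 percent: a very different signal on which to base communications decisions.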
The asymmetry that Pickard describes is sobering. A small network of automated accounts can systematically degrade the messaging environment of a well-funded organization with a full communications team. And as lead researcher Arceneaux put it, it’s not natural selection anymore — it’s artificial selection by who controls the most bots.
A survey cited in the study found that 51% of leading communication professionals already reported that social bots present a clear threat to organizations and their reputations. And practitioners view social bots as the most pressing ethical challenge in public relations. And that was before generative AI made bot-produced content dramatically more convincing.
Why does Pickard’s voice matter here particularly? Well, when he blew the whistle on the Chinese interference at the Asian Infrastructure Investment Bank in 2023, hundreds of pro-China bots on Twitter targeted him with insults, accusing him of being an American agent, a white supremacist, and a neocolonialist. The pattern the researchers describe in the study — rapid negative amplification, coordinated framing, and agenda hijacking — isn’t abstract to Bob. He has operated inside of it.
And his observation that state-directed information operations seem to understand the bot asymmetry better than most corporate communications leaders is a pointed challenge to our profession.
The study recommends stronger media relationships, better investment in bot detection tools, and a return to traditional polling as a signal less susceptible to manipulation. And that’s sound advice. And on the practical side, research on bots’ impact on public discourse suggests their influence is most pronounced in the early stages of an issue — before credible sources establish the dominant narrative. Which means getting your authentic message out fast, before the negative frame hardens, is now a genuine strategic imperative, not just a good practice.
There’s also a real-world corporate illustration of this dynamic, and it’s one that we talked about more than once. In 2025, research found that roughly half of all the posts about the Cracker Barrel controversy in its early days were driven by inauthentic bot activity. So a minor design story artificially elevated into a culture war flashpoint before human communicators could get their footing. That’s the playbook now.
Neville, I know you follow this activity and information disorder closely and you’ve watched platform governance response in Europe in particular. What do you think? Are social platforms doing enough to protect organizations from bot-driven agenda hijacking, or are communication professionals essentially on their own here?
Neville: I don’t think they’re doing enough. They are doing some, the platforms, but their attention is not on this at all. I think any organization, any corporate communicator, needs to recognize the fact that — regard it as if you’re on your own, that you need to take the steps that are needed.
Reading Bob’s piece on LinkedIn, an interesting turn of phrase he uses here, talking about “hands-on combat experience versus synthetic competitors gaming the algorithm in contested environments” is now extremely important. So make of that what you will, but you need to be up to speed with these developments. There are plenty of places you can get information from, get insights and guidance from as well.
I think, though, that this is the fundamental point Bob Pickard makes in his piece: some communication leaders are still fighting the last war. This new research soberly explains the new realities of the modern PR battleground.
Now, I have not read the article, Shel, that you had in our Slack channel. I mean, it’s 34 pages of eight-point type, it seems to me. It’s big. So I would get my AI assistant to summarize the whole thing for me and give me the highlights. I haven’t done that. I think I will do that even to get a good understanding of this.
It seems to me that this is yet another example of the changes that are happening, whether we like it or not, that we have to pay attention to as communicators. We’ve touched on quite a few in this discussion today. Here’s another one. So I can’t really comment more than that, Shel. I’ve not read the report, which I am going to do. But I think his intro to the piece on LinkedIn is good. It’s a good introduction to it. And it then makes it easier to try and wade into it. Although I think for most communicators, some kind of summary is what they’re going to need rather than trying to read the whole thing.
Shel: Yeah, well, the bottom line is, I think, pretty simple. If you release some information and it’s in somebody else’s interest to shift the tone in order to control the agenda, those bots are going to be deployed very, very quickly to create content that changes the framing of what you started with and undermines your communication goal. You as a communicator need to be prepared for that. And you need to have processes in place, new processes and new workflows, to make sure the message you want people to understand is the one that fixes in people’s minds before these bots can come in and mangle it, because that’s what’s happening pretty routinely now.
Shel: And that will be a -30- for this episode of For Immediate Release. We do want to remind everybody again, because we mentioned it earlier: comment on what you’ve heard. If you have thoughts, experiences to share, or questions, share them. The place most people are doing that these days is LinkedIn; in fact, every comment we shared today was left on the LinkedIn posts where we announce the availability of a new episode. So if you follow Neville or me on LinkedIn, you will get notifications of new episodes, and that’s the place to comment.
You can always comment on the show notes. That’s where people used to do this all the time. Remember blogs when people used to comment on blog posts? You could do that. You can send us an email to [email protected].
Shel: Boy, am I overloaded with spam in that account, but absolutely not one comment in the last month. One of the things I find in that email account is any voicemail messages that you have left. Just go to [email protected], click Send Voicemail, and you can send us your comment that way; we’ll play it. We’d love to have another voice on the show. You can also send us an audio that you record; just attach it to an email and send that to [email protected].
We also have the FIR community on Facebook. And there are lots of places that you can tell us what you think. We’d love it if you did. And we will share that on the next monthly long-form episode. That next monthly long-form episode is coming on Monday, April 27th. Neville, you and I will record that on Saturday, April 25th. So we will have our monthly episode then. Between now and then, not this week, but starting next week, we will have our shorter-form one-topic weekly episodes. It should be three or four of those before we get to the April long-form episode. And that will in fact be a -30- for this episode of For Immediate Release.
The post FIR #506: Battle of the Bots! appeared first on FIR Podcast Network.
By Neville Hobson and Shel Holtz
Raw Transcript
Neville: Hi everyone, and welcome to the For Immediate Release podcast, long-form episode for March 2026. I’m Neville Hobson.
Shel: And I’m Shel Holtz.
Neville: As ever, we have six great stories to discuss and share with you, and we hope you’ll gain insight and enjoyment from our discussion. Perhaps you’ll want to share a comment with us once you’ve had a listen. We’d like that.
Our topics this month range from AI and the end of the billable hour, to Gartner’s predictions about PR budgets, to monitoring work in the age of AI, to newsrooms battling AI-generated misinformation, and more, including Dan York’s tech report. Before we get into our discussion, let’s begin with a recap of the episodes we’ve published over the past month and some listener comments on the long form.
In episode 502 for February, published on the 23rd of that month, we explored how rapidly accelerating technology is reshaping the communication profession from autonomous agents with attitudes to the evolving ROI of podcasting. We led with a chilling milestone moment, an autonomous AI coding agent that publicly shamed a human developer after he rejected its code contribution.
A leader can build goodwill for days and lose it in seconds. In FIR 503 on the 2nd of March, we reported on the president of the IOC, that’s the International Olympic Committee, who had no answers to reporters’ questions and suggested on camera that someone on her communications team should be fired. We got comments on this, didn’t we, Shel?
Shel: Boy, do we have comments on this one. This attracted a good number of them, starting with Kevin Anselmo, who used to have a podcast on the FIR Podcast Network. It was on higher education communication. He says, having previously worked in communications for two different international sport federations, I found this story quite amusing. One of my first PR roles was working at the 2000 Sydney Olympic Games. I was working on the sport federation side, not the IOC.
Neville: Yep, you did.
Shel: But I know that working at such events is exhilarating and exhausting as you have to deal with a myriad of different issues. I can imagine that toward the end of the Olympics, the PR team fell short of delivering a robust brief. But nevertheless, in answer to your question, even if the PR people were abysmal, the fault is on Coventry for the way she handled the situation. A simple, we will have to look into this and get back to you response would have worked.
Instead, by handling it the way she did, she drew unnecessary attention to the questions she and the team weren’t prepared to answer, as you and Neville shared. I guess in the process of this mishap, I learned that Germany was in the running for the 2036 Olympics, which I wasn’t aware of. We also heard from Monique Zitnick, who said, really enjoyed your discussion on this. Certainly a puzzling situation that has surely ended in broken trust on both sides.
Shel: Mike Klein said, another ignominious IOC leader in the mold of Brundage and Samaranch. Neville, you replied. You said that’s an interesting comparison. Mike, Avery Brundage and Juan Antonio Samaranch both left very complicated legacies, particularly around politics and governance in the Olympic movement. What struck me about this episode wasn’t so much ideology or policy. It was leadership under pressure.
Coventry had actually received a fair amount of praise for how she handled some difficult moments during the games, which makes the press conference moment even more interesting from a communication perspective. It’s a reminder that reputation capital can be fragile. A single public moment can reshape the narrative very quickly. Mike replied, yes, leadership under pressure, but also the kind of people the IOC has chosen for leadership over the years.
Coventry has a complicated history over her involvement with her native Zimbabwe’s recent regimes as well. Sylvia Camby said, Neville, watching Coventry’s press conference took me back to the time I spent doing comms for an international association. It reminded me of how inward-looking organizations like the IOC can be. So totally focused on their internal member politics with leaders too lazy or too overconfident to bother to educate themselves about current affairs.
Also, they often have a distorted idea of what the press is interested in. They often think they can dictate their agenda. As you and Shel mentioned on the podcast, the questions were entirely predictable. You replied, Neville, that’s a really insightful observation, Sylvia. Organizations like the IOC can become quite inward facing, particularly when so much of their energy is spent navigating internal governance and member politics. That can create a kind of blind spot about how issues look from the outside.
Sylvia said, and I was thinking, I’m proud of Germany for being so sensitive about the significance of that date and for opposing the 2036 bid. They are much better at reading the spirit of the time than Coventry. As an aside, my father’s cousin competed in the 1936 Olympics in Berlin as a gymnast. She passed away last year at the age of 104.
She often spoke to me of the atmosphere surrounding the Olympics at the time, a heaviness and a sense of unspeakable doom. So yes, 2036 is a date that Berlin should definitely avoid. And you replied to that, Neville. People can go find that one in the comments.
Neville: That’s a good one. There are some great points of view, perspectives there. So thanks to everyone who commented. Are companies using AI as a convenient explanation for layoffs? That was a question we asked in FIR 504 on the 10th of March when we discussed AI washing, when organizations blame workforce cuts on AI, even when the reality is more complicated. It’s a difficult ethical space for communicators. And we have comment on this too, don’t we?
Shel: Three short ones. First from Monique, who commented that she was looking forward to listening to the episode because she’s been having a lot of conversations on this over the last month. Jacqueline Trzezinski said, I’m glad you’re delving into this. The same thought came to my mind when I saw the Block layoff announcement, especially as it was held up by some on LinkedIn as an example of how valuable transparency is during layoffs.
And Jesper Anderson said, I find it fascinating how quickly the world turns upside down. 18 to 24 months ago, companies were accused of letting people go because of AI and not admitting that this was the true reason.
Neville: Good perspective, Jesper, that one. Is social media still social? In FIR 505 on the 17th of March, we explored Hootsuite’s 2026 Social Media Trends Report, addressing social search, AI versus authenticity and more. Plus a darker question: what if AI starts to dominate the conversation? And we have comment, don’t we?
Shel: Yes, from Zara Ramoutoho Akbar, and I sure hope I pronounced that right, apologies if I didn’t. She said, yes, it feels like socials are shifting from a channel to a trust system. And in that world, I would say that the employee and peer voices matter more than brand output. Are you seeing organizations lean into that yet or still treating social as a broadcast channel? And since Zara asked the question, Neville, what do you think? Are you seeing this change?
Neville: No, I’m not, to be honest, but maybe it’s taking its time. There is something afoot without any doubt. And I think it’s something that we should expect. And that darker question is a valid one to put forward, let’s say. And we’ll keep our eyes and ears open, I think.
Shel: Yeah, I haven’t seen it much either, but I do think that there are organizations that are talking about it. So as you say, we may see this start to change in the months ahead. We have one more comment from Dolores Holtz. No relation. I for one certainly rely on people whom I trust more than any name or brand.
Neville: Yeah, I agree. Fair enough.
Shel: I think that covers our previous episodes up to this one.
Neville: Yeah, good, good comments all over from all those episodes. And thanks everyone for listening and adding your comments to that conversation. It’s really terrific.
Shel: Yeah, keep those coming and ask us questions, because that was great from Zara. Also up on the FIR Podcast Network right now is the latest Circle of Fellows. It was a good conversation on the communication issues and challenges in this age of grievance and isolation, where people have basically separated into tribes.
Shel: Priya Bates, Alice Brink, Jane Mitchell, and Jennifer Wah were the panelists on this Circle of Fellows. As I say, it was really a terrific conversation. The next one is coming up on Thursday, March 26th, at noon Eastern time. It’s on crisis communications, and especially this idea of the polycrisis, which we heard about from our friend, Philippe Borremans.
The panelists for that Circle of Fellows will be Ned Lundquist, Robin McCaslin, George McGrath, and Carolyn Sapriel. Should be a good crisis-focused conversation. And of course, if you can’t make it at noon on Thursday, it will be available as a Circle of Fellows podcast and the video will be up on the FIR Podcast Network.
Neville: While we’re talking about IABC, let me briefly mention that Sylvia Camby and I hosted a webinar for IABC as part of IABC Ethics Month in February about ethics and AI. We’re actually going to…
Shel: I attended and it was terrific. I was there. It was a great webinar.
Neville: Well, thanks, Shel. That’s great. And we’ve actually had a nice review from someone, which was very pleasing. We’re going to repeat this, specifically for IABC members in the Asia-Pacific region. So if you’re in Australia, India, China, Japan, and maybe right out into the Pacific area, this one’s for you. It’s members only.
The event is AI Ethics and the Responsibility of Communicators. It explores the challenges and responsibilities communicators face when introducing AI, including transparency and trust, stakeholder accountability, and human oversight. It’s on Wednesday, the 15th of April at 6 PM Sydney time. That’s AEST, as I discovered, Australian Eastern Standard Time; Australia will no longer be on daylight saving time by then, whereas we will be by the time we do this. So 6 PM in Sydney, or 8 AM UTC, Coordinated Universal Time, or GMT if you’re used to that one. For me in the UK, that translates into 9 AM UK time. But 6 PM Sydney time and that time zone area is the important bit. So we look forward to seeing you there.
Shel: 1 AM Pacific time, so I won’t be participating in this one.
Neville: If you’re up, you could join.
Shel: I won’t be.
Neville: OK. So IABC will be letting members know about where to go and register, et cetera, I’m sure in the coming days. So just mark your diary in the meantime. Wednesday, 15th of April, 6 PM Sydney time. And let’s get on with things. But first, there’s this.
Neville: Right, let’s start with a statement that will make a lot of people in professional services sit up a bit. Anthropic’s top lawyer Jeff Blick says AI is going to destroy the billable hour. That’s of interest to you if you’re a consultant in particular. Blick argues that AI is removing the need for what he calls tedious but lucrative work, the kind of work that firms have historically billed by the hour. And that matters because the billable hour isn’t just a pricing model.
It’s the foundation of how entire professions have operated for decades. But here’s the tension he highlights. Clients want problems solved quickly and efficiently, while the billable hour rewards the opposite: more time, more revenue. AI sharpens that contradiction because now tasks that once took days or weeks can be done in minutes. And that raises a very simple, very uncomfortable question for clients: if the work takes less time, why am I still paying for all those hours?
It’s something I’ve been thinking about quite a lot myself recently. I wrote about this in Strategic Magazine a few months ago, where I argued that AI isn’t killing consultants, but it’s killing the logic of the billable hour. Because the model has always had flaws: it rewards activity over impact. It prices effort rather than outcomes. And as soon as technology compresses effort, the model starts to look outdated. What’s changing now is not just efficiency, it’s expectations.
Clients aren’t necessarily looking to pay less. They’re looking for clarity, predictability, and above all, value that reflects results, not time spent. So we’re starting to see a shift from billing hours to pricing outcomes, from selling labor to selling judgment. And that sounds straightforward, but it opens up some deeper questions. If AI removes the entry-level repetitive work, how do people develop the judgment that clients are now paying for?
If you move away from time-based billing, how do you actually define and defend value? And perhaps most importantly, are firms really ready to let go of a model that has defined their economics for generations? I think what this really points to is a shift in what clients are buying: not time, but judgment; not effort, but outcomes. And the firms that recognize that early will have a very different advantage from those that don’t.
Shel: Well, if AI drives the end of the billable hour, all I will be able to say is it’s about time, and thank God something did it. I have never been a fan of billable hours in communication consulting. I can see it in other lines of work. Plumbers bill by the hour, electricians too; people who work with their hands tend to bill by the hour, although interestingly, auto mechanics often do not. For them, the labor required to do this particular thing is worth this amount of money, and then there are the parts that you have to pay for.
But the question is, if the model of billable hours goes away in the public relations and communication industry, what do we replace it with? And I know we have talked about this in the past, but it has been a while.
We have both operated in the billable-hour environment. When I was at Mercer, Mark Schumann was also at Mercer, I think in their Houston office, and he came up and met with the comms consultants in Los Angeles. He was talking about the value add, and I objected to it. I said, I have a billable hour based on my value and what it takes to cover overhead and make a profit. I think my billable rate when I left Alexander and Alexander was something like $385 an hour, and that should cover everything. Why are we adding something and just calling it value add?
And what Mark said was, if I have an idea in the shower and it took me 30 seconds for that idea to spark and yet it informs the entire engagement with the client and solves a problem and is based on my decades of experience and everything that I have learned — is that really worth only the 15 cents that that 30 seconds would be valued at under the billable rate? That’s ridiculous. The more I thought about it, the more I thought he’s right. That is ridiculous. So why aren’t we billing based on the value of the project?
Now, you can say here’s how many hours it’s going to take to complete that project and use that as a basis to come up with a price to give a client. Or you can look at other things. I think I mentioned on a show several years ago that Craig Jolly and I proposed a communications program for Coca-Cola for a department that was eliminated before we could come to a final agreement because they had actually agreed to this.
And what we were going to be paid for our effort was absolutely nothing. We were not going to bill them for hours. We were not going to bill them for the value of the project, but they were going to track the outcomes of the work that we did. And they were going to pay us 5% of the savings that accrued as a result of what we did and 5% of the profits that accrued based on what we did. And we had a formula for that. We would have made a fortune over, I think, the three years that we were going to get compensated after this project was complete.
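The arrangement Shel describes boils down to a simple outcome-based formula. As a minimal sketch, assuming the 5% shares he mentions (the function name and example figures here are illustrative, not the actual Coca-Cola agreement):

```python
def outcome_fee(savings: float, profits: float,
                savings_share: float = 0.05,
                profit_share: float = 0.05) -> float:
    """Compensation tied to tracked outcomes rather than hours:
    a share of the savings plus a share of the profits that
    accrued as a result of the work."""
    return savings * savings_share + profits * profit_share

# Illustrative only: $2M in tracked savings and $10M in
# attributable profit would pay $100,000 + $500,000.
print(outcome_fee(2_000_000, 10_000_000))  # 600000.0
```

The hard part, of course, is not the arithmetic but the attribution: agreeing up front how savings and profits will be tracked back to the communication work.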
There are other models out there that people can consider, but you’re right. I’m wondering when the clients are going to start saying, this is what I paid last time. Haven’t you started using AI? Why isn’t the drudge work that is part of this project taking less time and costing me less? I think we’re going to hear that from clients. So you better start thinking about the new models.
Neville: Yeah, it’s a sea change. It’s quite a significant change in structure to move away from the billable hour. And one reason I believe nothing’s happened is that there is definitely no groundswell of desire to change this from the people in organizations who would likely suffer most if it did change.
And there are lots. I’m not picking on anyone in particular, but there are lots of people who just don’t like change. We’ve been doing this for years, it works, our whole business is based on this. It’s probably going to take a major client of a major consulting firm to say, hang on a second, we have a question for you about how you’re charging us. I’ve seen lots of chatter about this, Shel, and I’m sure you have. And yet nothing’s happened.
So I wrote a lengthy analysis on my own blog not long ago, and that hardly got any attention at all. The story I wrote in Strategic was quite heavily researched, but I’ve not really seen any real traction on it, other than some folks saying, hey, nice article you wrote in Strategic. I’d rather hear them say, I didn’t like it, here’s why, or I’ve got a better idea, or whatever. Get a conversation going about it.
One thing I think should stimulate a discussion is, and this could be something we’ve got to force on people: look at it from the point of view of the client, not the consultant. And by the way, all these other examples you gave, like plumbers and all that, are absolutely right. So this discussion is specifically about professional services and consulting, not auto mechanics and plumbers and stuff like that.
So think about this: clients aren’t buying less, they’re buying differently. That’s the thing. I’ve had conversations with people, and I have to admit, I struggled, truly seriously struggled, to get the conversation to continue with any energy on why we should make this kind of change. So clients aren’t buying less, they’re buying differently. And one thing I wrote in the Strategic piece was about what their expectations are of the people who advise them, the consultants they work with. Today, they expect advisors who: one, use AI to scan signals and surface insights; two, bring sharper, data-informed recommendations; and three, help avoid ethical, legal, and reputational missteps. Three major things they expect, and AI has a role in all of them.
I think we need to move away from that, and we can take the initiative to change the conversation with clients to this, as opposed to: draft that report for the client, and AI can do all the research and so forth. When clients ask, why am I paying for all this time, you could pitch it to them as the value of the briefing we give to the AI. But I think that’s an argument clients can demolish over time. Clients are like you and me; they’re people, they’re not stupid. Many of them are looking at this themselves.
That said, there are many clients, particularly the more you get to the enterprise level and those kind of consulting firms at that level, who really don’t have much desire to rock the boat at all with all of this. It’s very entrenched, it’s ingrained. Everyone’s making money and it’s all wonderful and business gets done. And it’s going to need something to make a major shift here.
So I think we should take the initiative as communicators to do this. And it could be someone in a consulting firm. Like you, I worked for Mercer, and I remember discussions back in the early ’90s, not about changing the business model, but about the value add. So maybe that was a Mercer thing at the time. We need to have that conversation now. And we need someone at a senior level with an influential voice to raise this internally in their organization and run some internal webinars, seminars, or get-togethers about why we need to change the business model and why the billable hour has to end as the basis for business. But it’s a big task, I would say.
Shel: One of the truths about the public relations industry is that it takes pain for the industry to change. I mean, we’ve seen this. We’ve been doing this show for 21 years and we’ve seen it with a number of major technologies that have come along that the PR industry has been very, very, very slow to adopt. And what ultimately got them to adopt the web and social media was seeing work taken away from them by boutiques who were offering those services. And as soon as they saw money left on the table, they said, we’d better figure this out because this is something that we should be doing. They figured it out and now they’re using it regularly.
You’re absolutely right that we in the industry have experience and insights that allow us to do things like create the appropriate prompt to get the right result for a public relations issue or campaign or what have you. And it goes far beyond the prompt. It goes into creating documents that become foundational to a project within one of the LLMs. It even gets into agents now. What if we set up an agent on behalf of a client that is out there looking for competitive information on a regular basis? And it took, let’s say, 15 hours to create this agent so that it was producing the kind of daily or hourly reports that we’re looking for. And those become a big part of the project. It’s operating while we sleep. We can’t charge for that. Certainly it’s not going to be on an hourly basis.
So a formula has to emerge for these types of things that allows agencies to be compensated in a way that keeps the lights on, provides the salaries to the consultants who work there, and earns a reasonable profit without having to bill hours because it just makes less and less sense. And as I say, I didn’t think it made sense back in the ’80s when I was working for Mercer, my first consulting gig.
You remember maintaining your time sheet in 10-minute increments? Oh my God. Who’s going to pay me for that? Who do I bill for the time that I spend maintaining a time sheet in 10-minute increments? I mean, come on.
Neville: Don’t remind me, please. I tried to get away with entering time in the timesheet for the time I had to spend on doing the timesheet. They didn’t let me get away with that. No.
Shel: They didn’t buy that. My brother’s an attorney, and when he was working for a law firm — he’s corporate side now — but he remembered if he took a pencil out of the supply cabinet, he had to bill that to a client. So I mean, the time that he was spending billing things to clients was time that he wasn’t spending on client work. There are countless reasons why the billable hour needs to die. I don’t mind the consultant having a billable hour rate as a base for calculating something, but it shouldn’t be the be-all and end-all of what the client is billed. There needs to be a formula where you say this is what the project is going to cost. And if the project moves out of the scope that you agreed to, then you go back to the client and say, we’re outside the scope. We’re going to have to charge more for that. Here’s what we’re going to charge. You okay with that before we start moving on this stuff that you’ve requested that is out of scope?
Neville: Yeah, no, we need to get some movement going on this topic, I think. And maybe that’s something — thinking about IABC, you know, some kind of talk on this topic needs to happen.
Shel: Yeah. Or, you know how Ann Handley sold the T-shirt that said Justice for the Em Dash? I bought one. We need T-shirts that say Kill the Billable Hour with the FIR logo on it. Would anybody buy that? Let us know. We’ll pursue it. I’ll find out where Ann had her shirts made.
Neville: Yeah, I like that idea. I like it. Excellent.
Shel: If you work in public relations, you’ve probably seen the prediction that’s making the rounds right now. It sounds too good to be true. Gartner, the analyst firm whose pronouncements tend to get circulated in agency pitch decks for years, has declared that by next year, 2027, the mass adoption of artificial intelligence and large language models as a replacement for traditional search will drive a doubling of PR and earned media budgets.
Now, what would drive this surge in PR spending, you ask? Well, AI answer engines overwhelmingly favor non-paid sources. More than 95% of links referenced in AI-generated answers come from earned, shared, and organic owned content, with 27% originating directly from earned media. So if AI is where people increasingly go for information — and by the way, the data on that is striking; ChatGPT saw traffic surge 608% year over year between the first half of 2024 and the first half of 2025, while traditional search giants Google and Bing both slipped — well, then earned media becomes the engine of discoverability. And that, the argument goes, means organizations will pour money into PR to stay visible.
Now, I want to be honest about the source here, because Stuart Bruce, someone whose thinking you and I have always admired and respected, Neville — Stuart has pointed out that this prediction originated in a blog post published by Gartner as part of a lead generation campaign promoting a webinar for chief communication officers, and that while it carries the authority of the Gartner brand, it lacks the evidence normally associated with their research publications.
Frank Strong over at the Sword and the Script notes similarly that the prediction feels rushed. 2027 is barely more than eight months away and the path from “AI favors earned media” to “budgets actually double” is pretty far from certain. But I’m cautiously optimistic because the underlying logic is sound.
If AI systems favor credible third-party sources and PR is the function best equipped to generate that kind of coverage, well then yeah, our work becomes more strategically important. But a Gartner webinar promo is not a Gartner research report, and we should resist the temptation to tout this prediction as if it were settled fact.
Here’s what I actually want to talk about though. Let’s say the prediction is right. Let’s say the prediction is half right. Let’s just say budgets grow substantially. What happens to that money? Because there’s a pattern in this industry that I think we need to name directly. When good fortune arrives — a new platform, a new capability, a shift in the media landscape — agencies have historically been better at capturing the upside than at reinvesting in the profession. More revenue has meant more of the same: more accounts, more billable hours, more senior hires, not more rethinking.
And right now, in the age of AI, there are two investments that I think agencies have an obligation to make if this windfall arrives. The first is genuinely rethinking the agency model in light of AI — not just adding a chatbot to the workflow, but asking the hard questions about what services still require human judgment, where AI can amplify capacity, and how to build new offerings around answer engine optimization. And by the way, a new billing model.
Stuart Bruce notes that Gartner explicitly rejects the efforts of SEO and marketing companies to pivot into this space, recognizing that answer engine optimization requires communication-specific skills to balance stakeholder trust and platform requirements. That’s an opening for PR, but only if agencies actually build those capabilities rather than outsourcing them to MarTech vendors.
The second investment, and this one matters a lot to me, is in rebuilding entry-level pathways into the profession. AI has already been eroding the grunt work that used to serve as the training ground for new communicators. As one analysis put it, the traditional deal of entry-level work — trading rote labor for mentorship — that’s dying. The learning curve is being automated, leaving early-career professionals stranded between AI agents and senior incumbents.
If PR budgets double, agencies will have the resources to do something about this. They could create structured apprenticeship programs. They could invest in training that teaches new communicators not just to use AI tools, but to supervise and interrogate them. They could build the next generation of practitioners rather than simply eliminating the entry points.
What I fear, and what I think is entirely possible, is that agencies will look at this budget doubling as a margin opportunity rather than a reinvestment opportunity. More revenue, leaner teams, higher profits. And five years from now, we’ll be asking where the next generation of PR professionals are going to come from.
So yeah, the Gartner prediction may well be right. AI does appear to favor the kind of credible third-party earned coverage that PR generates. And that’s genuinely good news for the profession. But good news is only useful if you do something smart with it. Neville, you’ve been watching the agency landscape in the UK and Europe for a long time. When you see a prediction like this, do you believe it? And what’s your read on whether the industry will rise to the moment or just cash the check?
Neville: I must admit, when I saw the article I did say, I don’t believe it. British TV viewers might recognize that phrase from a comedy show 20 years ago. I did follow a lot of what people were saying, and all I saw was bubble, bubble, bubble, hype. What struck me was what’s missing: this was a marketing claim, as you mentioned, and Stuart Bruce wrote about that, and others have too, pointing out that this was a blog post from Gartner. There’s no data to back up any of it. There’s nothing cited. There’s nothing you could trust to prove it or give you confidence in repeating it. Yet that’s what everyone has been doing, repeating this as fact.
The particular phrase that was repeated by Gartner and then mass repeated: by 2027, mass adoption of public LLMs as a replacement for traditional search will drive a 2x increase in PR and earned media budgets. But there’s no evidence behind that. Yet what we saw was mass repetition all over, LinkedIn in particular.
I did read a worthwhile article by Stephen Waddington, published on the 16th of March on his blog, about this topic. He’s critical, and I think his opening line, “when industry optimism outruns the evidence,” is exactly where we’re at with this. I’ve seen sensible voices, you, Stuart, and others, saying that if this is true, then this is what it could mean, this is what could happen. But like a lot of things we see, the maybes, perhapses, and coulds get brushed under the carpet, and before you know it, this is presented as what’s going to happen.
So I’ve not seen a huge amount of conversation about this, to be honest, except when this first appeared. That said, today I saw two posts on LinkedIn from people repeating this who obviously just came across the Gartner piece and they’ve reposted it.
Shel: The long tail lives.
Neville: Exactly. So Stephen makes a point in his post about GEO, and I think that’s contextually useful. He’s saying Gartner’s observation may ultimately prove correct, but the path from the insight to a doubling of budgets is far from certain. GEO remains highly contested, he says, and I’ve seen others saying that too. The mechanics of how AI models select, weight, and attribute sources are still evolving, and this is in an era where budgets are being directed to support discovery work.
So what needs to happen instead, he says, is a call to action, I suppose, to communicators. When you see this claim being made, please challenge the argument. And if we aren’t set to see a boom in public relations work, some of that investment will need to be diverted to ensure the sustainability of earned media. And that, to me, is a very sensible point to make.
All of this is, in fact, certainly why I didn’t post about this on my blog. When I saw it, I was attracted to it, thinking this could be an interesting topic to stimulate some attention. Then I read it and started seeing others like Stuart saying, wait a minute. So I thought, no, I’m not going to join a hype bandwagon here without some further research; it didn’t seem compelling enough to spend the time on. Let’s see what emerges further from this, if anything. But like you said, Shel, if this turns out to be true, then happy days.
Shel: Yeah, I doubt it myself. I think what we’re going to see is an incremental increase in PR spending as a result of this. That’s because we’re not going to see some mass revelation across all industries at the same time that, my God, we need to invest more in earned media so that we’re visible in search results that are now happening on LLMs instead of search engines. This is going to be gradual.
One company is going to pick up on it, then another. But what I have seen ongoing, regularly, are new reports, new studies, new research coming out. It all validates that LLMs are in fact generating their search results based largely on earned media. And I think as people wake up to that and realize that if we want to be present in those results — it’s like showing up on the first page of Google search results — we want to be in the answer when somebody asks a question where our expertise, our thought leadership is relevant. Then you need to bolster your earned media.
One of the things that worries me though about this bolstering of earned media is how many more press release pitches am I going to get? How many more press releases that have nothing to do with me or what I do are going to show up in my inbox? You’re going to see reporters pitched way more than they’re being pitched now. And there may be some blowback from this as a result of that. It’s like, hey, PR industry, back off — too much. So there’s also that to consider.
Neville: Yeah, I agree. So the simple takeaway here is: don’t believe everything you read online, and take the time to pay close attention to what people are saying about this before you repeat anything. Just be clear in your own mind.
Shel: Yeah, I was also going to say that I think owned media, the stuff that you produce on your own website — I think a renewed emphasis on that. So you’re producing really interesting stuff that people start looking at. That counts, too. That’s one of the categories of media that was included in this research. So you don’t have to rely on earned media all that much if you can do a great job of producing that content.
Neville: Good tip. OK, so earlier we talked about how work is priced. That was our piece about the billable hour. Now let’s consider how work is measured, because there’s another story that feels connected but from a different angle. The Financial Times reported that JP Morgan has started using technology to check whether the hours junior bankers say they work actually match their digital activity — things like keystrokes, meetings, and video calls. The bank says this is about well-being, about awareness, not enforcement, about making sure people aren’t overworked. And on the surface, that sounds reasonable.
But when you look a bit closer, it raises some uncomfortable questions. What’s really happening here is a shift from reported work to observed work. Not what you say you did, but what the system can verify. And that’s where the reaction gets interesting.
If you look at the comments on the FT’s post about this, there’s a very clear pattern. Some people see this as logical, almost inevitable. In a data-driven industry, of course you measure activity more precisely. But a lot of the reaction is skeptical, even uneasy. You see comments like, “this really screams we trust our employees.” “This is a classic case of measuring what’s easy instead of what matters.” “Big Brother is watching you.”
And then there’s a more nuanced point that comes up repeatedly. Does this actually improve anything, or does it just change behavior? Because if people know they’re being measured on activity, they optimize for activity. More keystrokes, more visible presence, more signals that look like work — but not necessarily better outcomes.
And that connects directly to the earlier discussion about billing. If AI is automating more of the actual work — the analysis, the modeling, the drafting — then what exactly are we measuring here? Time, activity, presence, or value?
There’s also a deeper cultural question. Investment banking has long had a reputation for extreme hours. JP Morgan has already tried to address that, capping weeks at 80 hours, for example. 80-hour weeks. The days of 40-hour weeks are a distant memory, obviously. But if people were underreporting hours to stay on deals, then the issue isn’t just measurement — it’s incentives, it’s culture. Technology can surface that, but it doesn’t resolve it.
So this opens up some bigger questions. Are we moving towards a world where all knowledge work is continuously monitored and verified? Does that improve trust or undermine it? And if both pricing and measurement are shifting at the same time, what does a fair day’s work even mean anymore?
Shel: Absolutely. One of the things we keep hearing about AI is organizations are going to have to rethink things like workflows. And we’re talking about organizations that are not going to look at all in five years the way they do today because of AI. Are people thinking that it’s going to take 40 hours for somebody to do today what it took them to do before if all of that grunt work is being taken over by AI?
On the other hand, I have seen that AI has increased the number of hours people are spending on their jobs. There’s some very recently released data showing that people are more stressed now with AI in the picture. And if they’re putting in more hours, is this really an issue?
I’m also always struck by, as you mentioned in the report, the lack of trust, the signal of the lack of trust that this sends. I’ve always felt that the availability of these tools that allow this kind of monitoring raises the question of, you know, just because you can, should you? And yeah, I don’t think that you should. I think there are better ways to determine whether your people are working, and looking at their outputs is the best of those. Have they delivered what you expected them to deliver?
Because when you destroy the trust that you might have had, or perhaps you never had trust in your organization in the first place, if you have new hires who come in and find that they are being monitored in this way, they’re just inclined to find ways to cheat. I saved an article in my link blog not too long ago from the HR Digest about key jamming.
The point of the article was that if you have employees who are doing this, you have a bigger issue. But if you haven’t heard of key jamming: these are easily available devices that remote workers place on their keyboards to press a key continually. So it looks to the monitoring software like that keyboard is active and that employee is working those hours, when they could be off doing whatever they want.
I imagine some keystroke-monitoring software has been updated to address this, checking that employees are typing real words or real numbers and not just repetitively striking the same key. But then employees will figure out the next thing, or the companies that sell these products will, to make it appear that the employee is working.
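For what it’s worth, the countermeasure Shel imagines is easy to sketch. A minimal, hypothetical check (not any real vendor’s logic) would flag a keystroke stream in which one key dominates the activity:

```python
from collections import Counter

def looks_like_key_jamming(keystrokes: str, threshold: float = 0.9) -> bool:
    """Flag a keystroke stream where a single key accounts for
    nearly all activity -- the signature of a jammed key."""
    if not keystrokes:
        return False
    # Count the most frequent key and compare its share of the stream.
    top_count = Counter(keystrokes).most_common(1)[0][1]
    return top_count / len(keystrokes) >= threshold

print(looks_like_key_jamming("jjjjjjjjjjjjjjjjjjjj"))      # True
print(looks_like_key_jamming("drafting the client report"))  # False
```

And, as Shel says, that just moves the arms race along: the next gadget randomizes keys, the next check looks for real words, and so on.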
Better to build trust so that the employees will want to produce great work for the organization that they love working for than to destroy trust and implement these kinds of monitoring tools.
Neville: So it’s interesting. JP Morgan is quite resolute in their defense of this, because as they say, they’re doing this to help junior employees not overwork. There was a case here where an intern at the Bank of America died in 2013, which the coroner said was linked to long working hours. And the anecdotal stuff has emerged constantly since then on people who are totally wrecked emotionally because of the hours they’ve got to work.
To be fair to JP Morgan, they’ve responded to that at scale in the organization. The trouble is that nearly every comment I see on this is extremely skeptical about their true motive. So they’ve got a credibility problem in explaining this well. This is about awareness, not enforcement, they say in their prepared statement; it’s designed to support transparency and well-being and encourage open conversations about workload. They’re going to roll it out much more widely across their organization.
The estimate is based on employees’ weekly digital footprint, including video calls, desktop keystrokes, and scheduled meetings. So, people being people, part of the thrust of the article is what some of these junior employees are doing to tick the box that says they’re doing okay, while still putting in the time on the deals they’re trying to close. If they followed the cap to the letter and reduced their hours, they wouldn’t be able to close the deal. So I get that. They’ll find ways to work around this.
And I wonder, is this inevitably what we should expect to see in every organization? Surely an organization should approach this in a way that doesn’t encourage employees to find workarounds. I don’t know. My sense is that we’re going to see a huge amount more of this kind of thing in service industry firms in particular, starting with banks, I suspect.
Shel: I hope not. I mean, let’s take them at their word. Let’s say that this is their solution of having Big Brother looking over employees’ shoulders for the employees’ benefit. Like I said, let’s take them at their word. They don’t want employees overworking because they don’t want them dropping dead at their desks. Great. That’s a great thing.
You do that by having well-trained managers who understand that their role is to set expectations and to display the kind of caring for the members of their teams that leads them to make sure that they’re not overworking. Where I work, we are working really hard in communications, in HR, and at the executive levels to develop this culture of managing where managers are checking in on employees to make sure they’re okay. We’re training managers to watch for signs of mental distress among employees and then reach out to them to say, hey, let’s take care of this, right?
It sounds to me like JP Morgan would rather implement a Big Brother program than have engaging managers, one of the pillars of employee engagement, I might add. Why do people leave organizations? 50%, according to some research, leave because of their boss. And you know, if you have this churn among your junior people, maybe that’s because you’re doing a piss-poor job of training your managers to be really good managers. And if you did that, you wouldn’t need to erode the trust of your employee base by implementing Big Brother systems.
Neville: That makes total sense. I agree with you. But I’m wondering, maybe there’s something structurally amiss here. So for instance, the FT says in 2024, JP Morgan appointed a senior banker to oversee the well-being of junior staff. JP Morgan has since curtailed weekend work and also capped the working week for younger employees at 80 hours, typically based on self-reported numbers. That’s key, that last bit.
This process has proved imperfect as some junior bankers misreport the hours they work. One issue is they declare fewer hours than they have actually spent to avoid being pulled from existing deals or to ensure they can still be added to new ones. So I would say, if we kind of know this kind of behavior is going on, what are we going to do to address it and try and bring them around to our thinking? But that requires structural change in the organization as to how you do all this.
Shel: I have an answer. If AI is saving you money, use that money to hire more junior people so that nobody has to put in that kind of time. So staffing should increase as a result of the use of AI, not decrease, says I.
Neville: Are you listening, JP Morgan? Well, yeah, no, that’s a fair comment. I think just reading a bit more about the FT piece, it focuses on workplace surveillance technologies. So not necessarily AI doing this, although it must be in there somewhere.
Shel: No, no, I understand. But if we’re using AI in the organization and it’s lowering costs because the rote work is being done by the AI, those savings could go to the additional staff. So nobody has to put in 80 hours.
Neville: Yeah. Well, I think it’s a problem across the sector because the FT quotes Goldman Sachs, for instance: junior bankers on occasion have been pulled aside and told to rest when its internal electronic monitoring was triggered. Get that. That’s how they’re watching all the time.
I think the comment someone made on the FT’s piece about, you know, we’re going to see more of this — I think we will. It is clearly not perfect. I’m reminded a little of some of the stuff I paid a lot of attention to a couple of years ago about surveillance in China and the surveillance society in China, where you are monitored constantly all the time by the state. And it doesn’t necessarily mean central government, but the local way you live — the town, the city — monitors everything you do: what you spend your money on, what time you get up, what time you get on the train to go to work, how you clock in, you swipe your card — all that.
That’s something as part of their society and structure. We are probably heading that way, I would argue, in Western countries, notably in Europe, some European countries. I don’t know about the States, Shel, to be honest. I don’t really know whether this is likely to be kind of prevalent anytime soon. I wouldn’t be surprised if it is, particularly if it’s going to be done covertly as opposed to openly and transparently, which I think is likely in America.
Shel: Well, mass surveillance has definitely been in the news in the US lately with Anthropic pushing back on the Pentagon’s insistence that they be able to use Claude for that.
Neville: Yeah, I mean, we’ve got experiments going on here which make the headlines now and again, although no one seems to be unduly concerned, which is the police in some jurisdictions are trialing more facial recognition technology that is now far superior to what’s been done before, that scans people as a matter of course in any public place. That, I would say, is an inevitability. We’re going to see that.
So what does that mean for organizations? I mean, that’s a broad avenue to go down, the discussion on that wide topic. But in an organization, it surely does become understandable, if not acceptable, that when you show up at the office to work — and by the way, that’s still a thing for many organizations, even though I’m now seeing in all the newspapers here that because of the war in Iran and the price of oil shooting up and all this stuff, there’s now talk about one way you can help to reduce energy usage is work from home and drive less and drive slower.
So that kind of talk is now starting to permeate public discourse. So I wonder what difference that will make to any of this, because if we’re to see more and more people want to work at home, that’s reversing. Are we going to see a backlash from employers who demand people come to the office? I mean, these are just questions. I don’t have answers for those, but it’s part of the picture. We are facing this kind of change that has good points, I can see quite clearly, but it’s alarming the state we’re at with all of this.
Shel: Yeah, just for a point of interest, yesterday I watched a video on YouTube. It was Senator Bernie Sanders talking to Claude. This is on YouTube. I’ll share the link in the show notes. He’s asking Claude questions about what AI can do in terms of this kind of surveillance, its monitoring of people. And Claude is very, very candid in its answers to Senator Sanders. It’s about 11 minutes. I think it’s really worth watching because it surfaces a lot of these issues, and as a society, I think we have to decide whether this is something we want in the workplace or in general.
Neville: I agree. That’s interesting.
Shel: Well, thank you, Dan. Great report. I have to admit that I have been neglecting my Mastodon instance. It’s called Mastocomm, C-O-M-M, for communications. I set it up when I figured that it was an easy thing to do and a great way to learn about how to establish an instance in the Fediverse. And I haven’t been taking care of it lately. And Dan, your report has inspired me to go back. I’ve been away so long, it wanted me to log in.
But it’s still there. It’s still up and running, which means I still have money coming out of my checking account every month to pay the fee to the service I use to host it. So as long as I’m spending the money, I might as well manage that. So thanks for the reminder, Dan.
Neville: Yeah, good report on that. I’ve not listened to your audio yet. But thinking about Mastodon, I don’t go directly to Mastodon. I haven’t been there this year. What I do is every time I post on Threads, it posts to the Fediverse. And so I do it that way. It’s cheating a bit because I’m not actually engaging with anyone there at all. But I get quite a steady stream of engagement back, people who like and so forth. And I do occasionally do the same myself via Threads. So it’s a lazy approach to doing it. But I’m okay with that because I’m present via Threads and that works well. And it’s a useful way of keeping in touch. If Threads is more likely to be your primary engagement channel rather than Mastodon, that’ll work quite well.
Shel: If anybody’s interested in joining the Fediverse and being part of a Mastodon instance that is focused on communication, join me: mastocomm.org. I’ll look for you there.
Shel: A professor at Syracuse University’s Newhouse School recently made a point that deserves to be heard beyond the J-school world. Jason Davis, who specializes in detecting disinformation, said the challenge today isn’t really about spotting fakes anymore. The AI tools are so good now that there just isn’t much that we can catch. To break the misinformation amplification cycle, people need to apply critical thinking before they decide to pass something on.
Now that connects to something I’ve been watching closely, because the misinformation problem has moved well beyond being a journalism problem. It’s a business problem now, and that means it’s a communication problem. The scale is pretty significant. Deepfake incidents tracked globally surged from about 500,000 cases in 2023 to over 8 million last year. That’s a roughly sixteen-fold increase in just two years. A recent executive survey found eight in 10 executives are concerned about AI-driven misinformation impacting their brand. Yet many admit their companies aren’t fully ready to detect or respond.
A University of Melbourne/KPMG global study of 48,000 people across 47 countries found 87% want stronger laws to combat AI-generated misinformation. And a survey found that fewer than four in 10 Americans say that they can confidently spot AI-generated content, and 88% say it’s harder now than a year ago to tell what’s real online.
So who’s fighting back and how? Sophisticated newsrooms — think the New York Times, Bellingcat, investigative outlets worldwide — are now using multi-layered verification: a combination of reverse image search, metadata analysis, and geolocation cross-referencing to authenticate content. Reporters are using AI itself as a detection tool, analyzing thousands of posts to detect bot behavior by identifying patterns in timing, repetition, and network activity.
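To make the bot-detection idea concrete, here is a minimal sketch of the kind of pattern analysis described above, looking at timing regularity and text repetition. The scoring weights and thresholds are my own illustrative assumptions, not anything from the newsrooms mentioned; real systems combine many more signals, including network activity.

```python
from statistics import mean, pstdev

def bot_likelihood(timestamps, texts):
    """Crude heuristic score in [0, 1]: metronome-like posting intervals
    and heavy text repetition are both classic automation signals."""
    if len(timestamps) < 3:
        return 0.0
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(gaps)
    # Coefficient of variation of the gaps: near 0 means highly regular posting.
    cv = pstdev(gaps) / m if m else 0.0
    regularity = max(0.0, 1.0 - cv)                   # 1.0 = perfectly regular
    repetition = 1.0 - len(set(texts)) / len(texts)   # share of duplicate posts
    # Equal weighting is an arbitrary illustrative choice.
    return round(0.5 * regularity + 0.5 * repetition, 2)

# An account posting the same text every 60 seconds scores high;
# an irregular account with varied posts scores low.
bot = bot_likelihood([0, 60, 120, 180, 240], ["Buy now!"] * 5)
human = bot_likelihood([0, 45, 400, 420, 2000],
                       ["morning", "lunch pic", "hot take", "a reply", "a link"])
```

Run over thousands of accounts, a score like this is how AI-assisted verification surfaces coordinated behavior for a human reporter to then examine.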
Beyond individual newsrooms, the Coalition for Content Provenance and Authenticity, that’s the C2PA, is building broader infrastructure. They’re backed by Adobe, Microsoft, the BBC, Google, Meta, OpenAI, and others. With that backing, they’ve developed an open technical standard that functions like a nutrition label for digital content, establishing its origin and edit history. The U.S. Cybersecurity and Infrastructure Security Agency endorsed this approach in January last year. Adoption is still limited, but the standard exists and it’s worth watching.
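The “nutrition label” idea can be sketched in a few lines. To be clear, this is not the actual C2PA format, which uses cryptographically signed manifests embedded in the file; the unsigned dictionary below just illustrates the principle of binding a content hash to a claimed origin and edit history.

```python
import hashlib
import json

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_label(content: bytes, origin: str, edits: list) -> dict:
    """Attach a 'nutrition label' recording origin, edit history,
    and a hash of the content as labeled."""
    return {"origin": origin, "edits": edits, "sha256": fingerprint(content)}

def verify(content: bytes, label: dict) -> bool:
    # Any alteration after labeling changes the hash, so tampering is visible.
    return fingerprint(content) == label["sha256"]

photo = b"...raw image bytes..."                       # stand-in for a real asset
label = make_label(photo, origin="Example Newsroom",
                   edits=["crop", "color-correct"])
ok = verify(photo, label)                 # untouched content checks out
tampered = verify(photo + b"x", label)    # any modification is detectable
```

The real standard adds digital signatures so the label itself can’t be forged, but the core mechanism is this: origin plus edit history plus a hash that breaks on tampering.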
There’s also a striking research finding from a field experiment with readers of the German newspaper Süddeutsche Zeitung. Exposure to AI-driven misinformation reduced overall trust in news, but actually increased engagement with highly trusted sources. As synthetic content proliferates, credibility becomes scarcer, and as a result, becomes more valuable.
That finding has direct implications for us in organizational comms. A deepfake of your CEO, a fabricated press release, a manipulated earnings statement — these are no longer theoretical. A hacked news tweet in 2013 briefly erased $136 billion from the S&P 500. The tools to do something far more sophisticated are now consumer grade.
Deepfake fraud attempts grew by 3,000% in 2023, and humans detected manipulated media only 24.5% of the time. So practically: monitor for impersonation of your executives and brand. This belongs in your communications infrastructure. It’s not just an IT thing. Establish a verify-first culture inside your organization. Have pre-drafted response templates ready for the scenario where fake content goes viral under your or your organization’s name.
And invest in your organization’s credibility before a crisis arrives, because that research finding tells us audiences under information stress return to the sources they already trust. The newsrooms dealing with this are systematic. They document their processes and when they can’t definitively authenticate something, they say so. That’s the standard every comms team should hold itself to.
Neville, I know you’re watching all of this from across the Atlantic where the EU AI Act is pushing content labeling into requirements under law by August 2026. Are organizations taking this seriously? And is this regulatory pressure in Europe making any difference?
Neville: To your last point, I don’t think it’s making waves-type difference. Awareness is rising. I’m seeing more people talking about this topic online across Europe, here in the UK too. But I think it requires far more and more effective communication to bring the messaging home to people about this huge topic. So it’s early days.
We’ve got debate continuing here in this country about online safety and all these other issues that kind of obscure some of the important details such as this, for instance, that does require further debate. Things that I pay attention to certainly are the broad debates about all of this, but seeing what people are doing. You mentioned some examples in your introduction about some media broadcasters in particular, what they’re doing to verify the veracity of content. I saw an excellent article the other day about what Wikipedia is doing in this area, because there’s a place that’s at high risk of misinformation and disinformation.
But there’s no uniformity from what I’ve seen, certainly. There’s lots of homebrew solutions people are suggesting. There’s lots of good solutions some respected organizations are suggesting that you do, but there’s not a big groundswell of action on this yet, it seems to me. So I’d be interested myself even to hear what listeners in the UK and across EU countries have to say about what they’re seeing in this area. But I don’t see a huge amount of conversation going on about this.
Shel: And I’d really appreciate, listeners, if you’re in organizations that are doing anything to identify misinformation and to catch it before it’s used or even redistributed — what are you doing? How are you going about that? Is there any infrastructure for this that’s being implemented? I’d really like to know because I think this is going to become a bigger problem faster than most people are aware of.
Neville: Yeah, I mean, one thing I am seeing talk about that caught my attention quite dramatically is the amount of fake news in a broad sense, but misinformation, particularly about the war in Iran, the use of video that is simply fake. I’m also seeing the use of video that isn’t fake and being highlighted as the fact that it’s not fake.
The reality though is that like most things you encounter online, how do you really know? And what do you do if you see something you think, I’m going to share that with my network? What do you need to do before you do that? Most sensible people will take those precautionary steps, the most fundamental of which: how do you trust what you’ve seen? Is the source credible? Is it a reliable source? If it’s a media property, or even before that, who else is talking about this?
So these are things that I do as a matter of course now on almost everything I encounter online, particularly if I’m thinking of sharing it. I’ve yet to be caught out by not doing that. I make it a point, and partly it’s affected by the fact I’m doing less of that than I was before a couple of years ago, far less. I don’t post a lot on social networks, except stuff that I think is really interesting to share with people who follow me, or just because I feel like I want to share this because I think it’s interesting.
And that works. No other heavy message behind any of this stuff. But I do carry out due diligence. And I think I do it reasonably well because I’ve yet to be caught out. Now, of course, someone listening to this might say, well, let’s test him out on something then. OK, fine.
Shel: Now that we’ve heard you say this…
Neville: So, right. Go for it and do that. Let’s see how we go. But I think this is the status of where we’re at. The changes that are happening because of the events that are happening, and the fact that these so-called bad actors are increasing — there’s more and more of them. We have events taking place in the world now, note what’s going on in the Middle East, that lend themselves to more of this. You’ve got to really do your due diligence on things that you might not have felt you needed to before.
Shel: Yeah, and I think due diligence needs to go beyond the tools that can detect a deepfake. You’ve got to remember that people were sharing content that was disinformation before there was AI. So you run your algorithm, you put a video through a tool and it says, yep, this is real video, it’s not AI generated — but it’s claimed that that video is showing something from the Iran war when in fact the video was shot years ago during, say, the Iraq war, and somebody just grabbed that video clip and made the claim that this is from the current conflict. This happens all the time. It still happens today. It’s not from this weather event. That’s from that weather event five years ago.
So we have to be diligent and not just rely on the tools, and we have to come up with some solutions. I remember years ago when we reported it here, when blockchain was still a topic of conversation in digital circles, Ike Pigott had recommended a tool. I don’t remember exactly how it worked, but as you shot video, it was recorded into the blockchain, which would verify its authenticity. And that became a way for people to see that it was genuine video and not manipulated somehow and not a deepfake — it was actually shot on a video camera and uploaded as a blockchain record in real time. So there are potential solutions out there. We need to get serious about implementing them in this profession.
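The mechanism behind that kind of tool can be sketched simply. This is not the tool Ike Pigott recommended, whose details aren’t in the source; it’s a generic hash-chain over video chunks, the same construction a blockchain ledger uses, assumed here purely for illustration.

```python
import hashlib

def chain_video(chunks):
    """Hash-chain successive video chunks as they are recorded.
    Each link commits to the previous one, so editing any earlier
    chunk invalidates every hash from that point onward."""
    prev = "0" * 64   # genesis value
    links = []
    for chunk in chunks:
        chunk_hash = hashlib.sha256(chunk).hexdigest()
        prev = hashlib.sha256((prev + chunk_hash).encode()).hexdigest()
        links.append(prev)
    return links

original = [b"frame-group-1", b"frame-group-2", b"frame-group-3"]
published = chain_video(original)   # imagine this chain anchored publicly at record time

# A copy doctored after the fact diverges from the published chain
# at the edit point and everywhere after it.
doctored = chain_video([b"frame-group-1", b"deepfaked!!", b"frame-group-3"])
```

Anchoring the chain somewhere public as the footage is shot is what makes the record trustworthy: a later deepfake can’t reproduce hashes that were committed in real time.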
Neville: Yeah, that’s a good example of the blockchain one, although that was pretty niche. That was pretty out on the edge, as it were. There were lots of things like that that just didn’t survive and disappeared. Things change, things evolve, and people are trying new things. I don’t mean bad guys, but in a good way. So let’s see how that goes. But you need to keep vigilant on all this.
And by the way, when I mentioned misinformation, I wasn’t thinking of deepfakes and that kind of thing. It’s more the fundamental stuff that crosses your screen or your newsfeed every day, claiming that someone said something or did something, and it’s interesting and fine. Don’t trust it until you verify it. So if it’s on the BBC or CNN or any other broadcaster, you know, Süddeutsche Zeitung newspaper, the one you mentioned earlier, Shel — that’s a good bet that it’s OK.
But you know what? Some media recently have been caught out with fakes. So it still pays to do your own due diligence, particularly if that content is something you’re going to use in a way that could embarrass you if it turned out to be fake or simply wrong. So it’s worth doing. Most people think that they don’t have time to do that. You have to make the time. This is part of your future.
And AI has a role here. Arguably, you could say, well, I need to do this myself. No, you don’t really. Your favorite chatbot, if you trust it, does the searching and finds the sources. You then check them. It can check them too, but you still have to do that yourself. It just makes it easier for you to do that. You still want to do that work, by the way. There’s no magic bullet or shortcuts here. So it’s worth it. You learn a lot doing this, too. I’ve learned huge things from doing all this myself. And it’s been very, very useful.
Neville: So there we are. OK, let’s talk about bot traffic. In an interview at South by Southwest, literally a week or so back, with TechCrunch, Cloudflare CEO Matthew Prince said that by 2027 — so as you pointed out earlier, we’re eight months away basically — bot traffic will exceed human traffic on the internet. That’s not entirely new in principle. Bots have always been part of the web. But what he’s describing is a change in scale and function.
Now think about this: Cloudflare — I don’t have the exact number, but don’t they manage like 30% of all the traffic on the web that goes through some of their servers somewhere? They do caching. They do all sorts of interesting things with people’s data. I use it on my blogs. I’m sure we use it on the FIR network. I mean, it’s part of the plumbing of the internet now. And you might remember a month or so back, Cloudflare was all over the news because they were hit by a distributed denial-of-service attack or some such that took large chunks of the internet offline because people like Amazon and some of those big properties use Cloudflare too. So it’s quite something.
Anyway, historically bot traffic has been relatively stable, around 20%, largely driven by search engine crawlers. What’s changed is the impact of generative AI, said Prince. His point is that AI agents behave fundamentally differently from human users. A person researching a purchase might visit a handful of sites. An AI agent performing the same task might visit thousands of sites. This is not incremental growth. It’s a multiplier effect — not just more traffic, but a different kind of traffic.
That has consequences at three levels: infrastructure, economics, and behavior. First, infrastructure. If AI agents generate orders of magnitude more requests than humans, then the web becomes a system that increasingly serves machine activity. Prince talks about the need for new infrastructure, including ephemeral sandboxes where agents can execute tasks without overwhelming the broader network.
Second, economics. The commercial web has been built around human attention: visits, impressions, and clicks. If a growing share of traffic is non-human, that model doesn’t just weaken — it becomes misaligned with how the web is actually used.
Third, behavior. Prince characterizes this as a platform shift comparable to the move from desktop to mobile. If that’s right, then the way information is discovered, consumed, and acted upon changes fundamentally — and not necessarily by humans.
That raises a set of implications that go beyond infrastructure. If machines are increasingly intermediating access to information, then visibility is no longer just about being found by people. It’s about being processed, selected, and used by systems. This links back to the earlier themes. We talked about how AI changes what work is worth. We followed that with how AI changes what and how work is measured. Here, it’s changing the environment in which both of those things happen.
So this is less about traffic and more about control — who or what is actually navigating the web. Which leads to some important questions. If AI agents are doing more of the searching, what does it mean to be visible online? If traffic no longer equates to human attention, how do organizations think about value? And if this is indeed a platform shift, what replaces the current models that underpin the web?
Shel: These are interesting questions, and I think that this is ultimately more a matter of evolution, just like the web was, even the internet before we had the graphical interface of the web. It’s a shift in what’s doing what. But at the end of the day, all of those bots have been deployed by whom? I mean, I have agents out there. These are just set up on Claude and on ChatGPT that are going out and doing searches and coming back and giving me reports. Me, I’m a human, last time I checked.
And I’m using the results of the work that those bots do. So these agents are proxies for the humans who need something done with this information, whether it’s delivering a report or creating a spreadsheet or what have you.
These are human-deployed bots. I mean, ultimately in every case, a bot has been deployed by somebody for some purpose. And I think having your content out there for those bots to find so that those results are delivered back to the human and you’re visible there — all it’s doing is reducing the need for the human to sit there for hours doing the searching and just having the AI go out and do the searching for them and delivering back results. But those results are still being used by people.
So this doesn’t concern me all that much, unless there’s something going on here that I’m not aware of with agents suddenly creating themselves to go off and engage in activities that have no human behind them, in which case we’re in the realm of science fiction. And I don’t think we’re there yet.
Neville: Well, that could be the case, although I think there are signs that we might be heading in that direction. Thinking about what we talked about in the last episode on that darker place that you cited, Ethan Mollick talking about what happens if it all gets taken over by an AI — that question applies here as well. You’ve got the AI agent instructing other AI agents. And I read someone talking about that very topic in quite a compelling way that this is already happening. So that wouldn’t surprise me one bit at all. So we’ve got to think of that too.
Shel: Yeah, now we’re talking about two different things, right? I mean, we’re talking about bots and agents here as an umbrella topic. But the fact that bots have been deployed to search and report back is one thing. Bots that are creating content is another, which is actually the topic of my next report.
Neville: Got it. Yeah, you’re absolutely right. We were talking about bots. So they are deployed by humans to achieve certain things. I guess I could project that out and say what happens in a darker place where the bots are deployed by AI agents unbeknownst to the human. I mean, I’m not Skynetting here, by the way. This is just projecting the thought out. And I welcome these kinds of discussions on “what if” when we see what’s happening now. It immediately makes you think, yeah, but what if? So this is part of how we generate good conversation about this kind of topic.
But it is interesting. I think the way in which Matthew Prince kind of framed it — that someone does a search for something in a retail outlet online and he or she may do a couple of dozen searches, but the AI instructs a bot to do this and that bot goes out and there’s thousands of searches all in a short period of time. And you suddenly see, wow, the scale of this is absolutely phenomenal. And that’s really, I think, part of what Prince is arguing: when bot traffic overtakes human traffic, we are confronting a scale that is orders of magnitude larger, driven by the system itself.
Is he ringing alarm bells here? I’m not sure that he is or not, but he’s looking at the need for a new kind of infrastructure to take care of this. And I think that’s actually a good avenue to explore.
Shel: Probably. I mean, Google has always used bots to go out and scour the web — called them spiders back in the day. But they only sent out the one and it found everything, those millions and millions of sites. And all that information resides on Google’s servers. So when you’re doing a search, it’s not going out onto the web, right? It’s looking in its own data centers and giving you those results. And those spiders, those bots, are always out there, always running, but just the one from Google.
Now with AI, you’re asking it to go out in real time and scour the web. So yeah, it’s sending out thousands in order to do essentially the same work that Google did. And then it brings you back the result in that narrative output that you get. So that’s why we’re seeing so many more bots out there. Is this a problem? I’m not an engineer, so I don’t know.
Neville: No, I don’t know either. I’m not sure it is a problem. But I’m cognizant, paying attention to what Prince is saying, that none of this is incremental growth — it’s a multiplier effect. And could it be that we’re at risk of everything grinding to a halt? Is that what he’s saying?
The consequences I listed — infrastructure, economics, and behavior — make sense, and they are connected. The generating of orders of magnitude more requests than humans are capable of doing is partly the thing. And I can see that. The web is then a system that increasingly serves machine activity, which is how he’s making that connection. He talks about the need for new infrastructure, including sandboxes where agents can execute tasks without overwhelming the broader network. That makes a lot of sense.
Shel: Yeah, I like that. Nothing wrong with that.
Neville: I use sandboxes myself, so I understand conceptually what that means. The economics about it all, where the behavior is now totally different. Visits, impressions, clicks — that’s what humans did, or still do largely. But as he argues, if you’ve got a growing share of this, increasingly more non-human traffic according to Prince, that model doesn’t just weaken — it becomes misaligned with how the web is actually used today.
OK, does that mean we need to change that? Well, yes, it does. How do we do that? Well, that’s part of the bigger debate. Behavioral characteristics — he’s likening this to the move from desktop to mobile. If he’s right, then the way this is all discovered, consumed, and acted upon changes, not necessarily by the humans, changed by the AI. Is this a bad thing? I don’t know. Maybe he’s just raising a hand of caution and ringing the alarm bell. Maybe that’s it. But it certainly is provocative what he’s suggesting.
Shel: Yeah, certainly there’s absolutely going to be more bot traffic on the internet. That’s inescapable with all of this. Maybe the LLMs, the labs, find ways to confine the searches so they’re searching relevant sites to reduce that traffic. I don’t know.
Neville: Yeah. So let’s hear about your connection piece then about this. Assume that humans are not at the heart of all of this.
Shel: Sure. And you mentioned Ethan Mollick earlier. I mentioned this in an earlier episode a couple of weeks ago, I think. But he said that when he posts something, he can tell that about 70% of the comments that are left on his posts have been generated by bots. And it’s weakened the value of LinkedIn to him, which is discovering smart people with intelligent thoughts and perspectives. And 70% of that is now being generated by bots.
So we have bots that are now creating content. So you talked about bot traffic — stay with that theme, but focus more on the content. A new peer-reviewed study just published in the Journal of Public Relations should be required reading for anyone responsible for managing an organization’s reputation and messaging. The paper is titled “Social Bots as Agenda Builders: Evaluating the Impact of Algorithmic Amplification on Organizational Messaging.” And it came to my attention by way of Bob Pickard, one of Canada’s most respected PR practitioners and someone whose commentary on this research carries special weight. More on that in a minute.
The research, led by Philip Arceneaux at Miami University, along with colleagues from the University of Arizona, University of Texas, and University of Florida, is the first study in public relations scholarship to empirically measure how social bots interfere with organizational messaging. The authors note they found no prior PR research addressing this specifically, which is remarkable given how long the threat has been visible.
The study analyzed nearly 900,000 tweets generated during Ohio’s 2022 midterm elections. What the researchers found was that social bots successfully influenced the agenda formation process, most heavily in negative tone and most notably among the election campaigns. Bot messaging was most effective at influencing attribute salience — that is, how issues were framed and characterized — driving primarily negative sentiment. The bots were the strongest influencers of campaign agendas with measurable downstream influence on press and public discourse.
Here’s the distinction that Pickard zeros in on in his commentary. And I think it’s the most important insight in the entire body of research. The bots didn’t control what was discussed. They controlled the tone in which it was discussed. And as Pickard writes, that may be a more dangerous lever. Your organization puts out a carefully crafted message. The bots don’t need to invent a counter-narrative. They just need to inject enough negativity around yours that the frame gets corrupted before it can set.
A primary strategy social bots adopt is the creation of information disorder — information ecosystems filled with suspicion and distrust that erode public confidence. And as Pickard observes, this has a direct downstream effect on communications decisions. Distorted inputs produce distorted decisions. If your social listening is picking up manufactured sentiment — bot-driven negativity masquerading as genuine stakeholder concern — you may be prioritizing the wrong issues, reacting to the wrong pressures, and in some cases, misreading your stakeholders entirely. Some of what looks like groundswell may just be a bot farm.
The asymmetry that Pickard describes is sobering. A small network of automated accounts can systematically degrade the messaging environment of a well-funded organization with a full communications team. And as lead researcher Arceneaux put it, it’s not natural selection anymore — it’s artificial selection by who controls the most bots.
A survey cited in the study found that 51% of leading communication professionals reported that social bots present a clear threat to organizations and their reputations, and that practitioners view social bots as the most pressing ethical challenge in public relations. And that was before generative AI made bot-produced content dramatically more convincing.
Why does Pickard’s voice matter here particularly? Well, when he blew the whistle on Chinese interference at the Asian Infrastructure Investment Bank in 2023, hundreds of pro-China bots on Twitter targeted him with insults, accusing him of being an American agent, a white supremacist, and a neocolonialist. The pattern the researchers describe in the study — rapid negative amplification, coordinated framing, and agenda hijacking — isn’t abstract to Bob. He has operated inside it.
And his observation that state-directed information operations seem to understand the bot asymmetry better than most corporate communications leaders is a pointed challenge to our profession.
The study recommends stronger media relationships, better investment in bot detection tools, and a return to traditional polling as a signal less susceptible to manipulation. And that’s sound advice. And on the practical side, research on bots’ impact on public discourse suggests their influence is most pronounced in the early stages of an issue — before credible sources establish the dominant narrative. Which means getting your authentic message out fast, before the negative frame hardens, is now a genuine strategic imperative, not just a good practice.
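To make that “investment in bot detection” point a little more concrete, here is a minimal, purely illustrative sketch of what filtering manufactured sentiment out of social-listening data could look like. Nothing here comes from the study or from any specific vendor tool; the account fields, heuristics, and thresholds are all hypothetical assumptions for illustration.

```python
# Illustrative sketch only: exclude likely-bot accounts from a
# social-listening sample before measuring negative sentiment.
# All field names, heuristics, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int        # how old the account is
    posts_per_day: float # average posting rate
    followers: int
    following: int

def bot_score(a: Account) -> float:
    """Crude heuristic: newer, hyperactive, low-follower accounts score higher."""
    score = 0.0
    if a.age_days < 30:          # very new account
        score += 0.4
    if a.posts_per_day > 50:     # implausibly high posting rate
        score += 0.4
    if a.following > 0 and a.followers / a.following < 0.1:
        score += 0.2             # follows many, followed by few
    return score

def organic_negativity(posts, threshold=0.6):
    """Share of negative posts, counting only accounts below the bot threshold."""
    kept = [(acct, neg) for acct, neg in posts if bot_score(acct) < threshold]
    if not kept:
        return None
    return sum(1 for _, neg in kept if neg) / len(kept)

# Example: two likely bots pushing negativity alongside two organic accounts.
posts = [
    (Account(5, 120, 3, 400), True),       # likely bot, negative
    (Account(7, 200, 1, 900), True),       # likely bot, negative
    (Account(900, 2.0, 500, 300), False),  # organic, positive
    (Account(1500, 1.0, 1200, 400), True), # organic, negative
]
print(organic_negativity(posts))  # 0.5 after filtering, versus 0.75 raw
```

The point of the sketch is the study’s own warning in miniature: the raw feed reads as 75% negative, but once the two high-scoring accounts are excluded, the organic signal is an even split. Real detection is far harder than three heuristics, which is exactly why the researchers recommend dedicated tools and less manipulable signals like traditional polling.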
There’s also a real-world corporate illustration of this dynamic, and it’s one we have talked about more than once. In 2025, research found that roughly half of all the posts about the Cracker Barrel controversy in its early days were driven by inauthentic bot activity. A minor design story was artificially elevated into a culture war flashpoint before human communicators could get their footing. That’s the playbook now.
Neville, I know you follow this activity and information disorder closely and you’ve watched platform governance response in Europe in particular. What do you think? Are social platforms doing enough to protect organizations from bot-driven agenda hijacking, or are communication professionals essentially on their own here?
Neville: I don’t think they’re doing enough. The platforms are doing something, but their attention is not really on this at all. I think any organization, any corporate communicator, needs to proceed as if you’re on your own, and to take the steps that are needed.
Reading Bob’s piece on LinkedIn, he uses an interesting turn of phrase, saying that “hands-on combat experience versus synthetic competitors gaming the algorithm in contested environments” is now extremely important. So make of that what you will, but you need to be up to speed with these developments. There are plenty of places you can get information, insights, and guidance as well.
I think, though, that this is the fundamental point Bob Pickard makes in his piece: some communication leaders are still fighting the last war. This new research soberly explains the realities of the modern PR battleground.
Now, I have not read the article, Shel, that you had in our Slack channel. I mean, it’s 34 pages of eight-point type, it seems to me. It’s big. So I would get my AI assistant to summarize the whole thing for me and give me the highlights. I haven’t done that. I think I will do that even to get a good understanding of this.
It seems to me that this is yet another example of the changes that are happening, whether we like it or not, that we have to pay attention to as communicators. We’ve touched on quite a few in this discussion today. Here’s another one. So I can’t really comment more than that, Shel. I’ve not read the report, which I am going to do. But I think his intro to the piece on LinkedIn is good. It’s a good introduction to it. And it then makes it easier to try and wade into it. Although I think for most communicators, some kind of summary is what they’re going to need rather than trying to read the whole thing.
Shel: Yeah, well, the bottom line is, I think, pretty simple. If you release some information and it’s in somebody else’s interest to shift the tone in order to control the agenda, those bots are going to be deployed very, very quickly to create content that changes the framing of what you started with. You had a communication goal, and you as a communicator need to be prepared for that. You need to have processes in place — and these are new processes and new workflows — to make sure that what you want people to understand is the message that fixes itself in people’s minds before these bots can come in and mangle it, because that’s what’s happening pretty routinely now.
Shel: And that will be a -30- for this episode of For Immediate Release. We do want to remind everybody again, because we mentioned it earlier: comment on what you’ve heard. If you have thoughts, experiences to share, or questions, share them. The place most people are doing that these days is LinkedIn — in fact, every comment we shared today was left on the LinkedIn posts where we announce the availability of a new episode. So if you follow Neville or me on LinkedIn, you will get notifications of new episodes, and that’s the place to comment.
You can always comment on the show notes. That’s where people used to do this all the time. Remember blogs when people used to comment on blog posts? You could do that. You can send us an email to [email protected].
Shel: Boy, am I overloaded with spam in that account, but there has been not one comment in the last month. One of the things I do find in that email account is any voicemail messages you have left. Just go to [email protected], click Send Voicemail, and you can send us your comment that way — we’ll play it. We’d love to have another voice on the show. You can also send us an audio that you record; just attach it to an email and send that to [email protected].
We also have the FIR community on Facebook. And there are lots of places that you can tell us what you think. We’d love it if you did. And we will share that on the next monthly long-form episode. That next monthly long-form episode is coming on Monday, April 27th. Neville, you and I will record that on Saturday, April 25th. So we will have our monthly episode then. Between now and then, not this week, but starting next week, we will have our shorter-form one-topic weekly episodes. It should be three or four of those before we get to the April long-form episode. And that will in fact be a -30- for this episode of For Immediate Release.
The post FIR #506: Battle of the Bots! appeared first on FIR Podcast Network.
