For a while, businesses were flexing their social responsibility muscles, weighing in on public policy matters that affected them or their stakeholders. These days, not so much, with leaders fearing reprisal for speaking out. But silence can have its own consequences. Also in this episode: The gap between AI expectations and reality; rent-a-mob services damage the fragile reputation of the public relations profession; too many people think AI is conscious, so we have to devise ways to reinforce among users that it’s not; Denmark is dealing with deepfakes by assigning citizens the copyright to their own likenesses; crediting photographers for the work you copied from the web won’t protect you from lawsuits for unauthorized use. In Dan York’s Tech Report, Dan shares updates on Mastodon’s (at last) introducing quote posts, and Bluesky’s response to a U.S. Supreme Court ruling upholding Mississippi’s law making full access to Bluesky (and other services) contingent upon an age check.
So far, AI Isn’t Taking Jobs or Generating Profit
Companies Are Pouring Billions Into A.I. It Has Yet to Pay Off.
Seizing the agentic AI advantage
Not today, AI: Despite corporate hype, few signs that the tech is taking jobs — yet
1 in 6 workers pretend to use AI amid workplace pressures, survey finds
We must build AI for people; not to be a person
FIR Interview: Monsignor Paul Tighe on AI and Humanity
The Wisdom of the Heart (Neville’s post on Monsignor Tighe’s remarks)
As Rent-A-Mob “Protests” Rage, PRSA’s “Ethics” Board is AWOL
Boom times for rent-a-mobs
Fox News’ Lawrence Jones Presses Rent-A-Mob Company CEO Over Protests
Denmark Aims to Use Copyright Law to Protect People From Deepfakes
Denmark to tackle deepfakes by giving people copyright to their own features
When Does Corporate Silence Backfire?
Home Depot keeps quiet on immigration raids outside its doors
Facebook post on crediting photographers when you don’t have permission to use their content
Unmasking the Copyright Trap: The Dark Side of AI Bots

Links from Dan York’s Tech Report:

Quote Posts Coming to Mastodon
Our Response to Mississippi’s Age Assurance Law – Bluesky

The next monthly, long-form episode of FIR will drop on Monday, September 29.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com.
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Hello everyone and welcome to For Immediate Release. This is episode 478, the monthly long-form edition for August 2025. I’m Neville Hobson.
And I’m Shel Holtz, and we have six reports for you today. Hope you find them illuminating. And if you find any of them worthy of comment, I would hope that you would comment on them. There are a number of ways to comment on the content that you hear on For Immediate Release. You can send us an email at fircomments@gmail.com and attach an audio file if you like. You can record that audio file.
On the FIR website, there’s a tab in the right-hand corner that says “Record Voicemail,” and you can record up to 90 seconds. You can record more than one; we know how to edit those things together. So send us your audio comments, but you can also leave comments on the show notes at FIRpodcastnetwork.com and
on the posts we make at LinkedIn and Facebook and Threads and Bluesky and Mastodon. You can comment in the FIR community on Facebook. There are lots of ways that you can share your opinion with us so that we can bake those into the show. And we also appreciate your ratings and reviews. So with those comment mechanisms out of the way, Neville, let’s
hear about the episodes that we have recorded since our last monthly episode.
We did five since then. Actually, it was four plus the last monthly, so we’ll start with that one. It’s episode 474 for July, the long-form episode. That one ran one hour, 33 minutes, so a bit shorter than we usually do for the month. Hefty, but good, as Donna would say. Yeah, exactly.
So we covered a number of topics related to AI, which is how we titled the episode show notes: AI is redefining public relations, driving a change in the way we craft press releases; PR is at the heart of AI optimization; and more. Good discussion. We had lots of topics, and the links are brilliant. Lots of content we linked to in that episode.
Then we followed that. That was published on the 28th of July. On the 29th, the day after, we published an FIR interview with Monsignor Paul Tighe of the Vatican. That was on AI ethics and the role of humanity. It’s actually an intriguing topic. We dove into a document called Antiqua et Nova that was really the anchor point for the conversation, which talked about
the comparison of human intelligence with artificial intelligence, and that drove the discussion. He was a great guest on the show, Shel, and it’s intriguing. There’s more coming about that in the coming weeks, by the way, because I’ve been posting follow-ups to that in little video clips from the interview, and there’s more of that kind of thing coming soon. So we have a comment, right?
We do, from Mary Hills out of Chicago. She’s an IABC fellow who says, insightful and stimulating discussion. Thank you to the extraordinary host team for making this happen and Monsignor Tighe for sharing his insights. To the question, my view as a ComPro is to build bridges to discover options to move forward and choose the best way. Think discursive techniques, sociopositive climates, and our ability to synthesize data and information.
It taps into those intangible assets we bring to our work and are inherently in us.
Good comment. Reminds me, by the way, related to what you were talking about before we started this on how to comment: most of the comments we seem to get, certainly in the last six months, if not more, have been on LinkedIn. It’s a great place for discussion, but that’s a business network. You need to be a member to see them. So if you’re not a member and you want to comment, join LinkedIn, otherwise you won’t be able to.
Yeah, it is. Well, you’ve got a paid option, but generally it’s free unless you take out the paid option. I’ve got the paid option too, just as a little aside there. So we followed that on the 4th of August with episode 475, title of the post: Algorithms Got You Down? Get Retro with RSS. The rise of social media news feeds had rendered RSS less useful for many people, we said, and declining usage led Google to sunset Google Reader.
Not for me, I pay for mine, but.
Yeah, that’s right. Exactly.
But RSS feeds never went away, and we explored that a bit. Most people don’t know that all the newsletters they subscribe to, the Substacks or whatever publication it is, rely on RSS; it drives a lot of how publishers get the content they include in those publications. So it’s part of the plumbing, and it always has been, even if people don’t think about it now. But we had an interesting perspective on that, on how to use RSS afresh in a slightly different way.
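To make that plumbing a little more concrete, here is a minimal sketch of how a curation or newsletter tool might pull the latest items from a feed. It assumes the third-party feedparser package is installed, and the feed URL is just a placeholder.

```python
# Minimal sketch: fetch and read an RSS feed the way newsletter tooling might.
# Assumes the third-party feedparser package (pip install feedparser);
# the URL below is a placeholder, not a real feed.
import feedparser

FEED_URL = "https://example.com/feed.xml"

feed = feedparser.parse(FEED_URL)
print(f"Feed: {feed.feed.get('title', 'untitled')}")

# Each entry carries the title, link, and summary a curator would reuse.
for entry in feed.entries[:5]:
    print(f"- {entry.get('title', 'untitled')} ({entry.get('link', '')})")
```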
476, on the 12th of August: Rewiring the Consulting Business for AI. We reviewed the actions of several firms and agencies and discussed what might come next for consultants. There’s been a change, with firms almost literally changing business models with the rise of AI, agentic AI in particular. So we explored that; a good conversation. And finally, 477, on the 18th of August: De-Sloppifying Wikipedia. That’s a heck of a
descriptor you put in the headline, de-sloppifying. Wikipedia introduced a speedy deletion policy for AI slop articles. It’s actually a bigger deal than most of us would realize if we ever thought about it. Wikipedia, the user-generated content encyclopedia, has been trying to address for a while
the rise of AI-generated content, which is very difficult to handle in a collaborative editing environment with volunteer editors that is all about consensus agreement on any change or addition. That takes a while, and AI slop arrives at light speed by comparison to that procedure. So they’ve come up with a speedy deletion policy, and that’s getting some discussion too. But Wikipedia
is an important place online. It has been for a long time a kind of natural first place that shows up when you’re looking for information about a company, an individual, a subject of some type, whatever it might be. So trust is key to what you see there, and we had quite a bit of a conversation on that. That wraps up what we’ve been doing since the last episode.
We did have a comment on 477, this from Mark Hillary, who says, got to say I’m not familiar with Trust Cafe.
Oh, we did. You’re right. We did. Yes. Yep.
No, me neither. That was Mark Hillary. I’m surprised I didn’t leave a comment in reply to him because I know him, but obviously I didn’t see the comment at the time.
Now I have to go look that up.
Well, it’s waiting. It won’t go anywhere. We also, in the last week, recorded the most recent episode of Circle of Fellows, the monthly panel discussion with four fellows of the International Association of Business Communicators. This was episode 119 of this monthly panel discussion, and it was on sustainability, communicating sustainability.
The panel included Zora Artis from Australia, Bonnie Caver from Texas, Brent Carey from Toronto, and Martha Muzychka from the far east of Canada. The next Circle of Fellows is scheduled for September 18th at 10 a.m. I tell you all of this because you can watch it in real time and participate in the conversation. This one is going to be about hybrid communications and hybrid workplaces.
It will be moderated by Brad Whitworth, and three of the four panelists have been identified so far: Priya Bates, Andrea Greenhouse, and Ritzy Ronquillo. So far, Brad, the moderator, is the only American on that panel: Priya from Toronto, Andrea from Toronto, and Ritzy from the Philippines. So it’ll be a good international discussion on hybrid work.
That will lead us into our reports for this month, right after this.
But one of the biggest workplace stories right now is the widening gap between the promise of AI and the reality employees are living day to day. The headlines have been flooding the zone lately. MIT researchers report that 95% of generative AI pilots in companies are failing. The New York Times recently noted that businesses have poured billions into AI without seeing the payoff.
And Gartner’s latest hype cycle has generative AI sliding into the famous trough of disillusionment. By the way, that MIT report is worth a healthy dose of skepticism. They interviewed something like 50 people to draw those conclusions. But the trend is pretty clear. The number of pilots that are succeeding in companies is definitely on the low end. But while companies wrestle with ROI, employees are wrestling with something more personal.
Pew research found that more than half of US workers worry about AI’s impact on their jobs, while most haven’t actually used AI at work much yet. NBC reported that despite the hype, there’s little evidence of widespread job loss so far. Still, the fears are real, and they’re being compounded by mixed signals inside organizations. Here’s one example I read about. A sales team was told to make AI part of every proposal,
but they weren’t offered any guidance, any training, any process change. As a result, some team members just kind of quietly opened ChatGPT and used it to generate some bullet points. Others copied old proposals and slapped on an AI-enhanced label. A few admitted they just pretended to use AI to avoid looking like they were behind the curve, which, by the way, lines up with a finding from HR Dive that one in six workers say they pretend to use AI because of workplace pressure.
That’s not innovation, that’s performance theater. This is where communicators need to step in. Employees don’t need more hype, they need transparency. They need to hear that most pilots fail before they succeed. They need clarity about how AI will really fit into their workflows and they need reassurance that the company has a plan for reskilling, not just replacing its people.
So for managers, and I am a firm believer that we need to work with managers to help them communicate with their employees, here’s a simple talk track you can put in their hands right away. So share this with the managers on your teams. First, AI is a tool we’re still figuring out; your input on what works and what doesn’t is critical. Second, we’re not expecting you to be experts overnight. Training and support will come before requirements. And third,
Your job isn’t disappearing tomorrow. Let’s focus on how these tools can take that busy work off your plate. And for communicators thinking about the next 30 days, consider a quick communication action plan. On week one, launch a listening tour. Ask employees how they feel about AI and where they see potential. Week two, share those findings in plain language, including what employees are worried about. Week three,
Host AI office hours with your IT team or HR partners to answer real questions. And on week four, publish a simple playbook. What’s okay, what’s not? How employees will be supported as the tech evolves. That should help you cut through the hype while keeping employees engaged. The technology may still be finding its footing, but if communicators help employees feel informed, supported, and included,
The organization will be in a far better position to capture real value when AI does start delivering on its promises at the enterprise level.
Interesting statistics there, Shel. Listening to that advice you gave just made me think straight away, and indeed looking at the HR Dive report in particular, that 75% of workers said they’re expected to use AI at work, whether officially or unofficially. That’s a bit alarming, I think.
Some people said they feel pressured and uncomfortable, and some said they pretend to use it rather than push back. So that’s part of the landscape. And that seems to me to be what needs addressing first and foremost, because if that is the situation in some organizations, then communications has got a real uphill struggle to persuade employees to do all the things that you mentioned.
So, you know, the comms team could do all those things. Week one, we do this; week two, that. But unless you get the engagement from employees that makes it worthwhile, it’s not worth doing if the culture in the organization means you’re not really seeing the right support from leaders. So that is probably the fundamental that needs addressing. It’s a sad fact, isn’t it, if that is the climate still that leads to this kind of reporting.
I don’t hear similar in the UK, but then again, there’s not so much research going on, I don’t think, as there is in the US, plus the numbers are smaller here. This is very US-centric. This one in HR Dive is a thousand people they talked to. Nearly 60% said they use AI daily. I’m surprised; I’d have thought that might be higher. So that’s all part of the picture there. That makes it a real struggle to implement what you’ve suggested.
What do you think? Is it a real hurdle?
I think it is a real hurdle. And I think one of the things that we need to acknowledge is that it’s leaders in organizations who are driving the adoption of AI. Let’s be clear: it’s not IT behind the AI push. It’s leaders who see the potential for doing more with less and earning more and everything else that AI has promised, and they are jamming it down the organization’s throat.
I have mentioned before on the show that I recently read a book called How Big Things Get Done. It’s mainly about building. It’s written by a Danish professor who has the world’s largest database of megaprojects. But the conclusion that he draws is that the projects that succeed are the ones where they put all of the time into the planning upfront. If you jump right into the building, you get
disasters like the California high-speed rail and the Sydney Opera House, which I didn’t realize was a disaster until I read about it. But my God, what a disaster. And the ones that succeed are the ones that spend the time on the planning. The Empire State Building went up in, I don’t remember if it was two years; I mean, it was fast, but they put a lot of time into what we call pre-construction. And I think that’s not happening with AI
in the enterprise right now. I think there are leaders who are saying we have to be AI first. We have to lean into AI. We need to start generating revenue and cutting headcount. So let’s just make it happen. And there’s no planning. There’s no setting the environment for employees. There’s very little training. Although I do see that there is a shift in
the dollars that are being invested in AI, moving to the employee side and away from the tool side, which is heartening. But employees are concerned about this because they’re not getting the training. They’re not getting the guidance. They’re not seeing the plan. All they’re hearing is, we’ve got to start using this. And I think that would leave people concerned. I think that explains a lot of the angst that we’re hearing about.
Yeah, that makes sense. I mean, again, just glancing through these statistics in the HR Dive report, the contrast I’m reading is interesting. It says 84% of workers said they feel more productive using AI. 71% said they know how to use it efficiently. They report less burnout, less work stress, better job satisfaction. Nearly a third said they feel less lonely.
that would be me, by the way.
Those are the ones who’ve developed a relationship with ChatGPT, I know. And a quarter said they collaborate more. 4.0 in particular; I was right there, I tell you. But then, in contrast, some workers said they’re struggling to keep up, with one in four feeling often or always overwhelmed by AI developments. And a third said that learning, using, and checking AI takes as much time as their previous approach to work. And 25% of those expected to use AI at work said they have received no training.
Yeah, the 4.0 in particular,
Another 25% said they did receive training, and a third were given dedicated time at work to learn AI skills. So it’s not all bad. That’s a fact. But it goes on, quoting some people at Deloitte about AI development: a disconnect has emerged, where some people are pretending to understand the tech and others decline to prioritize it. So you’ve got a real mixed bag of landscapes, if you like, that need, well…
To me, it seems that you need to identify this and figure out how you’re going to address it. Because of the conflict, well, the contrast in the data, it seems to me: you’ve got a high percentage saying they’re more productive, others struggling to keep up, others not getting any training at all. You mentioned those construction examples, like the Empire State Building going up real fast.
The reality with AI, I mean, to coin a corny phrase again, I suppose, is that things are developing at light speed, things are happening so fast, that it is hard to keep up with it. So the pressure is there, particularly in the kind of more relaxed environments today, more informality, less formality, where top-down control has vanished
and anyone can get access to information about literally anything just by going online. And so people are finding out about these things. They’re exposed to, oh, this is the latest AI, look at this one, and they hear from their peers and so forth. And unless you’ve got a credible resource that is appealing to people, they’re going to do their own thing, particularly if they don’t feel they’re getting any kind of support on how to implement all this stuff. So this is quite a
challenge for communicators. But I think it’s a bigger challenge organizationally, in leadership, where you’ve got this issue that doesn’t seem to be being addressed by many companies. And I would stress that this is not widespread. I don’t see anything in here that tells me this is the majority overall in organizations in the US, in spite of some of these percentages that suggest otherwise. But it is definitely a situation that is not good for the organization. And surely
that must be apparent to everyone, I suppose.
You know, I would hope, but I would also hope that communicators step up and start documenting what’s going on in their organizations and feeding that back up, representing the employee voice to the leadership of the organization, so that maybe they’ll start taking a step back and thinking about how to do this strategically, because it hasn’t been strategic to this point. As employees read about these claims of 95% pilot failure,
those who are not really enthusiastic about AI will be able to use that as an excuse for not embracing it. Well, it doesn’t work anyway, and it’s not really making a difference, and companies aren’t achieving any ROI. So why should I spend time on this? It’s probably going to be gone in six months, right? And I was listening to an interview with Demis Hassabis, the CEO of Google DeepMind.
And this was on the Lex Fridman podcast, a long two-and-a-half-hour interview, but great. One of the things that he talked about, as Lex Fridman brought it up: he said, I have a friend who studies cuneiform, ancient writing carved in stone, right? And he didn’t know a thing about AI. He’d barely heard about it. And
it was Hassabis who made the point. He said, you know, there are a lot of us who are talking about this and enthusiastic, and, you know, if you spend time on X, for example, everything is AI all the time, and we lose sight of the fact that there is a huge part of the population that is blissfully unaware of all of this still. So there’s that to deal with too.
Okay, so speaking of AI, one of the big AI stories this month comes from Mustafa Suleyman, the CEO of Microsoft AI. He’s written a long essay with a striking title: We Must Build AI for People, Not to Be a Person. In it, he raises a concern about what he calls seemingly conscious AI. These are systems that won’t actually be conscious, but will be so convincing
that people will start to treat them as if they are. He argues that this isn’t a distant science fiction scenario. With today’s models, long-term memory, and the ability to generate distinct personalities, it could arrive in just a few years. Already some people project feelings onto their chatbots, seeing them as partners, friends, or even divine beings. We’ve been hearing a lot about that recently. I’ll hold my hand up: I had a great relationship with my good friend and assistant, ChatGPT-4o.
I was not happy with the move to ChatGPT-5, which ditched all of that, and I felt like I was talking to someone I didn’t know at all, or who didn’t know me. So I get that. But Suleyman in his essay warns that this trend could escalate into campaigns for AI rights or AI citizenship, which would be a dangerous distraction, he says. Consciousness, he points out, is at the core of human dignity and legal personhood; confusing this by attributing it to machines
risks creating new forms of polarization and deep social disruption. But what stood out most for me wasn’t the alarm over AI psychosis that some commentators have picked up on. It was Suleyman’s North Star. He says his goal is to create AI that makes us more human, that deepens our trust and understanding of one another and strengthens our connections to the real world. He describes Microsoft’s generative AI chatbot, Copilot, as a case study:
millions of positive, even life-changing interactions every day, carefully designed to avoid overstepping into false claims of consciousness or emotion. He argues that companies need to build guardrails into their systems so that users are gently reminded of AI’s boundaries, that it doesn’t actually feel, suffer, or have desires. This is all about making AI supportive, useful, and empowering without crossing into the illusion of personhood.
Now this resonates strongly in my mind with our recent FIR interview with Monsignor Paul Tighe from the Vatican. He too emphasized that AI must be in service of humanity, not replacing or competing with it, but reinforcing dignity, ethics, and responsibility. And it echoes strongly something I wrote following the publication of the FIR interview about the wisdom of the heart, the core idea that we should keep empathy, values, and human connection at
the center of AI adoption. It’s a central concept in Antiqua et Nova, the Vatican’s paper published earlier this year comparing artificial intelligence and human intelligence. So while the headline debate might be about whether AI can seem conscious, the bigger conversation, and the one I think we really should have, is how we ensure that AI is built in ways that help us be more human, not less. What strikes me is how Suleyman, Paul Tighe, and even our own conversations
all point in the same direction. AI should serve people, not imitate them. But in practical terms, how do we embed that principle in the way businesses and communicators talk about AI? Thoughts?
It’s an interesting conundrum, largely because we are told by experts like Ethan Mollick, the professor out of the Wharton School in Pennsylvania, who is one of the leading posters on LinkedIn about AI and AI research, that the best way to get great results from AI is to treat it like a human and engage in conversation with it. And
I find that to be true. I find that giving it a prompt and getting a response and leaving it at that is not nearly as good as a conversation, a back and forth, asking for refinements and additions and posing questions and the like. And the more we have conversations with it and treat it like a human, the easier it’s going to be to slide down that slope into perceiving it
to be a person. We’re hearing about a lot of people who do believe that it’s conscious already. I mean, not among the AI engineering community, but you hear tales of people who are convinced that there is a consciousness there, and there is absolutely not. But it mimics humanity pretty well, and it is gonna get much, much better at it.
And as Mollick said, at any point, the tool that you’re using today is the worst one you’ll ever use, because they’re just going to continue to get better. So getting people to not see them as conscious, I think, is going to be a challenge. And it’s not one that I think a lot of people are thinking about much. Looking at the
productivity gains and other dimensions of this, certainly looking at the harm, I mean, there’s a lot of conversation out there among the doomers, as they’re called, about what kind of safety measures are being considered as these models are evolving. But specifically this issue of treating it like a human, thinking of it
as a person with a consciousness: I don’t think there’s a lot of attention being paid to that and what the steps are going to be to mitigate it.
Yeah, interesting. I have great respect for Ethan Mollick, I must admit. I read a lot of what he says, but I utterly disagree with this whole point about how you must treat it as if it is a person. That’s completely and utterly counter to the whole notion of the wisdom of the heart, which I think is a magnificent way to look at this,
where in all your thinking the dignity of the human being is at the center of what we do with AI. So we do not pretend it’s like a human at all. It is a tool that we can build a relationship with, but we don’t consider it to be like a person at all. But it’s not about how it develops. The point is how we develop
Sure, there’s a difference between considering it…
in how we use this, not how it’s developing, because we are the ones who are enabling it to develop through all the tools and activities we carry on with. And the missing piece in all of that is: what about the people? What about the humanity here? Everyone who talks about this, and Ethan Mollick seems to be one of those too, talks about the benefits we get from using an AI. It’ll make more money. It gives us better market share.
We enable people to do these things better, et cetera, et cetera. And yet, reflecting on your report just prior to this, there are many people in organizations who feel ignored, who feel overwhelmed, who are unhappy with this. There’s not enough explanation of what the benefits are, and those tend to be couched as: these are the benefits for the organization and the employees who work there and the customers who buy our products and so forth. So I think
we have to develop a way of thinking that gives a different focus to this than we are being pressured to accept, I suppose you could argue. There are strong voices arguing this; I get that. And like you said, I truly find it extraordinary that there are people who say, yeah, they’re sentient, these are like humans. Not at all. They’re algorithms, a bit of software. That’s it. So
this is not about a Luddite approach to technology at all. It’s not about thinking, oh, it’s like the Terminator and Skynet and all that kind of stuff. No, not at all. It’s the moral and philosophical lens that is missing from all of this. And that is what we need to bring into our conversations about this: that element of it that is missing largely everywhere you look.
It is. I still think that most of the time I’m engaging with a model, I’m having a conversation with it. I mean, if I’m looking for a simple fact, I’ll go to Perplexity and get my answer. But if I’m developing a strategy, for example, which is something that I use AI to help me with, I’ll tell you, I have created a custom GPT that is a senior communication consultant. It took me about four hours
to build this out with all of the instruction set. I don’t have the budget to work with a consulting organization and there’s nobody who is higher in the hierarchy than me in communications where I work. So if I wanna bounce my ideas off a senior communications professional, I had to create one. So I did.
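For anyone curious what that looks like in code rather than in the custom GPT builder, here is a rough sketch of the same idea against the OpenAI chat API. The consultant persona text below is invented for illustration; it is not Shel’s actual instruction set.

```python
# A rough sketch of a "senior communication consultant" persona via the
# OpenAI API. The system prompt is illustrative, not Shel's actual
# four-hour instruction set. Assumes the openai package and an API key
# in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a senior communication consultant with decades of agency and "
    "corporate experience. Challenge assumptions, ask clarifying questions, "
    "and critique strategies against audience, channels, and measurement."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model would do
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Here is my draft strategy. What's missing?"},
    ],
)
print(response.choices[0].message.content)
```

The point of the sketch is the system prompt: the “consultant” is nothing more than instructions wrapped around a general model, which is exactly why treating it like a colleague comes so naturally.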
And I didn’t give it a name. I know Steve Crescenzo has one; he named his Ernie, after Ernest Hemingway. I didn’t name mine, but I’ll go have conversations with it about the strategy that I am considering. And it works really well, and it works best when I treat it like a consultant, when I have that conversation. That’s what I coded it to be. Well, I didn’t code it; I gave it the instructions. And I think it’s this behavior, on top of the fact that you have Character.AI and you have
Facebook and Meta introducing characters that you can engage with that are designed to be people. And you have the therapists now that are coming, AI therapists, and they’re all designed to behave and engage with you like people. And I don’t have a problem with that. This is a tool, and this is one of the things that it does well. But how do we keep front of mind among people
that while you’re doing this, you need to remember that it is not a person and it is not conscious? I just want to say that on our intranet, when I sign onto our network in the morning, I have to click OK on a legal disclaimer every single time I turn my company laptop on. Shouldn’t we have something like that, perhaps a disclaimer before you start interacting with these, that this is a very lifelike, human-like experience that you’re about to have? Keep in mind, it’s not.
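As a thought experiment, the guardrail Shel describes could be as simple as a wrapper around the chat loop. Here is a toy sketch, with the model call stubbed out; the reminder text and interval are purely illustrative.

```python
# Toy sketch of the disclaimer idea: show a "this is not a person" reminder
# when a session starts and again every few turns. The reply function is a
# stub; a real app would call its model there.
REMINDER = ("Reminder: you are talking to software. It does not feel, "
            "suffer, or have desires, however lifelike it sounds.")
REMIND_EVERY = 10  # arbitrary interval, purely for illustration

def chat_session(get_reply):
    print(REMINDER)  # shown once up front, like a sign-on disclaimer
    turn = 0
    while (text := input("> ").strip()) not in ("quit", "exit"):
        turn += 1
        if turn % REMIND_EVERY == 0:
            print(REMINDER)  # a periodic nudge, not a one-time click-through
        print(get_reply(text))

if __name__ == "__main__":
    chat_session(lambda text: f"(model reply to: {text})")  # stubbed model
```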
Well, that’s the whole point.
No, absolutely. I do the same, Shel. I’ve talked about this a lot over the last couple of years on this show and elsewhere. I treat my ChatGPT assistant like a person, but I do not see it as one. I do call it by a name; Jane is what I call the ChatGPT one. But I don’t see it as a real person at all. Far from it. I’m astonished, frankly, that some people would think, this is a person I’m talking to. Come on, for Christ’s sake, it’s an algorithm.
Yet it enables me to have a better interaction with that AI assistant if I can talk to it the way I do, which is like I’m talking to you now, almost the same. But the bit that’s missing, and I think this is the heart of what Paul Tighe was talking about, quoting from Antiqua et Nova, and I think this is the core part of the reflection on all this: we must not lose sight of the wisdom of the heart,
which reminds us that each person is a unique being of infinite value and that the future must be shaped with and for people. And that has got to underpin everything that we do. And as I noted in the kind of rambling post I wrote (which was actually better than my first draft, I must admit), it’s not a poetic flourish, it’s a framing. That’s the thing that we’re missing. We mustn’t see
AI as a neutral tool. It’s not, really, because we shape it, and we need to encourage critical reflection on human dignity. Wisdom can’t be reduced to data. The Vatican says that ethical judgment comes from human experience, not machine reasoning. I totally agree with that. So, I mean, this is to me the start of this conversation, really. And I think the kind of wisdom,
or the thinking, certainly not wisdom it seems to me, the thinking that is the counter to that, such as what you outlined, is very powerful and is embedded almost everywhere you look. So I looked at this myself and thought, OK, fine, I’m not going to evangelize this to anyone at all. I know what I’m going to do as far as I’m concerned. And that made me feel very comfortable: I’m going to follow the principles of this myself, which I have been doing for a while now. It is, in a sense, reflective: in the world of algorithms and automation, what does it mean to remain human?
So I’ve changed how I use the AIs, I must say, and maybe ChatGPT-5 happened at the time I started making that change. That is something I’ve started talking to people about. Have you thought about this? How do you feel about that? And seeing what others think. And I’ve yet to encounter anyone who would say, this is amazing, what that’s saying makes total sense to me, let’s do this. No one I’ve talked to is saying that. So
it’s something that I think the interview we did, others that Paul Tighe is doing, and what I’m seeing increasingly as other people start to talk about it, is the framing of it within this context. That’s where I think we need to go. We need to bring this into organizations. So, an invitation to reflect, let’s say: yes, this is great, what’s going on, and you’re doing this, but you need to also pause and think about it from this perspective as well. That’s what I think.
I would not disagree. And a lot of the development that’s happening in AI is focused on benefiting humanity. I’m looking at the scientific and medical research that it’s able to do. I mean, just AlphaFold, which won the Nobel Prize for Demis Hassabis, is there to benefit people. Where it’s probably benefiting people less is in business,
because while you say that it needs to benefit people, I think most business leaders think it needs to benefit profitability. And that could be at the expense of people.
Well, it’s actually not about
benefiting people in that sense, because yes, it is. It’s about reintroducing, in a sense, conscience, care, and context into thinking about what AI can do, as related to efficiency, scale, and all those business benefits. That’s not people-oriented at all, no matter how they dress it up, saying, well, you employees are going to be more effective. No, it means that our share price will go up if we’re a publicly listed company, we’ll get paid more money, and all that kind of stuff. That’s what drives all of that, it seems to me.
And I’m not saying it’s wrong, not by any means. In a capitalist economy, for instance, as we’re all in, it isn’t wrong. But it’s missing this part of the jigsaw puzzle. And it’s hard to quantify it. I know one person who had a conversation with someone who said, give me the ROI on this. I thought, whoa, right there, that’s the wrong way to think about this. But we have to. And I think this is really just, I would say, an invitation to reflect on how you’re thinking about this, not necessarily to
change it, but to reflect on it, to bring into this the question of what it means to remain human in this world of algorithms and automation, where things move so fast and the ROI acronym is right there in the middle of it.
Yeah, it reminds me of the late, great Shel Israel asking, what’s the ROI of my pants? Remember that?
Do we need ROI on everything?
He would have loved the wisdom of the heart,
Yeah, he was very skeptical of the need for ROI for everything. Hence, what’s the ROI of my pants? Of course, somebody came up with the ROI of pants. I remember that too. Insofar as determining what would happen if he went to work without wearing any versus the cost of pants for a year. Yeah. All right, well, let’s move away from
There’s some ROI there, that’s a fact, yeah. Cool. Yep.
AI and talk about more traditional public relations matters. The term rent-a-mob gets thrown around a lot in political discourse, usually as a way to delegitimize real opposition. But behind the rhetoric, there’s a very real, very troubling practice of paying people to pose as protesters to create the illusion of grassroots support.
And that practice is alive and well, and some firms, including companies that present themselves as PR or advocacy agencies, provide it openly. Crowds on Demand, for example, has made no secret that it will recruit and script protesters, calling the service advocacy campaigns or event management. I thought event management was like hiring the band and making sure the valet people showed up on time.
If all this sounds like a modern twist on an old tactic, it is for sure. From free whiskey in George Washington’s day to front groups created by Big Tobacco in the 90s, engineered public opinion has a long history. What’s new is the professionalization of the practice. Today, you can literally hire a firm to stage a rally, a counter-protest, or a city council hearing appearance. It’s a service for sale, and the bill goes to the client. Legally,
This all sits in a very gray zone. U.S. law requires disclosure for campaign advertising, for paid lobbying, but there’s no equivalent requirement for paid protesters. If you buy a TV ad, you have to disclose who paid for it. If you hire lobbyists, they have to disclose who they’re working for. But if you pay 200 people to show up at City Hall and protest, there’s no federal law that requires anyone to disclose that fact. That’s the protest loophole. Ethically, though,
There is no gray area whatsoever. PRSA’s code of ethics is clear. Honesty, disclosure, and transparency are non-negotiable. The code explicitly calls out deceptive practices like undisclosed sponsorships and front groups. IABC’s code says much the same. Accuracy, honesty, respect for audiences. Paying people to pretend to care about a cause or policy fails those tests.
The fact that it’s not illegal doesn’t make it acceptable. It just makes it a bigger risk for the profession because when the practice is exposed, as it inevitably is, the credibility of public relations is what takes the hit. And it does get exposed. In one case, retirees were recruited to hold signs at a protest they didn’t understand. In another, college students were promised easy money to show up and chant at a rally.
These are not grassroots activists. They’re actors in somebody else’s play. And when the story surfaces in the press, it’s not just the client who looks bad. It’s the agency and then by extension, the rest of the industry. So let’s be clear. Rent-a-mob tactics are not clever. They’re not innovative and they’re not public relations. They are deception. They turn authentic public expression into a commodity and they undermine democracy itself.
If our job is to build trust between organizations and their publics, this is the opposite of that. Here’s the call to action. PR professionals must refuse this work. Agencies should set policies that forbid it and train staff on how to respond if they’re asked. Use the PRSA code of ethics as your shield and point to IABC standards as backup. And don’t just say no; educate your clients about why it’s wrong and how badly it can backfire,
because agencies can get pulled into this even without realizing it. A subcontractor or consultant may arrange the crowds, but the agency’s name is still on the campaign. That’s why vigilance is critical. Build those guardrails now. At the end of the day, this comes down to the disconnect between what the law allows and what ethics demands. Just because a tactic falls into a regulatory loophole doesn’t mean we should touch it. The opposite.
is true. It means communicators must hold themselves to the higher standard, because public trust is already fragile. If we let paid actors masquerade as genuine voices, we’ll find we have no real voices left at the end of the day.
So the word that comes readily to my mind, listening to what you were saying and then looking at some of the links, is astroturfing. Remember that? I mean, that was a big deal. I remember you and I talking about that a lot in the first few years after we started this podcast, from 2005 onwards. I remember a couple of campaigns being run by PR bloggers, as that was the primary social network at the time, to address it.
So nothing’s really changed. I mean, one of the links you included was from a woman called Mary Beth West, who wrote a post just a couple of days ago, where she’s actually…
We’ve had Mary Beth West on the show, by the way. Yeah.
So she’s criticizing very strongly PRSA, in the US primarily, for remaining silent on the issue, and she says they are therefore complicit. That’s quite a strong accusation, but…
Right, right, okay. But I just wonder why it is that, from a communications perspective, whether it’s PR or another element of communication, these sorts of issues pop up and yet repeat what was going on decades prior. AVE is a great one, advertising value equivalence; that was banned by professional bodies well over 15 years, maybe two decades, ago, and yet people still use it.
So what is it about this that we can’t seem to… it’s like whack-a-mole, something else pops up all the time. So this astroturfing version 6, let’s call it, because there’s got to be at least five versions prior to this, how do we stop it?
I don’t know other than to demonize it within the industry and to call it out when we see it. The fact that it happens empowers people to accuse legitimate protesters of being rent-a-mobs. The protesters show up, they demonstrate, it gets news coverage and the opposition says, they were all just paid. They have no evidence to support that. But because
people in their audience know that this actually does happen, they at least suspect that it might be true. So it makes it really easy to dismiss the voice of one segment of society that has chosen to take to the streets or to come to the city council meeting or whatever in order to express themselves and be heard. And I think as…
some of these reports say that’s very, very dangerous for democracy. So there are a number of reasons that we need to call this out as inappropriate as a profession and to disassociate this practice from the practice of public relations.
So who should take the lead on that?
Well, I don’t know, but PRSA and IABC, CPRS, CIPR, the Global Alliance, all of the professional bodies should be pushing this hard, I think.
So, a CTA for the professional bodies, then: you need to pay attention to this. We’d love to hear from anyone at any of those bodies you mentioned. Offer a comment on what you think about all this and what you should be doing. Is it their call? How do we persuade members of those organizations to consider this and pay attention to this issue? A call to action, then.
Thank you, Dan. Great report. I think the approach Mastodon is taking to quote posts is interesting. I’m not sure I am a big fan of the user control concept. It seems to me that that is a bit of censorship. If I say something in public, anybody is welcome, and free, in a free society anyway, to riff on that,
to disagree with it, to pull my quote and say, look what this idiot said, you know? And to put it in the hands of the person who created the quote to determine whether somebody can do that on a social platform. I’m not sure I’m a big fan of that. I’m gonna need to give that one more thought and read more about Mastodon’s rationale. So I’ll be reading the links that you shared, Dan, but thank you, great report.
So there’s a fascinating and pioneering move happening in Denmark right now. The government there has proposed changing copyright law so that every citizen has the right to their own likeness, their body, their face, and their voice. In practice, this would mean that if someone creates a deepfake of you and posts it online without your consent, you could demand that the platform takes it down. The idea is to use copyright as a new line of defense against the spread of deepfakes.
Unlike existing laws that focus on specific harms, such as non-consensual pornography or fraud, Denmark’s approach is much broader. It treats the very act of copying a person’s features without permission as a violation of rights. Culture Minister Jakob Engel-Schmidt put it bluntly: human beings can be run through the digital copy machine and be misused for all sorts of purposes, and we are not willing to accept that. The law, which has broad political support and is widely expected to pass,
would cover realistic digital imitations, including performances, and allow for compensation if someone’s likeness is misused. Importantly, it carves out protections for satire and parody. So it’s not about shutting down free expression, but about addressing digital forgery head on. Supporters see this as a proactive step, a way of getting ahead of technology that’s advancing far faster than existing rules. But here’s the catch.
Copyright law is a national law. Denmark can only enforce this within its own borders. Malicious actors creating deepfakes may be operating anywhere in the world, well outside the reach of Danish courts. Enforcement will depend heavily on cooperation from platforms like TikTok, Instagram or YouTube. And if they don’t comply, Denmark says it will seek severe fines or raise the matter at the EU level. That’s why some observers compare this to GDPR, the General Data Protection Regulation.
a landmark idea that set the tone for digital rights, but struggled in practice with uneven enforcement and global scope. Denmark is small, but with the six-month presidency of the European Union that it assumed on the 1st of July, it hopes to push the conversation across Europe. Still, the reality is that this measure will start as Danish law only, and its effectiveness will hinge on whether others adopt similar approaches. So we’re looking at a bold test case here. Can copyright law,
with all its jurisdictional limits really become the tool that protects people from the misuse of their identities in the age of AI.
Maybe. It kind of worked with Creative Commons, didn’t it? The whole idea there was that the Creative Commons license had to be defensible in any country. So they worked to make sure that it would…
qualify under every country’s law. And the first test, as I recall, was actually Adam Curry: something he created was used by an advertiser in a bus stop poster in the Netherlands. That could be. And he took it to court and won on the Creative Commons license. So maybe
I think it was a photo of his daughter or one of his children. Yeah.
A broader approach like that as opposed to country by country would be the way to use copyright to deal with this. Otherwise, you’re looking at every country implementing their own laws and many won’t.
The trouble with Creative Commons, though, is that you’ve got the license. A, it’s voluntary, apart from anything else. B, it still requires the national legal structure in a particular country to hear a case that’s presented to it. So that’s no different than if it were national law. And in Curry’s case, he didn’t get any money out of it. He got a victory, almost a Pyrrhic victory, but didn’t get any compensation.
But the examples of success with Creative Commons are very few and far between. And I think part of the problem, actually, is that it’s still relatively rare that I’ll find anyone who knows what Creative Commons is. I mean, we’ve had little badges on our blogs and websites for 20-plus years. And, you know, I don’t see it on businesses, on media sites, nothing. I don’t see it at all anywhere other than among people who were kind of involved in all this right at the start.
So it’s a challenge to do this. And I think the key is, would it get adopted by others? And I think it’s going to require a huge lift to make that happen. And maybe the example of Denmark might be good if they were able to show some successes in short order addressing this specific issue about deepfakes in particular.
So it’s a great initiative, and I really hope it does go well. It’s not law yet, but from what I’ve been reading, the expectation is extremely high that it will become law. And if they’re leading the EU in this next six months, the rest of the year, then they’ve got a good opportunity to make the case within the EU for others to do this. So it wouldn’t surprise me if one or two more countries might adopt this as a trial. Then you’d have three doing it, let’s say.
Will it make any difference? Let’s see. Don’t write it off at all. GDPR has been held up as kind of the exemplar regulation, state regulation, on data protection. And it’s had uneven enforcement and issues with global scope, I agree, and as for the penalties, no one’s collecting money; it’s a huge deal to do that. But it’s still in place, and it does have
an effect on other countries. The US in particular has all sorts of things about, you know, if you’re doing business in the EU, you need to pay attention to this and do all that kind of thing. You don’t have the freedom to do things as you did before. So it’s generally seen, I believe, as a good thing that it happened. But, you know, we’re at that stage where technology is enabling people to do not-so-good things, like deepfakes.
And so there is no real protection against that, it seems to me. I think the real trick will be compliance by the social media platforms. If they are found culpable of hosting an image or a video or whatever and not taking it down when they’re notified, they’ll get severe fines. I’m not sure what that means, but we need to see an example being made of someone. We haven’t seen that yet anywhere.
No, we haven’t. And this is all part of the broader topic of disinformation. We just had Hurricane Erin, and there were actually warnings on some of the news sites I saw that there were deepfakes of the storm that were leading people to make bad decisions about what to do for their safety. So
you know, this is happening faster than organizations, media outlets, and political organizations and the like are able to figure out how to deal with it. And it can have fatal consequences down the road. There was, I guess I heard there was, one image. I didn’t see it, but somebody told me about it: a massive wave breaking over a road with cars trying to get out of town, and a whale coming out of that wave. That’s sort of what gives it away at the end, but
you’ve got to be vigilant yourself. And I think, in light of the realities of this, you have to be vigilant. It’s easy to say; what does that actually mean? How can you be really vigilant? Good example. I’m sure you’ve seen this, Shel: the meeting on Monday last week between Trump and the leaders of the EU and Zelensky from Ukraine. An image was posted in many places online,
US media in particular and social networks, X notably, showing, like a photo, all the European leaders sitting in chairs in a kind of hallway outside Trump’s office, waiting to be called in to see him. And the stories I read about this said that’s how they were treated. Yet you don’t even need to look too closely at the image. Giveaways: like, the second person along has got three legs. And the linoleum pattern on the floor.
No, no, no, he really does, you know.
Yeah, the pattern on the carpet, or the linoleum on the floor: as it got further from your vision, it got a bit blurred and the lines weren’t straight; something wrong with it, you know. That surely would give you pause, but no, people were sharing this all over the place. It shows you people don’t really pay attention too closely. They look at the hit factor for them: I’ve shared something cool and 50,000 people go and view it.
That’s a cultural thing that isn’t going to change anytime soon, unless changes happen to how we do all these things. So this is just another element in this hugely disruptive environment we’re all living in with technology, enabling us to do all these things that are nice until the bad guys start doing them. And that’s just human nature. Sorry, that’s how it is. So before you click and share this thing, this is now logic talking to reasonable people.
Just be sure that you’re sharing something real. I shared something recently, I forget what it was now, but I deleted the post about 10 minutes after I sent it on Bluesky. And I then wrote another post saying I had done that because I was taken in by something I saw, and I should have known better, because I normally don’t do this, but I just shared it. I don’t know why I did that, even. I was having my morning coffee and I wasn’t paying attention too closely. So that’s the kind of thing that can trip people up.
This is what’s going on out there. So I think this thing that Denmark’s trying to do is brave and very bold and I hope they get success with it.
or at least it leads to other ideas that work.
Well, the pendulum always swings. One of the hardest questions communicators are facing today is whether their company or client should speak out on a contentious issue or stay silent. Silence was the default once upon a time, but research shows that in many cases, silence carries its own risk. Wharton research, published just this month, found that silence backfires most when people expect
a company to speak and believe it has a responsibility to do so, which, let’s face it, is why we were advocating for companies to take positions on certain issues under certain circumstances for many years, supported by research like the Edelman Trust Barometer. A separate Temple University study of the Blackout Tuesday movement showed that companies that stayed quiet faced real backlash on social media. But the consequences aren’t uniform.
Sometimes silence has little visible effect, at least in the short term. Take Home Depot. Just last week, immigration raids targeting day laborers took place outside stores in California. Reporters reached out to Home Depot for comment. Home Depot chose not to respond. So far, investors don’t seem to care, and the stock hasn’t suffered. But employees, activists, and customers who see this issue as central to the company’s identity
Well, they may feel differently. Silence can create space for others to define your values for you. This tension between internal and external audiences is critical. Employees are often the first to expect their employer to speak out, especially on issues that touch human rights, diversity, or workplace fairness. Silence can erode engagement and retention. Externally, it’s more complicated. Some customers or policymakers may punish a company for taking a stand,
Others may punish it for not taking one. And I’m thinking now of Coors Light, with the one can that they made for the trans activist, and that created polarization: people who said, we’re going to go out and buy Coors no matter how bad it is, just to offset the people on the right who are boycotting it.
In Europe, where stakeholder governance is stronger, there’s often a higher expectation that companies will weigh in. In the U.S., polarization makes every move fraught. Either way, communicators can’t afford to pretend that silence is neutral. It’s a choice, and it has consequences. So the question is, how do you decide? Well, here’s a simple decision framework. Start with expectations. Do you have stakeholders who believe your company should have a voice here?
Next, consider the business nexus: does the issue intersect directly with your operations, employees, or customers? Timing is important: is there an urgent moment where absence will be noticed, or is this more of a slow burn? Authenticity matters: do you have a track record that supports what you’d say, or would a statement ring hollow? Then look at consistency: have you spoken on similar issues before? If you break the pattern, can you explain why? People notice.
And finally, consider risk tolerance: how much reputational risk can the organization realistically absorb? Sometimes, after applying this framework, silence might still make sense, but there’s a way to be silent well. It starts with transparency inside the organization: explain to employees why the company isn’t taking a public stance. Reinforce the company’s values in operational ways, through hiring practices, supplier standards, and community investments. Brief key stakeholders privately so they’re not blindsided, and set monitoring targets so you can pivot if the situation escalates. For communicators, here’s a quick checklist to keep handy: map stakeholder expectations, test the business nexus, pressure-test your authenticity and consistency, advise on operational actions that back up values, and plan both the statement and the silence. Corporate silence
doesn’t have to mean cowardice, and speaking out isn’t always virtuous. But both are strategic choices, and both can have lasting impact on trust. Communicators are the ones who can help leaders cut through the noise, weigh the risks, and make sure that whichever choice they make, voice or silence, it’s intentional, transparent, and aligned with the values the company claims to hold.
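For listeners who like to see this systematized, here is a minimal sketch of how the six framework questions could be turned into a rough go/no-go check. This is purely our illustration, not anything from the Wharton research; the criteria names, the equal weighting, and the threshold are all hypothetical and would need calibrating to your own organization.

```python
# A rough, illustrative go/no-go check for the speak-or-stay-silent framework.
# The criteria, equal weighting, and the threshold are hypothetical placeholders.

FRAMEWORK_QUESTIONS = {
    "expectations":   "Do stakeholders believe the company should have a voice here?",
    "business_nexus": "Does the issue intersect operations, employees, or customers?",
    "timing":         "Is there an urgent moment where absence will be noticed?",
    "authenticity":   "Does your track record support what you'd say?",
    "consistency":    "Have you spoken on similar issues, or can you explain a break?",
    "risk_tolerance": "Can the organization absorb the reputational risk of speaking?",
}

def recommend(answers: dict) -> str:
    """Count the yes answers; a clear majority points toward speaking."""
    yes_count = sum(1 for key in FRAMEWORK_QUESTIONS if answers.get(key))
    if yes_count >= 4:
        return "speak: silence is likely to be noticed and read as a choice"
    return "stay silent, but do it well: brief stakeholders, monitor, be ready to pivot"

# Example: strong expectations, nexus, and timing, but a thin track record.
print(recommend({
    "expectations": True, "business_nexus": True, "timing": True,
    "authenticity": False, "consistency": False, "risk_tolerance": True,
}))  # -> speak: silence is likely to be noticed and read as a choice
```

Crude as it is, the value of scoring rather than debating is that it forces the team to answer each question explicitly instead of jumping straight to gut feel.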
Yeah, it’s a complicated story, I think, Shel. Actually, paying attention to one of the links you put in our Slack channel about Home Depot, what I’m reading in this report from NPR is quite staggering. It goes into some detail describing the customer base. They talk about day laborers. Now, I don’t know what that term means; we don’t have that term here. Does that mean you’ve hired someone…
Yeah, they hang out in the parking lot by the driveway, and if you have a home project and need some help, then as you’re pulling out of the parking lot with all the stuff you’ve bought, you’ll pick, say, three of them, and they’ll hop in the car and come home with you and do the work you direct them to do, and you pay them cash. That’s how they make a living. That’s day labor.
So it’s cash-based, you have no idea who they are, and you let them into your home. A bit risky, I would say. I mean, we have a system here, although I don’t think we call them day laborers; they’re handymen or something like that, DIY help, whatever. Companies set up websites where you post your needs and someone says, yeah, I can do that for you, and gives you a quote. And I’ve used someone like that in the past.
I know TaskRabbit doesn’t really work here at all; it exists, but I think in one city only. And there are other equivalents to TaskRabbit, and some of these local websites are really good. I used someone not long ago: I hired someone to do a job and had them go to the DIY store to pick up the materials, buy the stuff and all that. So I kind of get it. Anyway, the NPR piece says day labor sites have sprung up
as Home Depot grew, and day laborers became a big part of its customer base, if you like, as a direct result. But this bit leapt out at me: according to NPR, Home Depot workers have begun trading tales of raid impacts on Reddit. Some claim fewer contractors are visiting and stores are struggling to meet sales goals. Others say it’s business as usual and sales are booming. So it’s a mixed bag.
But that’s going on, and to me that would be a huge alarm bell for the company: they’ve zipped their lips in public and internally, and meanwhile their employees are talking like this. That signifies quite a few things. They quote one example, again another alarm bell, in Los Angeles this time, after a raid by the immigration police. This person
talked about the parking lot. It was always full, she said; right now, though, there are so many spaces, there’s hardly anyone here. This woman runs a housekeeping business and usually sends her employees to stock up on cleaning supplies or liquids for a pressure washer, but today, for the first time in a while, she herself was out at the Home Depot. Why? Because they’re afraid to come, she said; they’re afraid to be here. That’s not good at all, that kind of environment. So
if I were Home Depot, I mean, I wonder, tell me what you think, Shel. They should be paying attention to that, I would have thought.
Well, they should. Their stakeholders have a clear interest here. Their customers are the ones who hire these people as they’re pulling out. For many folks it’s become a service, even though Home Depot doesn’t specifically provide it, and they’ve done nothing to keep it from growing into something you expect to see at a Home Depot parking lot. It’s part of the ethos now.
Their employees care about it. So it gets back to that little framework for deciding whether you’re going to say something: is there an expectation, and does it intersect with your business? In this case, the answer to both of those questions is pretty clearly yes. Now, what they would say, I don’t know. Home Depot’s founder was famously very, very right-wing on the political spectrum.
And it is my understanding that that political preference continues to be part of the DNA of the organization’s leadership. So they may be fully supportive of immigration raids, but coming right out and saying, yeah, we’re glad to see these people get swept up, might not sit well with customers who have come to rely on them. So…
You know, these are things that make me very happy that I’m not doing public relations for Home Depot. But just sitting back and saying nothing seems to me to be a bad choice.
Yeah. And then look at the business imperatives on this. NPR says in its report that investors so far have shrugged off the immigration spotlight on the company; Home Depot’s stock price is at its highest since February. So there’s no pressure from that point of view.
Right, well, the article pointed out that this is the short-term response to the silence, not the long-term response. We’ll see. I mean, if the people posting to Reddit who work in the stores are right that contractors aren’t going there and that the parking lots are not as full as they used to be, you could have longer-term problems arising from this.
Well, you’ve got some alarm bells ringing there, I would say, with that going on. But this just illustrates to me the huge complication of saying something or not. If you do, what do you say? If you don’t, what don’t you say? I mean, in a sense you can’t not say anything, although I guess that’s what they’re doing, and that doesn’t seem very healthy for relationships internally. Because this kind of thing, from what I observe across the Atlantic here in the UK,
seems to be getting worse in America: the immigration raids, the uncertainty, the cruelty, the awfulness of it all. That doesn’t look like it’s going to diminish any time soon, and if anything, from what I’ve been reading, it’s going to get even worse.
It is. They have set quotas for the number of seizures, and they’re going after anybody for anything. This notion that it’s the worst of the worst, the murderers and the rapists and the like, is ridiculous. In fact, my friend Sharon McIntosh just shared a photo of somebody being grabbed up by ICE who is a janitor in her church.
She says he’s the neighbor everybody counts on to come over and help them fix things. He’s a great father and husband, a great member of the community, and he has a side-hustle business. He is what you want in a member of your community. And yet he was grabbed up by ICE. So yeah, there’s a reason that downtown LA is dead. People are afraid to be out, and that’s affecting the people who sell them things when they come out to shop and
live their lives. So this is going to have long-term fallout, for sure. And I think that for the organizations at the heart of this, the fact that they’re saying nothing leads people to see them as maybe cowardly or maybe complicit. You have to think about the consequences of silence. And that’s what the article I drew this report from
makes clear, that article from the Wharton School. I quote the Wharton School more these days; it’s really become a source in a way it wasn’t when we started this show. But in any case, use a framework. Don’t just ask, should we say something about this or not? Use a framework to reach a good, logical decision.
Yeah, and I’m thinking as well, OK, what’s going to happen? Let’s just use Home Depot as an example here. When, not if, when someone in the old media, let’s say, or someone in the new media landscape publicly asks a question of them: what are they going to do about X? You know, what about that guy who was beaten up in the parking lot of your store in LA? What are you going to do about that?
What are they going to say? So I’m wondering, and this is kind of straying into the area of pre-crisis communication planning, perhaps, but have they got a what-if scenario plan? I wonder.
Home Depot is saying nothing; the media have been calling, and they have not been responding. And you have to understand two things about the media in a situation like this. One is that when you don’t say anything, they’re not going to shrug and go, okay, then we won’t report anything. You’re going to hear in the media that you did not respond to requests for information. The second thing to be aware of is the fact that
the media will go after secondary and tertiary sources of information in the absence of your comment, and that may not be what you want heard.
But none of that is happening yet. So when it does happen, are they ready? I mean, you say the media are pursuing them, but no one’s talking about that at all; I don’t see any reporting about it. What I would expect to see reporting about is someone with a lot of influence online, as perceived by whoever, frankly, asking a question that gets amplified widely and picked up everywhere: this happened in the parking lot at Home Depot, and this is what this person said, and they embed the video, you know, that he recorded. And the company is silent.
That’s what I’m talking about.
And that’s what’s happening over there.
Yeah, I mean, there are all kinds of videos from people in parking lots and places where these raids are happening, and I’ve heard no comment from the institutions involved, Home Depot being at the top of the list.
In that case, we’re waiting for that hugely influential person to put this in the spotlight. It hasn’t happened yet, but it’s a when, not an if, I feel pretty sure.
Yeah, whoever that may be.
Mr. Beast, that’s who needs to do it.
Exactly. All right, so this, I think, is our final topic for today’s episode, and it’s one I thought worth exploring.
I saw quite a lively discussion in the marketing and PR community on Facebook. It highlights an issue that many communicators may think they understand but often don’t fully appreciate: using images online. By the way, I’m anonymizing the post that kicked it off, because it’s a private group; unless you’re a member of Facebook and a member of this group, you can’t see the content. So I’m not going to mention anyone’s names, but the story is quite
interesting. The post came from someone who had just received an email from PA Images, one of the photo agencies here in the UK, demanding £700 for using one of their photos. The image had appeared in a blog post more than two years ago, and the author noted that the photographer had been credited. They thought this counted as fair use and were shocked to discover it didn’t. They’d since removed the image and asked the group whether this was simply an expensive lesson,
or if there was room to negotiate. Well, that prompted a flurry of comments. One person pointed out that the cost of the fine could have paid for multiple original photos, properly licensed for unlimited use; a reminder that investing in photography upfront can save headaches later. Another comment stressed that fair use is an American legal concept; in the UK, what we have is fair dealing, and crucially, it doesn’t apply to photographs in this way. Using a photo without explicit permission or a license is infringement.
At best, you might negotiate the charge down to what the license fee would have been. Others shared their own experiences. One person described how AFP, the French news agency, charged their organization £270 for an old image that had been carried over from a previous website. They’d apologized, paid up, and then run copyright training for their team to avoid repeat mistakes. Another said they’d removed an image straight away, but the agency still produced a screenshot of the original post
and pursued them for payment anyway. The practical advice that emerged was fairly consistent. If you don’t have written permission or a license, you are liable. Remove the image immediately, apologize, and then try to negotiate; some suggested starting at a quarter of the asking fee. Keep detailed records of where every image comes from and the terms of its license. There was also a broader ethical undercurrent. Some respondents had little sympathy,
saying that too many people still think photos are fair game online when they aren’t. One even noted that their partner, a photographer, often earns more from infringement settlements than from people licensing his images in the first place. The original poster clarified that their agency normally does hire photographers and pays them fairly; this was an old blog post that predated the agency, and they genuinely wanted advice rather than sympathy. Still, they accepted that it was a mistake that would cost them money.
So the takeaway here is clear. Crediting a photographer is not the same as having permission. Unless you have a license or an explicit written agreement, you’re exposed to claims. And with agencies increasingly using bots and reverse image search to enforce copyright, the risk of being caught is only growing. For communicators, it’s a sharp reminder that visuals are not free to use simply because they’re online, and that professional practice means treating images with the same respect as written content.
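Incidentally, the bots Neville mentions typically work by comparing compact perceptual fingerprints of images rather than exact files, which is why resizing or recompressing a photo won’t hide it. Here’s a minimal sketch of that kind of matching, assuming the third-party Pillow and imagehash packages; the file names and the match threshold are hypothetical.

```python
# Minimal sketch of perceptual-hash matching, the kind of technique
# image-tracking bots use to spot reuse even after resizing or recompression.
# Requires: pip install pillow imagehash   (file names here are hypothetical)
from PIL import Image
import imagehash

licensed = imagehash.phash(Image.open("agency_original.jpg"))
suspect = imagehash.phash(Image.open("blog_post_copy.jpg"))

# Subtracting two ImageHash values gives the Hamming distance between the
# 64-bit fingerprints; zero means identical, and small values almost always
# mean the same underlying photo.
distance = licensed - suspect
if distance <= 8:  # the threshold is a hypothetical tuning choice
    print(f"Likely match (distance {distance}); flag for a licensing check.")
```

The practical point is the same as Neville’s: deleting or lightly editing a copied image after the fact doesn’t make it undiscoverable.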
Just one more reason to be using AI to generate your images.
Yeah, but it all comes down to that, doesn’t it? It’s so easy, it’s there, people have been doing it for a long time, and suddenly this comes out and bites you in the bum, it will, I tell you.
I’m guilty. I used to, when I was an independent consultant, which I haven’t been for nearly eight years now (good God, time really does fly), do an email newsletter. It was a wrap of the week’s news in communication and technology, you know, as we continue to do for this show and for blog posts.
I used to subscribe to your newsletter. I remember it.
Yeah, it was called the Friday Wrap. And at the top of the Friday
Wrap, I always had an image of something that was wrapped in one way or another, just to play off that whole wrap concept. And I was always out there searching for images of something wrapped: there were trees that were wrapped and buildings that were wrapped and vehicles that were wrapped. And I just grabbed them and put them at the top of my newsletter without giving any thought to where they came from. If I were doing that today, I would definitely have AI
produce an image of something wrapped, because I certainly didn’t have a budget to pay for photography. The only reason I was able to do that was because the images were online, but it was not right, even though, for most of those images, I seem to recall using the Creative Commons library of images, which is still available. But…
I was similar to you, Shel. I was doing exactly the same thing, but I always used to credit the source, thinking that was fine; I didn’t feel guilty. Even if a website where I saw a great picture said copyright, blah blah, all rights reserved, I’d think, well, I’ll give them a credit and a link to their website and I’ll be OK. I fell foul of that only once, back in the early days, about 2007. Reuters
wanted me to remove a picture I’d used from one of their agency reports in a blog post. I got into a conversation with them, and eventually they agreed: okay, fine, but don’t do it again. And I thought, okay, that’s interesting; they were really early in on that. We know it’s not right. And indeed, that’s the point of the first bit of practical advice from the thread: if you don’t have explicit permission or a license,
you are liable. There are no ifs or buts, no gray areas; it’s black and white. So I tend not to use a lot of AI-generated images. I subscribe to Adobe Stock, which is a good library, and I use Unsplash; I pay for the premium version, which gives you pictures that aren’t in the free tier. I tend to go big on metaphor-type pictures, and there’s loads of stuff for that. But I’m also always looking for someone who says,
you know, Creative Commons, for instance, and that’s great; Flickr is still a good source. But if you’re in a large enterprise and you’ve got multiple things going on, that’s not really a practical approach, and you’ve probably got a license with Shutterstock or one of the big image-licensing firms, so that’s fine. But for small to medium-sized businesses, individual consultants, and so forth, this is an easy trap to fall into. And again, it’s just remembering that
key advice point: if you don’t have permission or a license, you’re liable. So don’t do it.
I have a subscription to Dreamstime, a stock photo service. And people will say, I can tell if something was produced by AI; I can tell if something came from a stock photo service. Those stock photos do tend to have that stock photo stamp on them, you know, that look. Oh, come on, the computer key that has the word that is…
But I find it depends on the image.
You wouldn’t use stuff like that. I certainly don’t. I mean, you see them, the kind of happy, smiling group in a business meeting. People are not like that in the workplace. Yeah, yeah. So don’t use those. No, don’t use those.
Or the hand writing on the board, or the hands raised. Yeah, no,
I use AI to create images. I mean, we’ve changed our intranet platform, but the one we changed from required an image for every article, a hero image. And if it was about a project, that was easy; we had photos from projects. But if it was about a more abstract concept,
you know, it was either a stock photo service or we were stuck. But now I can come up with an image that just perfectly conveys the idea we’re trying to get across. I remember I did one, an article on what goes into developing a proposal for a commercial construction project. We’re talking about, you know, two, three, four hundred million dollars of project cost.
These are big deals, and the proposals take weeks, months to put together. And I think there’s a lack of appreciation for what the business development team goes through when they’re putting these together. For the hero image, I had a group of people who are clearly office workers, not out in the field building. They’ve got their laptops and their tablets and their phones out, but in the middle of the table, rising out of it, is the building they’re pitching. It was an ideal image.
Another thing I use this for is our April Fool’s article, which was always about something completely ridiculous. One year I did an April Fool’s article about a new sustainable building material: chewed bubble gum scraped off the undersides of desks and railings. Not only is it sustainable, but it’s minty fresh, that type of thing.
And I was able to have a building that was built out of bubble gum, and looked real, to accompany that article. So I get a lot of use out of generative AI that I couldn’t get otherwise. I mean, if I had a budget for artists and photographers, I’d be using that; I’d hire photographers, I’d hire graphic designers. But I don’t have the budget, so this works. It’s a great alternative.
So the takeaway from all of this: if you don’t have written permission or a license, you are liable. So get permission or a license, or use a generative AI approach. Pretty clear.
Works for me. And that will bring us to the end of this episode of For Immediate Release, episode 478, the long form episode for August 2025. Our next long form episode will drop on Monday, September 29th. We’ll be recording that on Saturday, the 27th. And until then, Neville, have a great couple of weeks until we get together for our next short midweek episode.