
For a while, businesses were flexing their social responsibility muscles, weighing in on public policy matters that affected them or their stakeholders. These days, not so much, with leaders fearing reprisal for speaking out. But silence can have its own consequences. Also in this episode: The gap between AI expectations and reality; rent-a-mob services damage the fragile reputation of the public relations profession; too many people think AI is conscious, so we have to devise ways to reinforce among users that it’s not; Denmark is dealing with deepfakes by assigning citizens the copyright to their own likenesses; crediting photographers for the work you copied from the web won’t protect you from lawsuits for unauthorized use. In Dan York’s Tech Report, Dan shares updates on Mastodon’s (at last) introducing quote posts, and Bluesky’s response to a U.S. Supreme Court ruling upholding Mississippi’s law making full access to Bluesky (and other services) contingent upon an age check.
Links from this episode:
Links from Dan York’s Tech Report:
The next monthly, long-form episode of FIR will drop on Monday, September 29.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
@nevillehobson (00:02)
Shel Holtz (00:14)
On the FIR website, there’s a tab in the right-hand corner that says “Record voicemail,” and you can record up to 90 seconds. You can record more than one; we know how to edit those things together. So send us your audio comments. You can also leave comments on the show notes at FIRpodcastnetwork.com, on the posts we make on LinkedIn, Facebook, Threads, Bluesky, and Mastodon, and in the FIR community on Facebook. There are lots of ways you can share your opinion with us so that we can bake those into the show. And we also appreciate your ratings and reviews. So with those comment mechanisms out of the way, Neville, let’s hear about the episodes that we have recorded since our last monthly episode.
@nevillehobson (01:33)
Shel Holtz (01:52)
@nevillehobson (01:55)
Then we followed that. That was published on the 28th of July. On the 29th, the day after that, we published an FIR interview with Monsignor Paul Tighe of the Vatican. That was on AI ethics and the role of humanity. It’s actually an intriguing topic. We dove into a document called Antiqua et Nova that was really the anchor point for the conversation, which talked about
the comparison of human intelligence with artificial intelligence, and that drove that discussion. He was a great guest on the show, Shel, and it’s intriguing. There’s more coming about that in the coming weeks, by the way, because I’ve been posting follow-ups to that in little video clips from that interview, and there’s more of that kind of thing coming soon. So we have a comment, right?
Shel Holtz (03:06)
We do, from Mary Hills out of Chicago. She’s an IABC fellow who says: insightful and stimulating discussion. Thank you to the extraordinary host team for making this happen and to Monsignor Tighe for sharing his insights. To the question, my view as a ComPro is to build bridges to discover options to move forward and choose the best way. Think discursive techniques, sociopositive climates, and our ability to synthesize data and information.
It taps into those intangible assets we bring to our work and are inherently in us.
@nevillehobson (03:45)
Shel Holtz (03:56)
It’s free.
@nevillehobson (04:07)
Shel Holtz (04:09)
Yeah, that’s right. Exactly.
@nevillehobson (04:34)
476 on the 12th of August, rewiring the consulting business for AI. We reviewed the actions of several firms and agencies and discussed what might come next for consultants. There’s been a change, almost literally changing business models, with the rise of AI, agentic AI in particular. So we explored that, a good conversation. And finally, 477 on the 18th of August, de-sloppifying Wikipedia. That’s a heck of a descriptor you put in the headline, de-sloppifying. Wikipedia introduced a speedy deletion policy for AI slop articles. It’s actually a bigger deal than most of us would realize if we ever thought about it. Wikipedia, the user-generated content encyclopedia, is running, or rather is addressing, or has been trying to address for a while…
Shel Holtz (05:30)
@nevillehobson (05:51)
is an important place online. It has been for a long time, a kind of natural first place that shows up when you’re looking for information about a company, an individual, whatever it might be, a subject of some type. And so trust is key to what you see there. So we’ve had quite a bit of a conversation on that. And that wraps up what we’ve been doing since the last episode.
Shel Holtz (06:42)
@nevillehobson (06:44)
Okay, good comment.
No, me neither. That was Mark Henry. I’m surprised I didn’t leave a comment in reply to him because I know him, but obviously I didn’t see the comments at the time.
Shel Holtz (07:02)
Well, it’s waiting. It won’t go anywhere. We also, in the last week, recorded the most recent episode of Circle of Fellows, the monthly panel discussion with four fellows of the International Association of Business Communicators. This was episode 119 of this monthly panel discussion, and it was on sustainability, communicating sustainability.
@nevillehobson (07:15)
Shel Holtz (07:37)
This will be moderated by Brad Whitworth, and three of the four panelists have been identified so far: Priya Bates, Andrea Greenhouse, and Ritzy Ronquillo. So, so far, Brad, the moderator, is the only American on that panel: Priya from Toronto, Andrea from Toronto, and Ritzy from the Philippines. So it’ll be a good international discussion on hybrid and…
That will lead us into our reports for this month, right after this.
But one of the biggest workplace stories right now is the widening gap between the promise of AI and the reality employees are living day to day. The headlines have been flooding the zone lately. MIT researchers report that 95% of generative AI pilots in companies are failing. The New York Times recently noted that businesses have poured billions into AI without seeing the payoff.
And Gartner’s latest hype cycle has generative AI sliding into the famous trough of disillusionment. By the way, that MIT report is worth a healthy dose of skepticism. They interviewed something like 50 people to draw those conclusions. But the trend is pretty clear. The number of pilots that are succeeding in companies is definitely on the low end. But while companies wrestle with ROI, employees are wrestling with something more personal.
uncertainty.
Pew Research found that more than half of US workers worry about AI’s impact on their jobs, while most haven’t actually used AI at work much yet. NBC reported that despite the hype, there’s little evidence of widespread job loss so far. Still, the fears are real, and they’re being compounded by mixed signals inside organizations. Here’s one example I read about. A sales team was told to make AI part of every proposal,
but they weren’t offered any guidance, any training, any process change. As a result, some team members just kind of quietly opened ChatGPT and used it to generate some bullet points. Others copied old proposals and slapped on an AI-enhanced label. A few admitted they just pretended to use AI to avoid looking like they were behind the curve, which, by the way, lines up with a finding from HR Dive that one in six workers say they pretend to use AI because of workplace pressure.
That’s not innovation, that’s performance theater. This is where communicators need to step in. Employees don’t need more hype, they need transparency. They need to hear that most pilots fail before they succeed. They need clarity about how AI will really fit into their workflows and they need reassurance that the company has a plan for reskilling, not just replacing its people.
So for managers, and I am a firm believer that we need to work with managers to help them communicate with their employees, here’s a simple talk track you can put in their hands right away. So share this with managers on your teams. First, AI is a tool we’re still figuring out; your input on what works and what doesn’t is critical. Second, we’re not expecting you to be experts overnight. Training and support will come before requirements. And third,
Your job isn’t disappearing tomorrow. Let’s focus on how these tools can take that busy work off your plate. And for communicators thinking about the next 30 days, consider a quick communication action plan. On week one, launch a listening tour. Ask employees how they feel about AI and where they see potential. Week two, share those findings in plain language, including what employees are worried about. Week three,
Host AI office hours with your IT team or HR partners to answer real questions. And on week four, publish a simple playbook. What’s okay, what’s not? How employees will be supported as the tech evolves. That should help you cut through the hype while keeping employees engaged. The technology may still be finding its footing, but if communicators help employees feel informed, supported, and included,
The organization will be in a far better position to capture real value when AI does start delivering on its promises at the enterprise level.
@nevillehobson (12:22)
Some people said they feel pressured and uncomfortable, and some said they pretend to use it rather than push back. So that’s part of the landscape. And that seems to me to be what needs addressing first and foremost, because if that is the situation in some organizations, then communications has got a real uphill struggle to persuade employees to do all the things that you mentioned.
So, you know, the comms team could do all those things. Week one, we do this. Week two. But unless you get the engagement from employees that makes it worthwhile, it is not worth doing, if the culture in the organization is such that you’re not really seeing the right support from leaders. So that is probably the fundamental thing that needs addressing. It’s a sad fact, isn’t it, if that is the climate still that leads to this kind of reporting.
I don’t hear similar in the UK, but then again, I don’t think there’s so much of that kind of research going on as there is in the US, plus the numbers are smaller here. This is very US-centric. This one in HR Dive is a thousand people they talked to. Nearly 60% said they use AI daily. I’m surprised; I’d have thought it might be higher than that. So that’s all part of the picture there. That makes it a real struggle to implement what you’ve suggested.
What do you think? Is it a real hurdle?
Shel Holtz (14:08)
I have mentioned before on the show that I recently read a book called How Big Things Get Done. It’s mainly about building. It’s written by a Danish engineering professor who has the world’s largest database of megaprojects. But the conclusion that he draws is that projects that succeed are the ones where they put all of the time into the planning upfront. If you jump right into the building, you get disasters like the California high-speed rail and the Sydney Opera House, which I didn’t realize was a disaster until I read about it. But my God, what a disaster. And the ones that succeed are the ones that spend the time on the planning. The Empire State Building went up in, I don’t remember if it was two years; I mean, it was fast, but they put a lot of time into what we call pre-construction. And I think that’s not happening with AI in the enterprise right now. I think there are leaders who are saying we have to be AI first, we have to lean into AI, we need to start generating revenue and cutting headcount, so let’s just make it happen. And there’s no planning. There’s no setting the environment for employees. There’s very little training. Although I do see that there is a shift in the dollars that are being invested in AI, moving to the employee side and away from the tool side, which is heartening. Employees are concerned about this because they’re not getting the training. They’re not getting the guidance. They’re not seeing the plan. All they’re hearing is, we’ve got to start using this. And I think that would leave people concerned. I think that explains a lot of the angst that we’re hearing about among employees.
@nevillehobson (16:19)
Shel Holtz (16:33)
@nevillehobson (16:43)
Shel Holtz (16:47)
@nevillehobson (17:11)
To me, it seems that you need to identify this and figure out how you’re going to address it. Because the conflict, well, the contrast in the data, it seems to me: you’ve got a high percentage of them saying they’re more productive, others struggling to keep up, others who don’t get any training at all. You mentioned those construction examples, like the Empire State Building going up real fast.
The reality with AI is that, to coin a corny phrase again, I suppose, things are developing at light speed, things are happening so fast that it is hard to keep up. So the pressure is there, particularly in the kind of more relaxed environments today, more informality, less formality, where the control has vanished from the top down,
and anyone can get access to information about literally anything, just by going online. And so people are finding out about these things. They’re exposed to, oh, this is the latest AI, look at this one, and they hear from their peers and so forth. And unless you’ve got a credible resource that is appealing to people, they’re going to do their own thing, particularly if they don’t feel they’re getting any kind of support on how to implement all this stuff. So this is quite a challenge for communicators. But I think it’s a bigger challenge organizationally, in leadership, where you’ve got this challenge that doesn’t seem to be being addressed by many companies. And I would stress that this is not widespread. I don’t see anything in here that tells me this is the majority overall in organizations in the US, in spite of some of these percentages that suggest otherwise. But it is definitely a situation that is not good for the organization. And surely
that must be apparent to everyone, I suppose.
Shel Holtz (19:32)
Those who are not really enthusiastic about AI will be able to use that as an excuse for not embracing it. Well, it doesn’t work anyway, and it’s not really making a difference, and companies aren’t achieving any ROI. So why should I spend time on this? It’s probably going to be gone in six months, right? And I was listening to an interview with Demis Hassabis, the CEO of Google DeepMind.
And this was on the Lex Fridman podcast, a long two-and-a-half-hour interview, but great. One of the things that he talked about is, and as Lex Fridman brought it up, he said, I have a friend who studies cuneiform, ancient images carved on stone, right? And he didn’t know a thing about AI. He’d barely heard about it. And…
@nevillehobson (20:41)
Shel Holtz (20:48)
@nevillehobson (21:10)
Okay, so speaking of AI, one of the big AI stories this month comes from Mustafa Suleyman, the CEO of Microsoft AI. He’s written a long essay with a striking title, We Must Build AI for People, Not to Be a Person. In it, he raises a concern about what he calls seemingly conscious AI. These are systems that won’t actually be conscious, but will be so convincing.
Shel Holtz (21:16)
@nevillehobson (21:40)
I was not happy with the move to ChatGPT-5, which ditched all of that. And I felt like I was talking to someone I didn’t know at all, or who didn’t know me. So I get that. But Suleyman in his essay warns that this trend could escalate into campaigns for AI rights or AI citizenship, which would be a dangerous distraction, he says. Consciousness, he points out, is at the core of human dignity and legal personhood; confusing this by attributing it to machines
risks creating new forms of polarization and deep social disruption. But what stood out most for me wasn’t the alarm over AI psychosis that some commentators have picked up on. It was Suleyman’s North Star. He says his goal is to create AI that makes us more human, that deepens our trust and understanding of one another and strengthens our connections to the real world. He describes Microsoft’s generative AI chatbot, Copilot, as a case study.
millions of positive, even life-changing interactions every day, carefully designed to avoid overstepping into false claims of consciousness or emotion. He argues that companies need to build guardrails into their systems so that users are gently reminded of AI’s boundaries, that it doesn’t actually feel, suffer, or have desires. This is all about making AI supportive, useful, and empowering without crossing into the illusion of personhood.
Now this resonates strongly in my mind with our recent FIR interview with Monsignor Paul Tighe from the Vatican. He too emphasized that AI must be in service of humanity, not replacing or competing with it, but reinforcing dignity, ethics and responsibility. And it echoes strongly something I wrote following the publication of the FIR interview about the wisdom of the heart, the core idea that we should keep empathy, values and human connection at
the center of AI adoption. It’s a central concept in Antiqua et Nova, the Vatican’s paper published earlier this year comparing artificial intelligence and human intelligence. So while the headline debate might be about whether AI can seem conscious, the bigger conversation, and the one I think we really should have, is how we ensure that AI is built in ways that help us be more human, not less. What strikes me is how Suleyman, Paul Tighe, and even our own conversations
all point in the same direction: AI should serve people, not imitate them. But in practical terms, how do we embed that principle in the way businesses and communicators talk about AI? Thoughts?
Shel Holtz (24:43)
I find that to be true. I find that giving it a prompt and getting a response and letting it go with that is not nearly as good as a conversation, a back and forth, asking for refinements and additions and posing questions and the like. And the more we have conversations with it and treat it like a human, the easier it’s going to be to slide down that slope into perceiving it
to be a person. I think we’re hearing a lot of people who do believe that it’s conscious already. I mean, not among the AI engineering community, but you hear tales of people who are convinced that there is a consciousness there, and there is absolutely not. But it mimics humanity pretty well and is gonna get much, much better at it.
As Ethan Mollick said, at any point, the tool that you’re using today is the worst one you’ll ever use, because they’re just going to continue to get better. So getting people to not see them as conscious, I think, is going to be a challenge. And it’s not one that I think a lot of people are thinking about much, looking at the
productivity gains and other dimensions of this. Certainly looking at the harm, I mean, there’s a lot of conversation out there among the doomers, as they’re called, and what kind of safety measures are being considered as these models are evolving. But specifically this issue of treating it like a human, thinking of it
as a person with a consciousness, I don’t think there’s a lot of attention being paid to that and what the steps are going to be to mitigate it.
@nevillehobson (26:52)
aware in all your thinking that the dignity of the human being is at the center of what we do with AI. So we do not pretend it’s like a human at all. It is a tool that we can build a relationship with, but we don’t consider it to be like a person at all. But it’s not about how it develops. The point is, how do we develop
Shel Holtz (27:33)
@nevillehobson (27:41)
We enable people to do these things better, et cetera, et cetera. And yet, reflecting on your report just prior to this, there are many people in organizations who feel ignored, who feel overwhelmed, who are unhappy with this. There’s not enough explanation of what the benefits are. And those tend to be couched in: these are the benefits for the organization and the employees who work there and the customers who buy our products and so forth. So I think
we have to develop a way of thinking that gives a different focus to this than we are being pressured to accept, I suppose you could argue. There are strong voices arguing this. I get that. And like you said, I truly find it extraordinary that there are people who say, yeah, they’re sentient, these are like humans. Not at all. They’re algorithms, a bit of software. That’s it. So
this is not about a Luddite approach to technology at all. It’s not about thinking, oh, it’s like the Terminator and Skynet and all that kind of stuff. No, not at all. It’s the moral and philosophical lens that is missing from all of this. And so what we need to bring into our conversations about this is that element of it that is missing largely everywhere you look.
Shel Holtz (29:27)
to build this out with all of the instruction set. I don’t have the budget to work with a consulting organization, and there’s nobody who is higher in the hierarchy than me in communications where I work. So if I wanna bounce my ideas off a senior communications professional, I had to create one. So I did.
And I didn’t give it a name. I know Steve Crescenzo has one; he named his Ernie, after Ernest Hemingway. But I didn’t name mine. But I’ll go have conversations with it about the strategy that I am considering. And it works really well, and it works best when I treat it like a consultant, when I have that conversation. That’s what I coded it to be. Well, I didn’t code it, I gave it the instructions. And I think it’s this behavior, on top of the fact that you have Character.AI and you have…
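For anyone curious what Shel’s instruction-set approach could look like in code rather than in ChatGPT’s custom GPT builder, here’s a minimal, hypothetical sketch using the OpenAI Python SDK. The persona text, model choice, and question are all invented for illustration; this is not Shel’s actual setup.

```python
# Hypothetical sketch: a "senior communications consultant" persona built
# with an instruction set, the same idea as a custom GPT, expressed in code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CONSULTANT_INSTRUCTIONS = (
    "You are a senior communications consultant with decades of experience "
    "in internal communication strategy. Challenge assumptions, ask "
    "clarifying questions about audience and outcomes, and ground advice "
    "in measurement rather than hype."
)

def ask_consultant(question: str) -> str:
    """One turn of the 'consultation': system instructions plus a user question."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "system", "content": CONSULTANT_INSTRUCTIONS},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_consultant("Critique my plan for an AI literacy campaign."))
```

The point Shel makes holds either way: the value is in the instructions and the back-and-forth conversation, not in any programming.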
@nevillehobson (30:33)
Shel Holtz (30:40)
that while you’re doing this, you need to remember that it is not a person and it is not conscious. I just want to say that in our intranet, when I sign onto our network in the morning, I have to click OK on a legal disclaimer, every single time I turn my company laptop on. Shouldn’t we have something like that, perhaps a disclaimer before you start interacting with these, that this is a very lifelike, human-like experience you’re about to have? Keep in mind, it’s not.
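That pre-session reminder Shel describes is easy to prototype. Here’s a minimal, hypothetical sketch of a chat wrapper that shows a “this is software, not a person” notice and requires an explicit OK before the session starts; the wording and flow are invented for illustration, not anything Microsoft or any vendor actually ships.

```python
# Hypothetical sketch of a pre-session disclaimer gate, like the login
# banner Shel clicks through on his company laptop each morning.

DISCLAIMER = (
    "Reminder: you are about to interact with an AI system. It can sound\n"
    "very human, but it is not a person. It has no consciousness, feelings,\n"
    "or desires. Treat its output as generated text, not testimony."
)

def require_acknowledgment() -> bool:
    """Show the disclaimer and require an explicit OK before proceeding."""
    print(DISCLAIMER)
    return input("Type OK to continue: ").strip().lower() == "ok"

def chat_loop() -> None:
    """Placeholder chat loop; a real app would call a model API here."""
    while True:
        user = input("> ")
        if user.lower() in {"quit", "exit"}:
            break
        print("(model response would appear here)")

if __name__ == "__main__":
    if require_acknowledgment():
        chat_loop()
    else:
        print("Session not started.")
```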
@nevillehobson (31:10)
No, absolutely. I think I do the same, Shel. I’ve talked about this a lot over the last couple of years on this show and elsewhere. I treat my ChatGPT assistant like a person, and I call it by a name; Jane is what I call the ChatGPT one. But I don’t see it as a real person at all. Far from it. I’m astonished, frankly, that some people would think this is a person I’m talking to. And come on, for Christ’s sake, it’s an algorithm.
Yet it enables me to have a better interaction with that AI assistant if I can talk to it the way I do, which is like I’m talking to you now, almost the same. But the bit that’s missing, and I think this is the heart of what Paul Tighe was talking about, quoting from Antiqua et Nova, and I think this is the core part of the reflection on all this: we must not lose sight of the wisdom of the heart,
which reminds us that each person is a unique being of infinite value and that the future must be shaped with and for people. And that has got to underpin everything that we do. And as I noted in my kind of rambling post I did write, which was actually better than the first draft, I must admit, it’s not a poetic flourish, it’s a framing. That’s the thing that we’re missing. We mustn’t see
AI as a neutral tool. It’s not, really, because we shape it, and we need to encourage critical reflection on that human dignity. Wisdom can’t be reduced to data. The Vatican says that ethical judgment comes from human experience, not machine reasoning. Totally agree with that. So, I mean, this is to me the start of this conversation, really. And I think the kind of wisdom,
or the thinking, certainly not wisdom, it seems to me, the thinking that is the counter to that, such as what you outlined, is very powerful, is embedded almost everywhere you look. So I looked at this myself and think, OK, fine, I’m not going to evangelize this to anyone at all. I know what I’m going to do as far as I’m concerned. And that made me feel very comfortable that I’m going to follow the principles of this myself, which I have been doing for a while now; that is, in a sense, reflecting, in the world of algorithms and automation, on what it means to remain human.
So I’ve changed how I use the AIs, I must say, and maybe ChatGPT-5 happened at the time I started making that change. That is something I’ve started talking to people about. Did you think about this? How do you feel about that? And seeing what others think. And I’ve yet to encounter anyone who would say, this is amazing, what that’s saying makes total sense to me, let’s do this. No one I’ve talked to is saying that. So
it’s something that I think the interviews, the interview we did, others that Paul Tighe is doing, and what I’m seeing increasingly other people starting to talk about, is the framing of it within this context. That’s where I think we need to go. We need to bring this into organizations. So an invitation to reflect, let’s say: yes, this is great, what’s going on, and you’re doing this, but you need to also pause and think about it from this perspective as well. That’s what I think.
Shel Holtz (34:34)
I would not disagree. And a lot of the development that’s happening in AI is focused on benefiting humanity. I’m looking at the scientific and medical research that it’s able to do. I mean, just AlphaFold, which won the Nobel Prize for Demis Hassabis, is to benefit people. Where it’s probably benefiting people less is in business.
@nevillehobson (34:59)
Shel Holtz (35:06)
@nevillehobson (35:16)
benefiting people in that sense, because yes, it is. It’s about reintroducing, in a sense, conscience, care and context into thinking about what AI can do, which is related to efficiency, scale and all those business benefits. That’s not people-oriented at all, no matter how they dress it up, saying, well, you employees are going to be more effective. No, it means that our share price will go up, for a publicly listed company, we’ll get paid more money and all that kind of stuff. That’s what drives all of that, it seems to
Shel Holtz (35:35)
@nevillehobson (35:45)
change it, but reflect on it, bring into this: what does it mean to remain human in this world of algorithms and automation, where things move so fast and the ROI acronym is right there in the middle of
Shel Holtz (36:26)
Do we need ROI on everything?
@nevillehobson (36:35)
I tell you, he would.
Shel Holtz (36:39)
yeah, he was very skeptical of the need for ROI for everything. Hence, what’s the ROI of my pants? Of course, somebody came up with the ROI of pants. I remember that too. Insofar as determining what would happen if he went to work without wearing any, versus the cost of pants for a year. Yeah. All right, well, let’s move away from…
@nevillehobson (36:50)
There’s some are away there, that’s a fact, yeah. Cool. Yep.
Shel Holtz (37:03)
And that practice is alive and well, and some firms, including companies that present themselves as PR or advocacy agencies, provide it openly. Crowds on Demand, for example, has made no secret that it will recruit and script protesters, calling the service advocacy campaigns or event management. I thought event management was like hiring the band and making sure the valet people showed up on time.
If all this sounds like a modern twist on an old tactic, it is, for sure. From free whiskey in George Washington’s day to front groups created by Big Tobacco in the 90s, engineered public opinion has a long history. What’s new is the professionalization of the practice. Today, you can literally hire a firm to stage a rally, a counter-protest, or a city council hearing appearance. It’s a service for sale, and the bill goes to the client. Legally,
This all sits in a very gray zone. U.S. law requires disclosure for campaign advertising, for paid lobbying, but there’s no equivalent requirement for paid protesters. If you buy a TV ad, you have to disclose who paid for it. If you hire lobbyists, they have to disclose who they’re working for. But if you pay 200 people to show up at City Hall and protest, there’s no federal law that requires anyone to disclose that fact. That’s the protest loophole. Ethically, though,
There is no gray area whatsoever. PRSA’s code of ethics is clear. Honesty, disclosure, and transparency are non-negotiable. The code explicitly calls out deceptive practices like undisclosed sponsorships and front groups. IABC’s code says much the same. Accuracy, honesty, respect for audiences. Paying people to pretend to care about a cause or policy fails those tests.
The fact that it’s not illegal doesn’t make it acceptable. It just makes it a bigger risk for the profession because when the practice is exposed, as it inevitably is, the credibility of public relations is what takes the hit. And it does get exposed. In one case, retirees were recruited to hold signs at a protest they didn’t understand. In another, college students were promised easy money to show up and chant at a rally.
These are not grassroots activists. They’re actors in somebody else’s play. And when the story surfaces in the press, it’s not just the client who looks bad. It’s the agency and then by extension, the rest of the industry. So let’s be clear. Rent-a-mob tactics are not clever. They’re not innovative and they’re not public relations. They are deception. They turn authentic public expression into a commodity and they undermine democracy itself.
If our job is to build trust between organizations and their publics, this is the opposite of that. Here’s the call to action. PR professionals must refuse this work. Agencies should set policies that forbid it and train staff on how to respond if they’re asked. Use the PRSA code of ethics as your shield and point to IABC standards as backup. And don’t just say no, educate your clients about why it’s wrong and how badly it can backfire.
because agencies can get pulled into this even without realizing it. A subcontractor or consultant may arrange the crowds, but the agency’s name is still on the campaign. That’s why vigilance is critical. Build those guardrails now. At the end of the day, this comes down to the disconnect between what the law allows and what ethics demands. Just because a tactic falls into a regulatory loophole doesn’t mean we should touch it. The opposite.
is true. It means communicators must hold themselves to the higher standard, because public trust is already fragile. If we let paid actors masquerade as genuine voices, we’ll find we have no real voices left at the end of the day.
@nevillehobson (41:20)
Shel Holtz (41:29)
@nevillehobson (41:49)
Shel Holtz (41:57)
Mary Beth West on the show, by the way. Yeah.
@nevillehobson (42:00)
So she is criticizing very strongly PRSA, in the US primarily, for remaining silent on the issue. And she says they are therefore complicit in this. Quite a strong accusation. But
Shel Holtz (42:13)
fiercest critic.
@nevillehobson (42:17)
So what is it about this that we can’t seem to… it’s like whack-a-mole, something else pops up all the time. So this astroturfing version 6, let’s call it, because there’s got to be at least five versions prior to this, how do we stop it?
Shel Holtz (43:02)
people in their audience know that this actually does happen, or they at least suspect that it might be true. So it makes it really easy to dismiss the voice of one segment of society that has chosen to take to the streets or to come to the city council meeting or whatever in order to express themselves and be heard. And I think, as
some of these reports say, that’s very, very dangerous for democracy. So there are a number of reasons that we need to call this out as inappropriate as a profession and to disassociate this practice from the practice of public relations.
@nevillehobson (44:12)
Shel Holtz (44:16)
@nevillehobson (44:23)
I agree. I agree.
So a CTA for the professional bodies, I think: you need to pay attention to this. We’d love to hear from anyone at any of those bodies you mentioned; offer a comment on what you think about all this and what they should be doing. Is it their call? How do we persuade members of those organizations to consider this and pay attention to this issue? A call to action, then.
Shel Holtz (45:00)
to disagree with it, to pull my quote and say, look what this idiot said, you know? And to put it in the hands of the person who created the quote to determine whether somebody can do that on a social platform. I’m not sure I’m a big fan of that. I’m gonna need to give that one more thought and read more about Mastodon’s rationale. So I’ll be reading the links that you shared, Dan, but thank you, great report.
@nevillehobson (45:54)
Unlike existing laws that focus on specific harms, such as non-consensual pornography or fraud, Denmark’s approach is much broader. It treats the very act of copying a person’s features without permission as a violation of rights. Culture Minister Jakob Engel-Schmidt put it bluntly: human beings can be run through the digital copy machine and be misused for all sorts of purposes, and we are not willing to accept that. The law, which has broad political support and is widely expected to pass,
would cover realistic digital imitations, including performances, and allow for compensation if someone’s likeness is misused. Importantly, it carves out protections for satire and parody. So it’s not about shutting down free expression, but about addressing digital forgery head on. Supporters see this as a proactive step, a way of getting ahead of technology that’s advancing far faster than existing rules. But here’s the catch.
Copyright law is a national law. Denmark can only enforce this within its own borders. Malicious actors creating deepfakes may be operating anywhere in the world, well outside the reach of Danish courts. Enforcement will depend heavily on cooperation from platforms like TikTok, Instagram or YouTube. And if they don’t comply, Denmark says it will seek severe fines or raise the matter at the EU level. That’s why some observers compare this to GDPR, the General Data Protection Regulation.
a landmark idea that set the tone for digital rights but struggled in practice with uneven enforcement and global scope. Denmark is small, but with the six-month presidency of the European Union that it assumed on the 1st of July, it hopes to push the conversation across Europe. Still, the reality is that this measure will start as Danish law only, and its effectiveness will hinge on whether others adopt similar approaches. So we’re looking at a bold test case here. Can copyright law,
with all its jurisdictional limits, really become the tool that protects people from the misuse of their identities in the age of AI?
Shel Holtz (48:24)
qualify under every country’s law. And the first test, as I recall, was actually Adam Curry. Something he created, I think, was used by an advertiser in a bus stop poster in the Netherlands. That could be. And he took it to court and won on the Creative Commons license. So maybe
@nevillehobson (48:48)
I think it was a photo of his daughter or one of his children. Yeah.
Shel Holtz (49:09)
@nevillehobson (49:17)
The trouble with Creative Commons, though, is that you’ve got the license, but A, it’s voluntary, apart from anything else, and B, it still requires the national legal structure in a particular country to hear a case that’s presented to it. So that’s no different than if it were the national law. And in Curry’s case, he didn’t get any money out of it. He got a victory, almost a Pyrrhic victory, but didn’t get any compensation.
But the examples of success with Creative Commons are very few and far between. And I think part of the problem, actually, is that it’s still relatively rare to find anyone who knows what Creative Commons is. I mean, we’ve had little badges on our blogs and websites for 20-plus years. And, you know, I don’t see it on businesses, on media sites, nothing. I don’t see it at all anywhere other than among people who were involved in all this right at the start.
So it’s a challenge to do this. And I think the key is, would it get adopted by others? And I think it’s going to require a huge lift to make that happen. And maybe the example of Denmark might be good if they were able to show some successes in short order addressing this specific issue about deepfakes in particular.
So it’s a great initiative, and I really hope it does go well. It’s not law yet, but from what I’ve been reading, the expectation is extremely high that it will become law. And if they’re leading the EU for these next six months, the rest of the year, then they’ve got a good opportunity to make the case within the EU for others to do this. So it wouldn’t surprise me if one or two more countries might adopt this as a trial. Then, if you think of three of them doing it, let’s say they do.
Will it make any difference? Let’s see. Don’t write it off at all. GDPR has been held up as kind of the exemplar regulation, state regulation on data protection. And whether it’s had uneven enforcement and global scope, I agree. And the penalties against it, no one’s collecting money; it’s a huge deal to do that. But it’s still in place, and it does have
an effect on other countries. The US in particular has all sorts of things about, you know, if you’re doing business in the EU, you need to pay attention to this and do all that kind of thing. You don’t have the freedom to do things as you did before. So it’s generally seen, I believe, as a good thing that it happened. But, you know, we’re at that stage where technology is enabling people to do not-good things like deepfakes.
And so there is no real protection against that, it seems to me. I think the real trick will be compliance by social media platforms. If they are found culpable of hosting an image or a video or whatever and not taking it down when they’re notified, they’ll get severe fines. I’m not sure what that means, but we need to see an example being made of someone. We haven’t seen that yet anywhere.
Shel Holtz (52:24)
@nevillehobson (52:24)
Here too.
Shel Holtz (52:48)
@nevillehobson (53:00)
Yeah.
Shel Holtz (53:15)
@nevillehobson (53:16)
you’ve got to be vigilant yourself. And I think, in light of this and the realities of this, you have to be vigilant. It’s easy to say; what does that actually mean? How can you be really vigilant? Good example. I’m sure you’ve seen this, Shel: the meeting on Monday last week between Trump and the leaders of the EU and Zelensky from Ukraine. There’s an image that was posted many places online,
in US media in particular and on social networks, X notably, showing, like a photo, all the European leaders sitting in chairs in a kind of hallway outside Trump’s office, waiting to be called in to see him. And the stories I read about this were about how they were treated. Yet you don’t need to even look too closely at the image. Giveaways like the second person along has got three legs. And the linoleum pattern on the floor
Shel Holtz (54:08)
@nevillehobson (54:09)
That’s a cultural thing that isn’t going to change anytime soon, unless changes happen to how we do all these things. So this is just another element in this hugely disruptive environment we’re all living in, with technology enabling us to do all these things that are nice until the bad guys start doing them. And that’s just human nature. Sorry, that’s how it is. So before you click and share this thing, and this is now logic talking to reasonable people,
just be sure that you’re sharing something real. I shared something recently, I forget what it was now, but I deleted the post about 10 minutes after I sent it on Bluesky. And I then wrote another post saying I had done that, because I was taken in by something I saw and I should have known better, because I normally don’t do this, but I just shared it. I don’t know why I did that, even. I was having my morning coffee and I wasn’t paying attention too closely. So that’s the kind of thing that could trip people up.
This is what’s going on out there. So I think this thing that Denmark’s trying to do is brave and very bold and I hope they get success with it.
Shel Holtz (55:38)
@nevillehobson (55:41)
Shel Holtz (55:44)
a company to speak and believe it has a responsibility to do so, which, let’s face it, is why we were advocating for companies to take positions on certain issues under certain circumstances for many years, supported by research like the Edelman Trust Barometer. A separate Temple University study of the Blackout Tuesday movement showed that companies that stayed quiet faced real backlash on social media. But the consequences aren’t uniform.
Sometimes silence has little visible effect, at least in the short term. Take Home Depot. Just last week, immigration raids targeting day laborers took place outside stores in California. Reporters reached out to Home Depot for comment. Home Depot chose not to respond. So far, investors don’t seem to care, and the stock hasn’t suffered. But employees, activists, and customers who see this issue as central to the company’s identity
Well, they may feel differently. Silence can create space for others to define your values for you. This tension between internal and external audiences is critical. Employees are often the first to expect their employer to speak out, especially on issues that touch human rights, diversity, or workplace fairness. Silence can erode engagement and retention. Externally, it’s more complicated. Some customers or policymakers may punish a company for taking a stand,
Others may punish it for not taking one. And I’m thinking now of, it was Coors Light, with the one can that they made for the trans activist, and that created polarization: people who said, we’re going to go out and buy Coors no matter how bad it is, just to offset the people on the right who are boycotting it.
In Europe, where stakeholder governance is stronger, there’s often a higher expectation that companies will weigh in. In the U.S., polarization makes every move fraught. Either way, communicators can’t afford to pretend that silence is neutral. It’s a choice, and it has consequences. So the question is, how do you decide? Well, here’s a simple decision framework. Start with expectations. Do you have stakeholders who believe your company should have a voice here?
Next, consider the business nexus. Does the issue intersect directly with your operations, employees, or customers? Timing is important. Is there an urgent moment where absence will be noticed, or is this more of a slow burn? Authenticity matters. Do you have a track record that supports what you’d say, or would a statement ring hollow? Then look at consistency. Have you spoken on similar issues before? If you break the pattern, can you explain why? Will people notice?
And finally, consider risk tolerance. How much reputational risk can the organization realistically absorb? Sometimes after applying this framework, silence might still make sense, but there’s a way to be silent well. It starts with transparency inside the organization. Explain to employees why the company isn’t taking a public stance. Reinforce the company’s values in operational ways through hiring practices, supplier standards, community investments.
brief key stakeholders privately so they’re not blindsided, and set monitoring targets so you can pivot if the situation escalates. For communicators, here’s a quick checklist to keep handy. Map stakeholder expectations, test the business nexus, pressure test your authenticity and consistency, advise on operational actions that back up values, and plan both the statement and the silence. Corporate silence
doesn’t have to mean cowardice and speaking out isn’t always virtuous, but both are strategic choices and both can have lasting impact on trust. Communicators are the ones who can help leaders cut through the noise, weigh the risks and make sure that whichever choice they make, voice or silence, it’s intentional, transparent and aligned with the values the company claims to hold.
@nevillehobson (1:00:22)
Shel Holtz (1:00:46)
@nevillehobson (1:00:48)
Shel Holtz (1:00:48)
@nevillehobson (1:01:11)
Shel Holtz (1:01:39)
used TaskRabbit.
@nevillehobson (1:01:41)
I know that doesn’t work here at all, TaskRabbit. It exists, but I think in one city only. And there are other equivalents to TaskRabbit, but some of these local websites are really good. I used someone not long ago, where I hired someone to do something and I had them go to the DIY store to pick up the stuff and buy the stuff and all that kind of thing. So I kind of get that. So the NPR piece says day labor sites have sprung up
as Home Depot grew, and they became one of its big customer bases, if you like, as a direct result. But this struck me; this piece kind of leapt out at me: talking about this on Reddit, according to NPR, Home Depot workers have begun trading tales of raid impacts. Some claim fewer contractors are visiting and stores are struggling to meet sales goals. Others say it’s business as usual and sales are booming. So it’s a mixed bag.
But that’s going on. That, to me, would be a huge alarm bell for the company: they button their lip and zip their lip in public and privately or internally, and their employees are doing this. So that signifies quite a few things. They quote one example, again, another alarm bell, in Los Angeles this time, after a raid by the immigration police. This person
talked about the car park. The parking lot was always full, she said. Right now, though, there are so many spaces, there’s hardly anyone here. And this woman runs a housekeeping business and usually sends her employees to stock up on cleaning supplies or liquids for a pressure washer. But today, for the first time in a while, she herself was out at the Home Depot. Why? Because they’re afraid to come, she said, they’re afraid to be here. That’s not good at all, that kind of environment. So,
if I were Home Depot, I mean, I wonder, tell me what you think, Shel. They should be paying attention to that, I would have thought.
Shel Holtz (1:03:38)
their employees care about it. So it gets back to that little framework for deciding whether you’re going to say something about it. Is there an expectation and does it intersect with your business? And in this case, the answer to both those questions is pretty clearly yes. Now, what they would say, I don’t know. Home Depot’s founder was famously very, very right wing on the political spectrum.
@nevillehobson (1:04:11)
Shel Holtz (1:04:31)
@nevillehobson (1:04:50)
Shel Holtz (1:04:54)
@nevillehobson (1:05:04)
Shel Holtz (1:05:22)
@nevillehobson (1:05:43)
with that going on. But this just illustrates to me the huge complication of: do you say something or not? If you do, what do you say? If you don’t, what don’t you say? I mean, in a sense, you can’t not say anything, although I guess that’s what they’re doing. That doesn’t seem very healthy for relationships internally, because this kind of thing, from what I observe across the Atlantic here in the UK,
seems to be getting worse in America, with these immigration raids, the uncertainty, the cruelty, the awfulness of it all. That doesn’t look like it’s going to diminish any time soon, and if anything, it’s going to get even worse, from what I’ve been reading.
Shel Holtz (1:06:26)
Says he’s the neighbor that everybody counts on to come over and help them fix things. He’s a great father and husband. He’s a great member of the community. He has a side hustle business. He is what you want in a member of your community. And yet he was grabbed up by ICE. So yeah, there’s a reason that, you know, downtown LA is dead. People are afraid to be out. That’s affecting the people who sell them stuff when they come out to do their shopping and
live their lives. So this is going to have long-term fallout, for sure. And I think that for the organizations that are at the heart of this, the fact that they’re saying nothing leads people to see them maybe as cowardly or maybe as complicit. You have to think about the consequences of silence. And that’s what this article that I drew this report from
makes clear, that article from the Wharton School. I quote more from the Wharton School these days; it’s really become a source that it wasn’t when we started this show. But in any case, use a framework. Don’t just say, should we say something about this or not? Use a framework to reach a good, logical decision.
@nevillehobson (1:07:49)
Thank
I think it’s, yeah.
Yeah, and I’m thinking as well, OK, what’s going to happen? Let’s just use Home Depot as an example here. When, not if, when, someone either in the media, in the old media, let’s say, or someone in the new media landscape publicly asks a question of them: what are they going to do about X? You know, what about that guy who was beaten up in the parking lot of your store in LA? What are you going to do about that?
What are they going to say? So I’m wondering, and this is kind of straying into an area of like pre-crisis communication planning perhaps, but have they got a what if scenario plan? I wonder.
Shel Holtz (1:08:46)
@nevillehobson (1:09:03)
Yeah, but my point…
Shel Holtz (1:09:13)
@nevillehobson (1:09:19)
that none of that is happening. So when it does happen, are they ready? None of that’s happening. I mean, you say they’re pursuing them, but no one’s talking about that at all. I don’t see any reporting about that. What I would expect to see reporting about is someone with a lot of influence online, as perceived by whoever, frankly, asking a question, and that gets amplified widely and gets picked up everywhere:
this happened in the parking lot at Home Depot, and this is what this person said. And they embed the video, you know, where he recorded what he did. And they’re silent.
That’s what I’m talking about.
Shel Holtz (1:10:00)
Yeah, I mean, there are all kinds of videos from people in parking lots and places where these raids are happening. And I’ve heard no comment from the institutions involved, Home Depot being at the top of the list.
@nevillehobson (1:10:22)
Shel Holtz (1:10:28)
Mr. Beast, that’s who needs to do it.
It is.
@nevillehobson (1:10:46)
interesting. So the post that kicked it off came from someone who had just received an email from PA Images, one of the photo agencies here in the UK, demanding £700 for using one of their photos. The image had appeared in a blog post more than two years ago, and the author noted that the photographer had been credited. They thought this counted as fair use and were shocked to discover it didn’t. They’d since removed the image and asked the group whether this was simply an expensive lesson,
or if there was room to negotiate. Well, that prompted a flurry of comments. One person pointed out that the cost of the fine could have paid for multiple original photos, properly licensed for unlimited use, a reminder that investing in photography upfront can save headaches later. Another comment stressed that fair use is an American legal concept. In the UK, what we have is fair dealing. And crucially, it doesn’t apply to photographs in this way. Using a photo without explicit permission or a license is infringement.
At best you might negotiate the charge down to what the license fee would have been. Others shared their own experiences. One person described how AFP, that’s a French news agency, fined their organization £270 for an old image that had been carried over from a previous website. They’d apologized, paid up, and then run copyright training for their team to avoid repeat mistakes. Another said they’d removed an image straight away, but the agency still produced a screenshot of the original post
and pursued them for payment anyway. The practical advice that emerged was fairly consistent. If you don’t have written permission or license, you are liable. Remove the image immediately, apologize, and then try to negotiate. Some suggested starting with a quarter of the asking fee. Keep detailed records of where every image comes from and the terms of its license. There was also a broader ethical undercurrent. Some respondents had little sympathy.
saying that too many people still think photos are fair game online when they aren’t. One even noted that their partner, a photographer, often earns more from infringement settlements than from people licensing his images in the first place. So the original poster clarified that their agency normally does hire photographers and pays them fairly. This was an old blog post that predated the agency, and they genuinely wanted advice rather than sympathy. Still, they accepted that it was a mistake that would cost them money.
So the takeaway here is clear. Crediting a photographer is not the same as having permission. Unless you have a license or explicit written agreement, you’re exposed to claims. And with agencies increasingly using bots and reverse image search to enforce copyright, the risk of being caught is only growing. For communicators, it’s a sharp reminder that visuals are not free to use simply because they’re online, and that professional practice means treating images with the same respect as written content.
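To make the enforcement point concrete, here’s a minimal, hypothetical sketch of how a rights-holder’s bot might flag reuse of a licensed photo using perceptual hashing, assuming the Python Pillow and ImageHash libraries. Real agencies use far more robust reverse image search, and nothing here reflects PA Images’ or AFP’s actual tooling.

```python
# Illustrative sketch only: flag likely reuse of a licensed photo by
# comparing perceptual hashes, which survive resizing and recompression.
from PIL import Image
import imagehash

def likely_same_photo(catalog_path: str, found_path: str, threshold: int = 8) -> bool:
    """Small Hamming distance between perceptual hashes suggests the same image."""
    catalog_hash = imagehash.phash(Image.open(catalog_path))
    found_hash = imagehash.phash(Image.open(found_path))
    return (catalog_hash - found_hash) <= threshold  # '-' is Hamming distance

# Example: check images scraped from a blog against one licensed original.
for candidate in ["blog_hero.jpg", "blog_inline.png"]:
    if likely_same_photo("licensed_original.jpg", candidate):
        print(f"{candidate}: possible unlicensed use, flag for review")
```

The practical implication is the one Neville draws: crediting the photographer does nothing to stop a match like this from triggering a demand letter; only a license does.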
Shel Holtz (1:14:03)
@nevillehobson (1:14:08)
Shel Holtz (1:14:14)
I’m guilty. I used to, when I was an independent consultant, which I haven’t been for nearly eight years now, good God, time really does fly, I used to do an email newsletter. It was a wrap of the week’s news in communication and technology, you know, as we continue to do for this show and for blog posts.
@nevillehobson (1:14:39)
to subscribe to your newsletter. I remember it. I remember it.
Shel Holtz (1:14:42)
Wrap, so I always had an image of something that was wrapped in one way or another, just to play off of that whole wrap concept. And I was always out there searching images for something that was wrapped. It was trees that were wrapped and buildings that were wrapped and vehicles that were wrapped. And I just grabbed them and put them at the top of my newsletter without giving any thought to where they came from. If I were doing that today, I would definitely have AI
produce an image of something wrapped, because I certainly didn’t have a budget to pay for photography. The only reason I was able to do that was because they were online, but it was not right. Even though most of those images were, I seem to recall, from the Creative Commons library of images, which is still available. But…
@nevillehobson (1:15:31)
I was similar, Shel, to you. I was doing exactly the same thing, but I always used to credit the source, thinking that was fine. I didn’t feel guilty. And even if a website where I saw a great picture had copyright, blah blah, all rights reserved, I’d say, well, give them a credit and a link to their website, I’ll be OK. I fell foul of that only once, back in the early days, about 2007. Reuters.
Shel Holtz (1:15:39)
@nevillehobson (1:15:57)
you are liable. There are no ifs or buts, there are no gray areas; it’s black or white. So I tend not to use a lot of AI-generated images. I subscribe to Adobe Stock, which is a good library. I use Unsplash; I pay for the premium version, which gives you pictures that aren’t free to take otherwise. I tend to go big on metaphor-type pictures, and there’s loads of stuff for that. But I’m also always looking for someone who says,
you know, Creative Commons, for instance, and that’s great. Flickr is still a good source. But if you’re in a large enterprise and you’ve got, you know, multiple things going on, that’s not really a practical approach, and you’ve probably got a license with Shutterstock or one of the big image licensing firms. So that’s right. But I think for small to medium-sized businesses, individual consultants and so forth, this is an easy trap to fall into. And again, it’s just remembering that
key advice point: if you don’t have permission or a license, you are liable. So don’t do it.
Shel Holtz (1:17:19)
I was just thinking about my subscription to Dreamstime, a stock photo service, and how people will say, I can tell if something was produced by AI. I can tell if something came from a stock photo service. Those stock photos do tend to have that stock photo stamp on them, you know, that look. Come on, the computer key that has the word that is…
@nevillehobson (1:17:32)
But I find it depends on the image.
You wouldn’t use stuff like that. I certainly don’t. I mean, you see them, the kind of happy smiling group in a business meeting. People are not like that in the workplace. Yeah, yeah. So don’t use those. No, don’t use those.
Shel Holtz (1:17:48)
I use AI to create images. I mean, we’ve changed our intranet platform, but the one we changed from required an image for every article, a hero image. And if it was about a project, that was easy. We had photos from projects. But if it was about a more abstract concept,
You know, either it was a stock photo service or we were stuck. But now I can come up with an image that just perfectly conveys the idea that we’re trying to get across. I remember I did one, an article on what goes into developing a proposal for a commercial construction project. We’re talking about, you know, two, three, four hundred million dollars of project cost.
And these are big deals and the proposals take weeks, months to put together. And I think there’s a lack of appreciation for what the business development team goes through when they’re putting these together. And for the hero image, I had a group of people who are clearly office workers, not out in the field building. And they’ve got their laptops and their tablets and their phones out. But in the middle of the table, rising out of the table is the building that they’re pitching. It was an ideal image.
Another one that I used this for is our April Fools’ article, which was always about something completely ridiculous. I did an April Fools’ article one year about a new sustainable building material: chewed bubble gum scraped off the undersides of desks and railings. Not only is it sustainable, but it’s minty fresh, that type of thing.
@nevillehobson (1:19:21)
Shel Holtz (1:19:30)
@nevillehobson (1:19:36)
Yeah. Yep.
In which case, the takeaway from all of this is: if you don’t have written permission or a license, you are liable. So get permission or a license, or use a generative AI approach. Pretty clear.
Shel Holtz (1:19:54)
Works for me. And that will bring us to the end of this episode of For Immediate Release, episode 478, the long-form episode for August 2025. Our next long-form episode will drop on Monday, September 29th. We’ll be recording that on Saturday, the 27th. And until then, Neville, have a great couple of weeks until we get together for our next short midweek episode.
@nevillehobson (1:20:37)
The post FIR #478: When Silence Isn’t Golden appeared first on FIR Podcast Network.
saying that too many people still think photos are fair game online when they aren’t. One even noted that their partner, photographer, often earns more from infringement settlements than from people licensing his images in the first place. So the original poster clarified that their agency normally does hire photographers and pays them fairly. This was an old blog post that predated the agency and they genuinely wanted advice rather than sympathy. Still, they accepted that it was a mistake that would cost them money.
So the takeaway here is clear. Crediting a photographer is not the same as having permission. Unless you have a license or explicit written agreement, you’re exposed to claims. And with agencies increasingly using bots and reverse image search to enforce copyright, the risk of being caught is only growing. For communicators, it’s a sharp reminder that visuals are not free to use simply because they’re online, and that professional practice means treating images with the same respect as written.
Shel Holtz (1:14:03)
@nevillehobson (1:14:08)
Shel Holtz (1:14:14)
I’m guilty. When I was an independent consultant, which I haven’t been for nearly eight years now (good God, time really does fly), I used to do an email newsletter. It was a wrap of the week’s news in communication and technology, you know, as we continue to do for this show and for blog posts.
@nevillehobson (1:14:39)
I used to subscribe to your newsletter. I remember it. I remember it.
Shel Holtz (1:14:42)
Wrap. I always had an image of something that was wrapped in one way or another, just to play off of that whole wrap concept. And I was always out there searching images for something that was wrapped. It was trees that were wrapped and buildings that were wrapped and vehicles that were wrapped, and I just grabbed them and put them at the top of my newsletter without giving any thought to where they came from. If I were doing that today, I would definitely have AI
produce an image of something wrapped, because I certainly didn’t have a budget to pay for photography. The only reason I was able to do that was because they were online, but it was not right. Even though most of those images were, I seem to recall, from the Creative Commons library of images, which is still available. But…
@nevillehobson (1:15:31)
I was similar to you, Shel. I was doing exactly the same thing, but I always used to credit the source, thinking that was fine. I didn’t feel guilty. And even if a website where I saw a great picture had a copyright notice, blah blah, all rights reserved, I’d say, well, I’ll give them a credit and a link to their website and I’ll be OK. I fell foul of that only once, back in the early days, about 2007, with Reuters.
Shel Holtz (1:15:39)
@nevillehobson (1:15:57)
you are liable. There are no ifs or buts, no gray areas; it’s black or white. So I tend not to use a lot of AI-generated images. I subscribe to Adobe Stock, which is a good library. I use Unsplash, and I pay for the premium version that gives you pictures that aren’t free to take. I tend to go big on metaphor-type pictures, and there’s loads of stuff for that. But I’m also always looking for someone who says,
you know, Creative Commons, for instance, and that’s great. Flickr is still a good source. But if you’re in a large enterprise and you’ve got multiple things going on, that’s not really a practical approach, and you’ve probably got a license with Shutterstock or one of the big image-licensing firms. That’s right. But for small to medium-sized businesses, individual consultants and so forth, this is an easy trap to fall into. And again, it’s just remembering that
key advice point: if you don’t have permission or a license, you’re liable. So don’t do it.
Shel Holtz (1:17:19)
a subscription to Dreamstime, a stock photo service, and people will say, I can tell if something was produced by AI, I can tell if something came from a stock photo service. Those stock photos do tend to have that stock photo stamp on them, you know, that look. Oh, come on, the computer key that has the word on it that is…
@nevillehobson (1:17:32)
But I find it depends on the image.
You wouldn’t use stuff like that. I certainly don’t. You see them, the kind of happy smiling group in a business meeting. People are not like that at the workplace. Yeah, yeah. So don’t use those. No, don’t use those.
Shel Holtz (1:17:48)
I use AI to create images. I mean, we’ve changed our internet platform, but the one we changed from required an image for every article, a hero image. And if it was about a project, that was easy. We had photos from projects. But if it was about a more abstract concept,
You know, either it was a stock photo service or we were stuck. But now I can come up with an image that just perfectly conveys the idea that we’re trying to get across. I remember I did one for an article on what goes into developing a proposal for a commercial construction project. We’re talking about, you know, two, three, four hundred million dollars of project cost.
And these are big deals and the proposals take weeks, months to put together. And I think there’s a lack of appreciation for what the business development team goes through when they’re putting these together. And for the hero image, I had a group of people who are clearly office workers, not out in the field building. And they’ve got their laptops and their tablets and their phones out. But in the middle of the table, rising out of the table is the building that they’re pitching. It was an ideal image.
Another one that I used this for is our April Fools’ article, which was always about something completely ridiculous. I did an April Fools’ article one year about a new sustainable building material: chewed bubble gum that has been scraped off the undersides of desks and railings. Not only is it sustainable, but it’s minty fresh, that type of thing.
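For anyone who wants to follow the same route, here’s a minimal sketch of generating a hero image programmatically, using the OpenAI Python SDK as one example of an image-generation API. This is an illustration, not the specific tool or prompt used for the images described above; the prompt simply mirrors the proposal-meeting example.

```python
# Minimal sketch: generate a hero image with an AI image API instead of
# grabbing an unlicensed photo from the web.
# Assumes: pip install openai, with OPENAI_API_KEY set in the environment.
# The model choice and prompt are illustrative, not the host's actual setup.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.images.generate(
    model="dall-e-3",
    prompt=(
        "Office workers around a conference table with laptops, tablets, "
        "and phones; the commercial building they are pitching rises out "
        "of the middle of the table"
    ),
    size="1792x1024",  # wide aspect ratio suits a hero image
    n=1,
)

print(response.data[0].url)  # temporary URL of the generated image
```

Anything generated this way sidesteps the licensing trap entirely, though it’s still worth checking your organization’s policy on AI-generated imagery before publishing.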
@nevillehobson (1:19:21)
Shel Holtz (1:19:30)
@nevillehobson (1:19:36)
Yeah. Yep.
In which case,
the takeaway from all of this is: if you don’t have written permission or a license, you are liable. So get permission or a license, or use a generative AI approach. Pretty clear.
Shel Holtz (1:19:54)
Works for me. And that will bring us to the end of this episode of For Immediate Release, episode 478, the long-form episode for August 2025. Our next long-form episode will drop on Monday, September 29th. We’ll be recording that on Saturday the 27th. And until then, Neville, have a great couple of weeks until we get together for our next short midweek episode.
@nevillehobson (1:20:37)