In the long-form episode for November 2025, Shel and Neville riff on a post by Robert Rose of the Content Marketing Institute, who identifies “idea inflation” as a growing problem on multiple levels. Idea inflation occurs when leaders prompt an AI model to generate 20 ideas for thought leadership posts, then send them to the communications team to convert them into ready-to-publish content. Also in this episode:
- A growing number of companies are moving branding under the communications umbrella, detouring around Marketing and the CMO. It’s all about safeguarding reputation.
- Quantum computing has been a topic of conversation in tech circles for years. Now, its arrival as a commercially viable product is imminent. Communicators need to prepare.
- AI’s ability to generate software code from a plain-language prompt has put the power to create apps in the hands of almost anyone. There are communication implications.
- Share some photos of yourself with an AI model (or with one of the companies that provide this as a service) and you can get an amazing likeness of yourself. But is it okay to use it as your LinkedIn profile?
- Research finds that leaders not only handle change management badly; the process also takes a toll on the employees who have to endure it. Communicators can help.
- In his Tech Report, Dan York reports on WhatsApp launching third-party chat integration in Europe; X is finally rolling out Chat, its DM replacement, with encryption and video calling; Mozilla has announced an AI “window” for the Firefox browser; WordPress 6.9 offers new features, collaboration tools, and AI enhancements; Amazon has rebranded Project Kuiper as Amazon Leo; and OpenAI says it has “fixed” ChatGPT’s em dash problem.
(We dispute that it’s a problem.)

- Why companies are merging communications and brand under one leader
- Will quantum be bigger than AI?
- ‘Vibe coding’ and other ways AI is changing who can build apps and how
- The market has spoken: Vibe coding is serious business
- The potential of vibe coding
- Everything Wrong with Vibe Coding and How to Fix It
- Vibe Coding: How to Avoid Over-Engineering and Build Smarter, Not Harder
- Mastering Vibe Coding: How to Get Better AI-Generated Code Every Time
- Why AI Thought Leadership Hurts Content Teams
- Is it Ok to use AI-generated images for LinkedIn Profiles?
- Your Staff Thinks Management Is Inefficient—They May Have a Point

The next monthly, long-form episode of FIR will drop on Monday, December 29.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Shel Holtz: Hi everybody and welcome to episode number 489 of For Immediate Release. This is our long-form monthly episode for November 2025. I’m Shel Holtz in Concord, California.
Neville Hobson: And I’m Neville Hobson in Somerset in England.
Shel Holtz: We have a jam-packed show for you today. Virtually every story we’re going to cover has an artificial intelligence angle. That shouldn’t be a surprise — AI seems to dominate communication conversations everywhere these days.
We do hope that you will engage with this show by leaving a comment, and there are many ways to do that. You can leave one right there on the show notes at firpodcastnetwork.com. You can even leave an audio comment from there: just click the “record voicemail” button on the side of the page, and you can record up to 90 seconds of audio.
You can also send us an audio clip — just record it, attach it to an email, send it to [email protected]. You can comment on the posts we publish on LinkedIn and Facebook and elsewhere, announcing the availability of a new episode.
There are just so many ways that you can leave a comment and we hope you will — and also rate and review the show. That’s what brings new listeners aboard.
As I mentioned, we have a jam-packed show today, but Neville, I wanted to mention before we even get into our rundown of previous episodes: did you see the study that showed that podcasting is very male-dominated as a medium?
Neville Hobson: I did see something in one of my news feeds, but I haven’t read it.
Shel Holtz: I heard about it on a podcast — I don’t remember which one — but I found it really interesting because the conversation was all about equity. And I’m certainly not in favor of male-dominated anything, but podcasting is not an industry where there is a CEO who can mandate an initiative to bring women into a more equitable position in podcasting.
This is a medium — let’s face it, even though The New York Times and The Wall Street Journal and other major media organizations have jumped into the podcasting waters — where it’s essentially a hobbyist occupation. You and I started this because we wanted to, and the tools are available to anybody who wants them.
I remember when we started this, one of the analogies we used was trying to walk into a radio station and say, “Hey, I want to have an hour-long show every day on public relations.” You’d be laughed out of the radio station because there’s not an audience big enough to support that kind of content. But here, if you can find an audience, you can have a podcast.
So I don’t know how you go about making this more equitable, but I found that to be an interesting perspective.
Neville Hobson: Yeah, I agree. There are some podcasts I’ve listened to that are hosted by women, although frankly they are few beyond the realm of what you might call “feminine-oriented” content. But there are a couple in our area of interest in communication. So they’re out there, but the majority, very much, are hosted by men.
Shel Holtz: Yeah. I mean, just in internal communications, there’s Katie Macaulay, and there are a lot of women doing communication-focused podcasts. Maybe if you’re going to look for somebody to make this a more equitable media space, it has to start with the mainstream media organizations that are producing podcasts — The New York Times, The Wall Street Journal of the world.
Neville Hobson: Yeah, over here you’ve got The Times and a few others who have women doing this. They are there in the mainstream media orientation, but the kind of homebrew content that we started out with, I don’t see too many.
Shel Holtz: Well, Neville, why don’t we move into our rundown of previous episodes?
Neville Hobson: Okay, let’s get into it.
So we’ve got a handful of shows. We’re actually recording this monthly episode about a week and a half earlier than we normally would. I think the reason for that, Shel, is something to do with U.S. holidays, your travel, and stuff like that.
Shel Holtz: Yeah, I’m going to be in San Diego next weekend, visiting my daughter and granddaughter because they’re not able to come up here for Thanksgiving. And then the next weekend is Thanksgiving weekend. So that’s why this is early this month.
Neville Hobson: Right. Okay, that explains it.
So, not too many episodes since the last one, but they’re good ones, I have to say.
Before we talk about those, let’s mention episode 485, which was prior to the last monthly. We had a comment.
Shel Holtz: We had two that we didn’t have when we ran down this episode in our last monthly episode. The first is from Katie Howell, who says, “[Algorithms] already reward return visits over one-off reach, and the clever brands are catching up. If your brief still says ‘go viral,’ you’re chasing a metric that won’t help you keep your job. Repeat engagement with the right people is the proper goal. Less glamorous, miles more useful.”
And Andy Green says, “Good clarification over strategies, but you also need to recognize viral — also known as meme-friendly — is at the heart of effective communications. Also greater recognition of the impact of zeitgeist. Check out Steven Pinker’s latest book, When Everyone Knows That Everyone Knows…”
Neville Hobson: They were on LinkedIn, I think, weren’t they? That’s where most of them come in.
So, to the ones we did: there’s the monthly episode for October, published on the 27th of October. The lead story we focused on in the headline was “Measuring sentiment won’t help you maintain trust.” There were five other topics, including an interesting one about Lloyds Bank, whose CEO and executive team are learning AI to reimagine the future of banking with generative AI.
We also talked about crisis case studies, in a Provoke Media piece titled “Conduct, culture, and context collide: three crisis case studies.”
Shel Holtz: Yeah, they did 13 or 14 case studies. It was a very interesting article, so we highlighted a couple. And there was more content there too.
Neville Hobson: Episode 487, we published on the 5th of November. This was a really interesting discussion. You and I analyzed and discussed Martin Waxman’s LinkedIn post about slower publishing, deeper thinking, better outcomes — a pivot he’s made with his business and his newsletter.
He left a number of comments, but on the show notes post he left a long comment that was great. We don’t normally get comments on the show notes, so thank you, Martin.
Shel Holtz: Yeah, there were several comments from Martin. I’m going to run through these. He said, “Thank you for having me as a virtual guest once-removed on the episode, Neville. I just listened today and enjoyed your and Shel’s take on my post. You gave me a fresh perspective and I was honored and thrilled to be a conversation topic. And thanks to both of you for holding up the comms podcasting torch all these years and having a lot of fascinating and insightful ideas to share.”
You replied. You said, “Thanks so much, Martin. It was our pleasure. Your post struck a chord with many of us who feel the pace accelerating. It was a great springboard for our discussion, and I’m glad our take offered something new in return. Slowing down to think more deeply about how we use AI feels like the most human move we can make right now.”
But Martin also posted on his own LinkedIn account — and this isn’t short, so bear with us, everybody, as I read through this because I think it’s worth sharing:
“As the first and longest-running communications podcast — and one I’ve been listening to for a long time — this meant a lot. As I listened and heard Shel and Neville’s take on my observations, I gained a new perspective, one I didn’t see when I was writing and revising my post.
“Something I didn’t mention out loud is that it’s been getting more and more difficult to come up with fresh ideas on where AI fits in marketing and communications and the various implications around that, the kind that inspire a person to write. Like social media, it feels like we’ve tipped past the point of saturation.
“As Shel said, we’re now getting drenched by the all-too-familiar commentary and quasi-expert advice swirling around our feeds. That certainly doesn’t diminish the utility of AI or using it where it helps. And I appreciate Shel’s view on how AI helps speed up doing the good-enough tasks that are inherent in all work, to concentrate on the things you want to spend more time on.
“I could also relate to Neville’s comments about saying no to projects that don’t excite you so you can focus on the ones that do. And yes, the three of us are all fortunate to have reached that stage in our careers when we have a little more freedom to pick and choose. I also realize that many people aren’t in that situation.
“As someone who has spent my entire career writing, it’s exciting and a bit frightening to wonder what I’m going to write about next. Yet there’s energy in uncertainty. So thank you to Shel and Neville for having me back as a guest, albeit one who didn’t have to press record.”
Neville Hobson: Really, really super comments that Martin left. Thank you, Martin.
And then our final one before this episode, 488, we published on the 10th of November. I enjoyed this discussion a lot — about Coca-Cola’s generative AI Christmas video that they have done before, but this one got rid of all the people; it was full of bunny rabbits and sloths and all sorts of stuff and those red trucks.
There were plenty of opinions out there, ranging from “What a creative and technical masterpiece this is” to “Utter AI slop.” So we were quite impressed with it and stood back to look at what they were doing rather than being judgmental in any shape or form. But there were plenty of comments, and we had at least one we should mention, right?
Shel Holtz: Yes, from Barbara Nixon, who said, “Thanks for sharing this. I’ll use it as a basis of discussion in my PR writing class next week.”
Neville Hobson: That’s cool. So that’s the content leading up to this one. And of course, now we’re in the November episode, which kicks off the next cycle of reporting for the next edition, when I’ll be able to talk about what we’ve done since this one.
Shel Holtz: That’s right. And I also want to let everyone know that there is a Circle of Fellows coming up. I would be reporting on this if we were recording at the normal time of the month toward the end of the month, but it hasn’t happened yet.
It is coming up on November 25th, Tuesday instead of Thursday, because Thursday that week is Thanksgiving. So it’s happening at 6 p.m. Eastern Standard Time on Tuesday, November 25th. This is episode 122, and the topic is “Preparing Communication Professionals for the Future.”
It’s a larger-than-usual panel — there are five Fellows instead of four. It’s going to be a good discussion. I think the future — obviously AI factors in here, I think quantum computing does too, as we’re going to talk about shortly in this episode — but also changes in business trends. The zeitgeist is changing, and politics is going to have more of an influence on business. All of these are things that I’m sure we will be discussing.
We look forward to having you join us for that. Of course, if you can’t be there to watch it in real time, it is available both as a video replay on YouTube and as an audio podcast that you can subscribe to right here on the FIR Podcast Network.
And we will now jump into our content for the month — but not until we run this ad for you.
Neville Hobson: So, one of the most interesting shifts happening inside large organizations right now is the move to combine communication and brand under a single leader. We’re seeing this across companies as varied as IBM, GM, Anthropic, and Dropbox, and the trend is accelerating.
According to research cited by Axios, CCO-plus roles — where communication leaders take on brand or marketing responsibilities — have risen nearly 90% in recent years.
What’s driving this? The short answer is volatility, says Axios. AI is changing how people discover what a company stands for, and reputational storms seem to ignite faster and with far greater consequences. A marketing decision that once would have sparked a debate in a meeting room can now become a political flashpoint within hours. That forces the question of who should really own the brand narrative.
Communication leaders are increasingly being seen as the natural fit. They understand stakeholders. They have a risk mindset. And they are often the ones who know how to navigate the cultural and political sensitivities that shape reputation today.
In other words, this is not just about messaging. It’s about trust, judgment, and the ability to connect what a company says with how it behaves. There is still a need for specialist marketing functions, but for many companies, brand stewardship is shifting toward the people who are closest to reputation.
And in a world where AI can bend or reinterpret a narrative in seconds, bringing communication and brand together under one trusted voice feels less like a structural tweak and more like a survival strategy.
So the bigger question for us is what this means for the future of the communication profession. Are we seeing the emergence of a new kind of leadership role — or simply a correction to reflect the reality that brand and reputation have always belonged together?
Shel Holtz: That’s a very interesting trend, and I don’t disagree with it in general. If you look at the big picture, it does make sense. Public relations is all about reputation; it’s all about maintaining relationships with the various stakeholder audiences.
So, as a communicator, you tend to have a big picture. You understand what the reputation is among investors, among the local communities in which your organization operates, among the media, for example, among your customers.
Marketing is all about driving leads for sales in most industries, and they don’t necessarily have that big picture. So it makes sense. And to bring marketing into the communication fold means that you get the benefits of the things that marketing is exceptional at — and branding is one of those things.
Most communicators aren’t involved in developing the trademarks for the organization and the logos and the like — that tends to be marketing, and for good reason. But to have that within the purview of communications enables that chief communication officer-plus to ensure that what’s coming out of that operation aligns with and is consistent with the things that we know drive the reputation of the organization.
You can find some gotchas maybe in the outputs that they’re developing that they wouldn’t have thought of.
That said, I know in my industry, which is commercial construction, the marketing department is not doing traditional marketing. There’s not a lot of effort to drive leads. The relationships with prospective clients are driven through other means. It’s getting to know people through industry contacts and the like. It’s building those personal relationships with developers and owners and the like.
I’ve just celebrated my eighth anniversary where I work, so I’ve seen this in play for long enough to understand that it’s right and it works very, very well.
In my company, the marketing department is also the steward of the brand, and I am fine with that because I’m mostly doing internal communications. I’m also responsible for PR, as far as it goes — media relations and the like — but I don’t have that relationship with the client base. Not at all. It’s rare that I meet a client. Usually I’ll shake hands at a groundbreaking or something like that if I’m out covering it, but by and large, this is something that the marketing department does.
So I’m inclined to say I agree with this, but it depends. And I think there are probably exceptions, and my industry is probably one of them. I’m part of a group called the Construction Communicators Roundtable — 18 or 20 commercial construction companies represented there — and I get the impression that it is the same with all of them. So this may be an industry-by-industry thing.
I don’t disagree with it, but I do think it depends.
Neville Hobson: “I think it depends” is definitely the start point to the discussion on this, I would say. My thought when I read the article — and the reason I included it in the topics for this episode — was precisely that: it does depend.
I’m not sure it is strictly industry-by-industry, meaning that this industry is entirely this way and this one isn’t. It’s probably a mixture. But there are some compelling reasons, I think, why it makes sense to do this even with the argument you’ve made for not doing it, let’s say.
For instance, one interpretation I have from Axios’s research is that the argument is: brand is no longer just a marketing asset. It’s a reputational construct shaped by every stakeholder interaction. That squarely leans toward understanding the impact on reputation — particularly in that communicators are the ones for that, not the marketing person.
It also speaks to the need for a trusted, politically aware leader. This combined role, according to Axios, is shaped by the reality that brand crises are increasingly political. Organizations want leaders who bring judgment, sensitivity, and crisis literacy. And that, in my view, leans much more into the communication person than the marketing/brand person.
And the one I think that is most interesting is the broader reinvention of the communication function. Sorry, marketing folks — this is about communication. The trend echoes the ongoing elevation of communicators as strategic partners rather than support functions, reinforcing the argument that communication is increasingly a governance role, not just an executional one.
Now, that argument would apply to marketing too, but not in quite the same way. Taking into account all of that — particularly the connection with reputation, the political awareness, and I like this term “crisis literacy,” fair enough, it’s a good way of describing it — this is more likely to fit in the bucket where the communicator sits than the marketing one.
And by the way, I’ve seen a number of people’s job titles — communication and brand. And I saw someone recently on LinkedIn who is a Chief Communication Officer and Director of Brand and Reputation, playing exactly to what Axios’s point is.
So yes, “it depends,” but I think there’s a compelling reason why, if you’ve got to pick one person, it should be the communicator.
Shel Holtz: Yeah, and again, I don’t disagree. And still I am untroubled by the fact that marketing owns the brand where I work. And I should clarify: they’re not engaged in traditional marketing. This is not a marketing department like at, say, Procter & Gamble or Coca-Cola. They’re engaged primarily in business development.
So they’re putting together the proposals, they’re responding to the RFPs, they’re preparing the members of the team to go out and be interviewed by the owner or the developer who’s selecting the general contractor. So it is B2B. And, I mean, if they’re not concerned about the organization’s reputation, nobody is.
So this is why I say it depends.
The other point I will make is that even though we are not part of the same reporting structure, we’re pretty well joined at the hip. The VP of Marketing and I talk all the time. He’ll call me into his office to run stuff by me, I’ll run stuff by him. We meet regularly. We have a marketing director right now we are working with incredibly closely to develop a year-long recruiting campaign. We’ve won a ton of work and we need to staff up to support that work.
We’re going to take advantage of her expertise in branding and in marketing to recruits, and we’re going to take advantage of our expertise and the things that we do well. And that collaboration is probably going to produce a much better result than if it had just been one of us or the other of us.
So at the end of the day, I don’t think it matters who has the highest title, as long as everybody’s working together, they’re aligned, and they’re working toward the same goals. So again, I don’t disagree with the sentiment and the underlying foundation of the point that was made in this piece, but I think there are organizations where that is being done without having the communicator necessarily at the top of the food chain.
Neville Hobson: That’s the place where I think the communicator should be — which, of course, plays to the decades-old desire expressed by many in our profession that the communicator needs a seat at the top table.
I guess the concluding point I would say is: anyone listening to this discussion who occupies that joint function and would care to share his or her thinking about all of that — we’d love to hear a comment.
Shel Holtz: Yeah, a seat at the table, yeah.
We would always love to hear comments.
If you feel like AI is sucking all the oxygen out of the room, you’re not wrong. It seems like it was just last week we were talking about blockchain and the metaverse and a slew of other technologies. But while we’ve been fine-tuning prompts and governance, another technology has been quietly moving toward the comms agenda — and that is quantum computing.
The BBC recently framed it as potentially as big, if not bigger, than AI. It’s time to start paying attention to quantum computing and how it matters to communicators.
A quick primer: classical computers process bits, zeros and ones. Quantum computers use quantum bits, known as qubits, which can be zero and one at the same time. That’s called superposition.
If you read the book or watched the Apple TV series Dark Matter — I did, it was really good — you know about superposition, and it has been the foundation of a lot of other science fiction: this idea of being able to be in two places at the same time, quantum superposition.
Second, qubits can influence each other through something called entanglement: a phenomenon where two or more qubits become linked, sharing a single quantum state, so they cannot be described independently even when separated by vast distances.
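For anyone who likes the notation, both ideas can be written compactly. This is standard textbook shorthand, not something quoted in the episode:

```latex
% Superposition: a qubit holds 0 and 1 at once, weighted by amplitudes
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]

% Entanglement: a Bell state of two qubits sharing one quantum state;
% neither qubit can be described on its own
\[
  \lvert \Phi^{+} \rangle = \tfrac{1}{\sqrt{2}} \bigl( \lvert 00 \rangle + \lvert 11 \rangle \bigr)
\]
```

Squaring an amplitude gives the probability of measuring that value, which is why the two weights must sum to one.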
In some problem classes — chemistry, simulation, optimization, factoring — this enables speed-ups that make the impossible suddenly possible. The machines we have today are still noisy, error-prone. But the security world is acting as if a capable quantum machine will arrive within the planning horizon, which is why standards bodies and platforms are shifting now.
You’ve already seen early signals in consumer tech: post-quantum cryptography, warnings from cybersecurity experts, and quantum-resistant messaging from big platforms. Quantum-resistant messaging uses new encryption algorithms to protect communication from both current and future quantum computers. It’s also called post-quantum cryptography and aims to safeguard data by using mathematical problems that are believed to be difficult for both classical and quantum computers to solve — unlike current algorithms, which can be broken by a powerful enough quantum computer.
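To make the cryptographic stakes concrete, here is a small classical sketch of the number theory at the heart of Shor’s algorithm, the quantum algorithm that threatens RSA-style encryption. Everything below runs fine on an ordinary computer for toy numbers; the order-finding step is the part a quantum machine could do exponentially faster, which is what makes real key sizes vulnerable. The function names are ours, purely for illustration:

```python
from math import gcd

def find_order(a, n):
    """Smallest r > 0 with a**r % n == 1, found by brute force.
    This is the step a quantum computer speeds up exponentially."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_style_factor(n, a):
    """Classical sketch of the number theory behind Shor's algorithm."""
    if gcd(a, n) != 1:
        return gcd(a, n)        # a already shares a factor with n
    r = find_order(a, n)
    if r % 2 or pow(a, r // 2, n) == n - 1:
        return None             # unlucky choice of a; try another
    return gcd(pow(a, r // 2) - 1, n)

# 15 = 3 x 5 is the classic toy example
print(shor_style_factor(15, 7))  # → 3
```

For a 2048-bit RSA modulus, `find_order` would take longer than the age of the universe classically; a capable quantum computer could do it in hours. That asymmetry is why “harvest now, decrypt later” is treated as a present-day risk even though the machines don’t exist yet.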
In fact, I’m reading a really interesting book right now. It takes place about 150 years in the future, and everything that we today thought was encrypted and nobody would ever see — they’re seeing it all because they have access to quantum computing.
These aren’t just niche issues. They tie directly into how you tell stories, how you prepare for crises, and how you work.
So what does this mean for communicators beyond asking IT if we’re on top of it? I’m going to run through three buckets, and then we’ll tie in how quantum and AI overlap, because that’s where things get especially interesting.
First, storytelling and public understanding. Quantum is famously hard to explain, which makes it vulnerable to hype and confusion. Your job is to translate it without overselling it. “Quantum-safe” doesn’t mean “quantum-proof,” for example, and timelines remain uncertain — we don’t know when you’re going to be able to go to your local Best Buy and get a quantum computer.
You’ll want to build narratives now that help your audience support the idea that your organization is looking ahead, not getting caught flat-footed. Use everyday language. Say, “We’re updating encryption today to protect the data of tomorrow.” That works better than “We’re quantum resilient.” You’ll gain credibility when you help people understand what’s changing and why they should care.
Second, this is all about crisis preparedness and trust. If your organization holds long-lived sensitive data — health records, intellectual property, government contracts — then you need a communications plan for cryptographic agility. That means plain-language FAQs explaining why you are updating encryption, updates to stakeholders as you migrate to approved standards, and scenario planning for legacy data exposure.
Quantum computing introduces a new dimension of risk: the idea that what you publish or promise today could be decrypted or exposed years later. In a crisis, you’ll need to be ready to say, “We anticipated this risk, and here’s what we did.” That anticipatory positioning goes a long way toward preserving trust.
Third, it’s about how communicators can use quantum — and quantum plus artificial intelligence — in our work. Eventually, you’ll have new tools. For example, quantum computing may be able to provide far more advanced modeling of message flows, audience networks, and sentiment behavior, letting you identify optimal outreach paths or refine campaigns under dynamic conditions.
You could simulate scenarios in complex environments more quickly, refining your messages in a what-if matrix classical tools can’t easily handle. These scenarios might include things like stakeholder cascade effects, social media virality, and supply chain disruption.
And as quantum key distribution and quantum-resistant encryption mature, you’ll be in a position to tell audiences, “Our channels use the latest quantum-secure messaging,” which becomes a differentiator from your competitors.
Then there’s the overlap with AI. Quantum computing will amplify AI’s capabilities, helping it crunch deeper patterns faster and handle volumes of data plus complexity that classical systems struggle with. For communicators, that means the analytics layer you rely on — for sentiment, for influence mapping, for risk modeling — will evolve.
AI plus quantum means faster insights, more complex scenario modeling, and new ways to anticipate issues before they explode. So when you describe your comms strategy, you might say, “We use advanced modeling powered by AI today, and we’re tracking quantum-enabled tools so we’re positioned for the next wave.”
The fact is, quantum isn’t just a side story to AI — it’ll reshape AI. Research indicates that quantum computing and AI together massively increase computational speed and breadth of analysis. For example, quantum can remove some of the bottlenecks in data size, complexity, and simulation time that limit today’s AI systems.
For you as a communicator, that means three practical things.
First, what you pitch as “AI-enabled” today will evolve into “AI-plus-quantum-enabled,” and part of the story you tell stakeholders is, “We’re future-proofing so we don’t fall behind.”
Second, monitoring of reputational risk must extend to both AI misuse and quantum misuse — encryption break, advanced surveillance, things like that. The combination raises the bar for your “what could go wrong” list.
And third, your metrics and narrative signals will shift. When AI and quantum intersect, you’ll need to help people understand not just faster insights, but insights from a new class of computing. That means simplified metaphors and careful framing. The message no longer just flows faster — the infrastructure itself is changing. If AI rewrote the message, quantum will test the envelope it travels in.
You don’t need to wait until quantum has fully arrived. You need to start telling that story now. You need to show that your organization is looking ahead, educating stakeholders, and building trust today so that when the change arrives, you’re not scrambling.
Neville Hobson: Well, that’s heavy stuff, Shel.
It’s interesting how Zoe Kleinman, the BBC journalist who wrote this piece, started her article. She says, “You can either explain quantum accurately or in a way that people understand, but you can’t do both.” So I think this is very much in the “accurately” bucket, this discussion.
Shel Holtz: Isn’t it, though? I strive for accuracy.
Neville Hobson: Yeah, and she notes as well, it’s a fiendishly difficult concept to get your head around. I couldn’t agree more. I’ve tried to thoroughly understand this — and maybe I should get rid of the word “thoroughly” because I can’t thoroughly understand it. I need to understand the bits that matter.
So to me, on the one hand I’m thinking, “Fine, this has not arrived yet,” but your point about “get prepared” is a valid one. Although I wonder how many people are going to say, “Well, it hasn’t arrived yet, so what are we going to do? How am I going to do this?” That’s where communicators come in, by the way.
But I think she gives a great example that you really can grasp, talking about how quantum computers could one day effortlessly churn through endless combinations of molecules to come up with new drugs and medications — a process that currently takes years and years using classical computers.
She says to give you an idea of the scale, in December 2024 Google unveiled a new quantum chip called Willow, which it claimed could take five minutes to solve a problem that would currently take the world’s fastest supercomputers 10 septillion years to complete — that’s 10 with 24 zeros after it.
I mean, just thinking about the number, you cannot imagine how long that would be. The sun would probably have died before it gets to it. This would do it in five minutes.
So the article then talks about what this paves the way for — personalized medication, all that kind of stuff.
I don’t think we’re at the stage yet where you could equate this to, “Okay, in your average business, all the business processes they do will be materially impacted by this in a very powerful way.” We’re not there yet, because you can’t explain it like that, I don’t think — hence these very big-picture examples.
Everything I read about quantum talks about this: personalized medication, chemical processing, quantum sensors to measure things incredibly precisely. That’s all coming. It’s not here yet.
So it’s interesting. The examples they give are all wonderful, I have to say, but the mind boggles — mine certainly does. There’s so much information on this that you wonder: what on earth do you pay attention to in order to get a handle on how it’s going to affect my industry, my company, my job, how we live, my family — all these things? No one’s got that yet, and that’s probably what people want to know — but you can’t know yet.
Shel Holtz: No, but it’s close enough that we need to start preparing for it and we need to start communicating about it, especially if you’re in an industry that is computing-intensive in its work. And I’m not talking about customer relationship databases and things like that; I mean in your R&D, for example. And certainly the cryptographic implications are severe on the risk side.
So being ready for that now, rather than scrambling to get ready once it’s actually here, is, I think, an imperative.
You do need to be a physicist, or physics-adjacent, to really understand this. But I’ll be honest: science has never been my thing. Science and math were my worst subjects in school; the humanities were where I rocked. And I struggle to understand the zeros and ones in fundamental computing — the opening of the gates and all that.
But you know what? I don’t need to know how my carburetor works in order to drive my car. The fact is that these tools are coming, and understanding how they work or not, people are going to be able to use them.
And as I say, it’s close enough — probably within the next 10 years companies are going to be able to buy quantum compute time, if not a quantum computer outright — that we really need to start thinking about it. We really need to start preparing for it, especially from a security standpoint.
Neville Hobson: Yeah, I get that. I think, though, that people — communicators, this is our area of interest and focus — would need to know: how are we going to do all this when so much of it is theory?
They’re talking about — I’m just looking at the piece here — how quantum could break current forms of public-key encryption. Hot topic: security of information. It says here that this awaits a truly operational quantum computer, which is years away. But as the article notes, quoting a cybersecurity expert, “The threat is so high that it’s assumed everyone needs to introduce quantum-resistant encryption right now.” That’s not the case, though, so there’s probably a lot of hype.
Although it mentions earlier — and I think you might have mentioned — that there’s even more hype about AI. So this was the king of hype before AI emerged.
The prediction I’m reading is that an operational quantum computer could be around the year 2030. So that’s five years away. Okay, in that case, now is the time to get prepared for this, then.
Shel Holtz: That’s pretty fast. And there are operational quantum computers in research labs right now.
They’re not commercially viable yet, but as you say, the projections run anywhere from five to 15 years. That’s fast; that’s soon.
When we were talking a lot about the metaverse, we were saying the fully operational metaverse was 10 years away — you need to start thinking about that now. Same thing here.
Neville Hobson: Did you notice the concluding paragraph? This is actually where it kind of fits in with the current status of alarm and concern from a political point of view about what certain countries are up to — China, which it calls out as an example.
It says the GCHQ — that’s the UK’s intelligence cyber agency — says that it’s credible that almost all UK citizens will have had data compromised in state-sponsored cyber attacks carried out by China, with that data stockpiled for a time when it can be decrypted and studied, and that you need a quantum computer for that.
For instance, the economic headline in the UK right now — an unexpected dip in GDP — is attributed specifically to the cyberattack on Jaguar Land Rover, the automaker. That attack cost nearly two billion in losses for the company and its supply chain.
So this brings it home to you: what are they doing with the data? They can’t do much with it until they’ve got the computing power to be able to. So these things add to… I’m not sure it really adds to understanding — it adds to confusion, adds to worry, probably.
So the task is helping people in organizations, in this context, understand why we need to be prepared for this. And it needs, I think, to be presented in terms they can more readily grasp and understand than is currently the case in what I’ve seen people say about quantum computing.
This is a good article, by the way, and I think Zoe Kleinman did a really good job. I’ve read another article — I think it was from Microsoft — where you truly need to have a degree in advanced physics just to understand the article. These are not designed for your average Joe to grasp. There’s a gap.
Shel Holtz: Absolutely. But I think the role of the communicator here isn’t to help people understand how quantum computing works any more than it is with classical computing. Our job is: what are the benefits and what are the risks? What do we need to prepare for? Where do we need to start building that foundation so that when it arrives, we’re ready and not suffering consequences or falling behind our competitors?
So I think that’s the role of the communicator: to say, “Look, you don’t need to understand how it works. These are the things that it’s going to be able to do, and these are the implications for us and our business and our reputation and our competitiveness.”
Neville Hobson: So I see an opportunity here for someone like Lee LeFever to come up with one of his really cool videos that explains in simple terms what quantum computing is.
Shel Holtz: I’ve got to go find myself a good explainer video — see if there is one out there that does a really great job of it. There probably is. Maybe Lee has, for all I know.
Neville Hobson: So, let’s continue with another computing topic — not really connected to quantum, but a similar theme. We’re going to talk about vibe coding and what it means for communication leaders.
Every so often a piece of technology comes along that seems small on the surface but signals a much bigger shift underneath. Vibe coding is one of those moments.
On paper, it sounds like a technical trend: using AI to build software by simply describing what you want in natural language. No coding, no syntax, no engineering background needed. You just talk, and an AI generates a working prototype. Sounds wonderful.
In early November, it was named Word of the Year by Collins Dictionary. Of course it’s two words, but who’s counting? Anyway, it was chosen to reflect the evolving relationship between language and technology and how AI is making coding more accessible to a wider range of people.
This is not a coding story; it’s a future-of-work, future-of-skills, and future-of-organization story.
What makes this interesting for us is not the code; it’s what happens when anyone in an organization can create digital tools on the fly. A business analyst can build a workflow. Someone in HR can automate a process. A communicator can sketch out an app for an event or a campaign — all without waiting for IT.
Suddenly the boundary between people who solve business problems and people who write software starts to blur.
This has real implications for culture and communication. It empowers people in new ways, but it also introduces new risks. AI-generated code is fast, but it’s not always secure, compliant, or ready for production — or even necessarily working properly.
And as we know, when technology becomes more accessible, organizations need a stronger narrative on how to experiment safely, what the guardrails are, and when creativity gives way to rigor.
There is also a shift in skills. According to Cognizant, in the age of AI the most important capability is moving from problem-solving to problem-finding — being able to frame the right questions, articulate needs clearly, and work collaboratively with both humans and machines. That is a communication skill at its core.
So the story here isn’t about developers being replaced or apps being magically created. It’s about how work changes when AI becomes a conversational partner.
And it raises a bigger question: if every team can now build its own tools, what role do communicators play in shaping culture, governance, and the shared understanding of how organizations innovate? Big questions there, Shel.
Shel Holtz: It is a big question. There are big questions there.
I’ve been doing a lot of reading about vibe coding and listening to a lot of podcasts that talk about it. I have been so excited about it, I’ve been working on a proposal — completely unsolicited, no one at my company knows it’s coming — but it is for a vibe-coding training program for project engineers: the entry-level people on the building side of our industry.
Because right now, if they need something — say a dashboard, an app that creates a dashboard that pulls data in from various sources, or that allows you to plug data in and produce charts and graphs and the like — they have to open a ticket, and IT has to create it if they have the time. They’ll prioritize based on the urgency of the things that they’re working on, and you may not get what you want, and it may take a long time.
Now you can just do it yourself.
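To make that concrete, here is a purely hypothetical sketch (all of the function names, categories, and figures are invented for illustration) of the kind of small, self-serve utility Shel is describing: a few lines of Python that total costs by category and render a quick text chart, the sort of thing a project engineer might vibe-code for themselves instead of opening an IT ticket.

```python
# Purely illustrative: a tiny "dashboard" utility of the kind a
# project engineer might vibe-code. All names and numbers are invented.
from collections import defaultdict


def summarize(records):
    """Total the 'cost' field per 'category' across a list of dicts."""
    totals = defaultdict(float)
    for row in records:
        totals[row["category"]] += row["cost"]
    return dict(totals)


def render_bars(totals, width=40):
    """Render totals as a simple ASCII bar chart, one line per category."""
    peak = max(totals.values()) if totals else 1
    lines = []
    for name, value in sorted(totals.items()):
        # Scale each bar relative to the largest total.
        bar = "#" * max(1, round(width * value / peak))
        lines.append(f"{name:<12} {bar} {value:,.0f}")
    return "\n".join(lines)


if __name__ == "__main__":
    data = [
        {"category": "Concrete", "cost": 120_000},
        {"category": "Steel", "cost": 90_000},
        {"category": "Labor", "cost": 150_000},
        {"category": "Steel", "cost": 30_000},
    ]
    print(render_bars(summarize(data)))
```

Anything beyond a toy like this — live data sources, authentication, production deployment — is where the review and approval process discussed below becomes essential.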
So I’m very excited about this, especially given the threat that entry-level jobs across the business world are facing from AI. They need to be redefined, because entry-level people have to be part of the mix — how do you develop those who are going to move into higher roles in the organization if they don’t start somewhere?
So it’s a rethinking of what those roles are, and enabling these people to create their own apps is one of them.
But now they would still have to submit that app for approval, because if you don’t have expertise in coding you may have done things that you’re unaware of that can create certain risks or problems, or it may stop working at some point. All types of things could go wrong.
I think vibe coding without any foundation in coding is fine for some very, very simple things. I think the more complex it gets, the more of that foundation you need.
While you were talking, I went and looked at what Christopher S. Penn had to say about it, because I’ve heard him talk about it a number of times both in his writing and in the video podcasting that he does.
He thinks that you do — if you’re going to be doing this in a serious way — need to have an understanding of the software development life cycle.
At a minimum, this is what he says: you have to be able to provide detailed instructions and guardrails to the machine, and you have to know what you’re doing to prevent poor results. A vague code request is like asking an AI to write a novel from scant information: it would just result in slop, right? Same with code. You have to give it a precise enough series of prompts to get the output you want.
You need to know not only when the solutions are right or wrong, but also whether they’re right or wrong in the context of the work.
He says best practices for vibe coding require a structured approach that relies heavily on planning, which maps to the Trust Insights Five P Framework — which is really good, go look it up at trustinsights.ai.
This structured method is essential to vibe-code well and includes steps like: spending three times as much time in planning as in writing the code, creating a detailed product requirements document and a file-by-file work plan, and integrating security requirements and quality checks.
And then, of course, if it doesn’t work right the first time, you can keep iterating — but you should have some understanding of debugging and know somebody who does in order to get it to do exactly what you want.
So I think for very simple stuff, yeah, you can just tell it, “Please create me an app that does X, Y, or Z.” But the more complex it gets, the more of a grounding in coding you’re going to need.
Neville Hobson: So that’s where guardrails and guidance and policies and procedures come into play.
But you know what’s going to happen — we saw it with ChatGPT — is that people are going to get hold of the tools to do this and just go ahead and do stuff themselves. That’s what’s going to happen, with the risks inherent in doing that for everything you’ve outlined.
I look at what I’ve done that I suppose you could call vibe coding — what I did on my websites, which run on Ghost. It’s not like WordPress, where you take a theme you want to customize, build a child copy, and it’s dead easy. Not with Ghost; you’re into the code.
So I used a combination of tools, including Microsoft’s excellent VS Code, and also Docker — astounding, running on Windows. But my “coding partner” was ChatGPT, running GPT-5. I prompted it with what I wanted to do, and it wrote the code.
We tested it and nothing fell over, except for a few things like CSS styling — some dependencies didn’t work for some reason, so when it did fall over, we went back and fixed it.
I was amazed, constantly, by being able to talk to the chatbot in plain English about what I wanted to achieve, and it then proposed how we find the solution to doing that, and then it wrote the code.
I couldn’t do that because I don’t know the code. I would have had to hire a developer or, if I were doing this properly in an organization, file a ticket to get support. I did this myself over a weekend, and I’m still truly amazed. It was an offline copy — all working, everything worked. We packaged it, uploaded it to the Ghost server, enabled it, and the live site just worked. All the changes were perfect; nothing was wrong by that point.
Now, that’s not necessarily the same as building a website for an event, or an app for an event. That would be interesting to see how that would work. So there are levels to all of this.
I think it’s finding the balance: on one hand, you have to follow rigid guidelines if you want to build X for your role in the company; on the other, if you want to do something like a personal website or an app, the guidelines are not so rigid — but they’re still guidelines.
Vibe coding is designed for a world where content creation and data analysis are becoming everyday skills, as is software creation. Yet I don’t disagree with your take on training at all, nor with what you quoted Chris Penn saying — they make complete sense to me.
But the reality, particularly in enterprise organizations and even more so in small- to medium-sized businesses, is: you’re just going to give it a go and see what happens. Risks and all.
Shel Holtz: Sure. There’s a difference between doing something very simple, where you don’t need an understanding of the software development life cycle — you can just tell the AI, “Write me this app,” and if it’s simple enough you’re probably going to get something serviceable — and something more complex.
For the more complex stuff, you need to have a deeper understanding of the output you want, and you have to spend a lot of time planning so that you can give it the right information. It’s not that you sit back in your chair and say, “I think A, B, and C.”
You can work with the AI to develop that stuff, of course, but one thing I do at the end of virtually every prompt — not the really simple stuff like “How many Oscars did somebody win,” but the more complex prompts — is add, “Ask me questions one at a time that you need answered before you give me your answer.” Because it’s going to think of things that I haven’t thought of.
So yeah, I think my point is that whether you’re doing something simple that doesn’t require a lot of upfront work or you’re doing something more complex that does require a lot of upfront work, this is going to speed up the development of software immeasurably and have a big impact on how this gets done and by whom. That represents significant change in the current structures in organizations, I would say.
Neville Hobson: You’ve got it. So this is worth paying attention to as well. Get used to the phrase “vibe coding,” I would say.
Dan York: Greetings, Shel and Neville, and our listeners all around the world. This is Dan York, coming at you today from Los Angeles, California, where I have been attending an event by an organization called the Marconi Society that has been looking at internet resilience: basically, how do we keep the internet functioning in the light of disruptions of various forms?
Great conversations, great thinking. I look forward to talking a bit more about it in the future, when there are things worth sharing with our listeners. In the meantime, I want to talk about chat, specifically on two different platforms. First, if you’ve been paying attention for a while, you know that chat systems like WhatsApp, iMessage, Telegram, and Signal are each their own thing: you can only chat with people inside them.
Well, the European Union passed something called the Digital Markets Act, or DMA, which requires chat systems operating in the EU to roll out third-party integration in some way, so that other chat systems can interoperate with them.
We’re starting to see what this could look like with the announcement from Meta that it will very soon launch third-party integration between WhatsApp in Europe and two other messaging systems, BirdyChat and Haiket. If you haven’t heard of them, neither have I, and neither had the writer at The Verge who put the story together.
The point is that they’re trying to make it so chat systems can interoperate. We’ll have to see what this means. The important part to me was that WhatsApp is doing this in a way that preserves end-to-end encryption, so your data can’t be seen by other people when you’re using the messaging system, which for me is critical for the privacy and security of any conversation I’m having.
So that is preserved in the system; we’ll have to see where it goes. But speaking of end-to-end encryption: X, the platform formerly known as Twitter (which I don’t even use anymore), is rolling out a new system to replace what we’ve always called DMs, or direct messages, and I was pleased to see it.
The new system is called, brilliantly, Chat. It will have video calling, it will have end-to-end encryption, and it will have other features. It’s rolling out first on iOS and the web, and then it will come to Android and so on. So it looks interesting; there’ll be a new messaging component there.
For those who are still using X, stay tuned: your messaging system may be changing to something completely different. Next, Mozilla announced an AI window for Firefox. There’s been a slate of AI browsers — I think I talked about some last month — and there have been other announcements, but this one isn’t available yet.
It appears to be an interesting idea: a window, separate from your main browsing experience, that lets you engage with an AI assistant while you’re browsing. Stay tuned; again, we’ll have to watch. If you’re a Firefox user, this will be something you can work with as you go along.
Switching gears again: WordPress is coming up on its final release for 2025. It will be WordPress 6.9, with a target delivery date of December 2nd, and it’s got a couple of interesting things. A lot of the release is focused on new APIs for developers, performance improvements, and things like that, but there are two parts that will interest people listening to this podcast.
One is that it will introduce something called notes, or block notes, which work much like comments in Google Docs: if you’re editing a document, you can leave a note, and others can reply to it, resolve it, close it out, and so on.
This capability is coming to WordPress so that if you’re doing collaborative editing with other people on your team, you can leave notes — “I don’t like this text,” or “we should really include an image here” — and other people can reply and work with them.
All of this is visible only within the editor interface. One of the big pushes for WordPress right now is collaboration, and this is part of that: enabling you to work with your team and leave notes for each other. So stay tuned.
This is coming out December 2nd with WordPress 6.9. The other interesting piece in this release is the ability, very quickly and without a plugin, to change the visibility of blocks so that they’re visible in the back end, in the editor, but not on the front end. The important part is that if you’re testing things — developing a new interface or new pages and you want to try something out — you can get it all ready to go in the editor.
Then you can flip the visibility on, look at it, check it, and flip it off again if you want. Or, if you’re preparing for an announcement, you can have everything ready to go on a page and just toggle the visibility of the blocks. Yes, there are other ways you could do this inside WordPress, but this is a new way to work.
So those are the two features of the upcoming release I personally find interesting: toggling the visibility of blocks, and leaving notes when you’re collaborating with other people. Now, switching to something completely different: if you’ve been listening to me over the years, you know I’ve been following what are called low Earth orbit, or LEO, satellite systems, such as Starlink, and what they can do.
For about the last seven years, one of the competitors has been Amazon’s Project Kuiper. One of the challenges, first of all, is that they’ve had issues launching their rockets, but they’re getting there: they’re getting their satellites up, and they’re getting ready to offer service.
Another challenge has been that people haven’t known how to pronounce it. Is it Kiper? Keeper? Cooper? It was just an internal project name; it was never really meant to be the public name. So they’re solving this by calling it simply Amazon Leo.
What I find fascinating — first, kudos to them — is that now we’ll be hearing about SpaceX’s Starlink and about Amazon Leo. I find it rather clever, because of course people have been talking about satellite systems in LEO, which is low Earth orbit, a specific range up to 2,000 kilometers, or about 1,200 miles, above Earth.
So now when you talk about LEO systems, it will be: are you talking about LEO systems in general, or about Amazon Leo? Kudos to them for being clever enough to take that name and run with it. Stay tuned; more tech, more things coming.
With that, they’re gearing up to really launch this service and provide a competitor for Starlink. So especially if you’re in an area with poor internet access, this may be an option at some point soon. Finally, ChatGPT just announced something: it will no longer generate em dashes when you ask it not to, for all those people who work with typography and punctuation and have loved their em dashes.
One thing about ChatGPT is that it was putting in a lot of em dashes, the longer dashes, and that had become a signal that something was created by AI, or by ChatGPT. Now you can turn that off, so spotting AI text becomes a little bit harder, and perhaps the people who like using em dashes will be able to start using them again.
We’ll see. That’s all I’ve got for you this month. This is Dan York. You can find more of my audio at danyork.com, and back to you, Shel and Neville. I look forward to listening to this episode. Bye for now.
Neville Hobson: Thanks a lot, Dan. Great report as always.
There was some stuff in there that was really quite interesting. I think the one I would comment on was your last topic — what ChatGPT has done with the em dash.
Shel and I talked about this in a recent episode, and it is quite extraordinary how people get so exercised and excited about how it “indicates without any shadow of doubt” that an AI wrote something, no matter whether you did or not. There are lots of opinions flying about that.
But the thing you mentioned I found quite interesting, that OpenAI has done this so that you can tell the chatbot not to use an em dash and this time it’ll work.
Well, I started doing that about eight months ago in the custom personalization box. One of the things I’ve told it to do is to avoid using em dashes and instead use en dashes, with a space either side. That certainly goes against all the rules of grammar people talk about in terms of how you should use these things, but I like that. I prefer that.
I don’t like em dashes at all — particularly where they touch the preceding and following words in the sentence. It doesn’t look right to me. Yeah, I know, it’s been like that for centuries, I know all that. But they did that.
So I thought, okay, does that mean my personalization command will actually work properly now? Because sometimes it does, sometimes it doesn’t. I have to keep reminding the chatbot to do this.
What struck me as well is that when OpenAI announced this — both OpenAI and Sam Altman himself — there was no statement about what will happen now: if you put an em dash in, will it change it to a normal hyphen, or an en dash, or what? No one said anything, and I can’t find anyone with an answer to that question. So that’s still the question: what’s going to happen?
Shel Holtz: Yeah, I’m of mixed mind on the whole dash controversy. I’ve been using dashes as a writer for more than 50 years. I started using them extensively when I was setting type as a part-time gig in college, and a lot of the technical manuals that I was typesetting had dashes in them.
The reason — and I’ve said this before on the show — the reason AI is using dashes is because in all of the stuff on the web that it hoovered up as part of its training model, there were lots and lots of dashes that humans created.
I have no issue with an em dash. I think this whole attack on the dash is absurd, and it’s from people who don’t know punctuation.
On the other hand, I look at a lot of outputs I see from AI — and I’m not talking about stuff I plan to use in publications, just answers to questions or research that I’m going to factor into, say, a proposal — and I see the dashes misused. They’re put in places where commas belong.
So from that perspective, yeah, I’d rather do the placement of the dashes or en dashes myself. I mean, I don’t remember what the rules were, but there used to be rules around when to use an em dash and when to use an en dash, right? I think those have largely fallen by the wayside.
Neville Hobson: Yeah, no, there still are rules — largely ignored by me; my own use goes completely against those rules.
But I find it’s a very good point you just made, because when I write — and this I find quite interesting — I’ll write a piece of text, say a first draft of an article, and I’ll run it through the chatbot to give me its opinion. And it will often “correct” commas and put en dashes in instead.
And I think: is this an American thing, or is it the kind of bastardization of the English language generally, that things are changing with variants of how people use the language, so it’s hard to know what correct syntax is now?
It doesn’t matter, in my view, as long as people understand what you’re trying to convey. Yet I recognize equally that to many people it is of significant importance. So this is not an argument that’s going to stop anytime soon, I don’t think.
Shel Holtz: No. And the other thing is, along with everything that was contained in the training sets that the models used, so were the rules of grammar and punctuation. So I suspect at some level it’s actually using them correctly, but not the way we use them in current modern English.
And that’s why I will change a lot of them to commas if I’m going to extract something from AI output and use it in a proposal or in a research document.
Neville Hobson: So I suppose for people like authors — and not just authors, but anyone who feels strongly about using dashes and who uses ChatGPT — I would say: put in a custom personalization line that tells the chatbot to use dashes, not take them out.
Shel Holtz: Yes, that’s absolutely right. And especially in technical documents now, because that’s where I saw most of them.
I want to give a shout-out to Robert Rose over at the Content Marketing Institute, among other ventures. This Old Marketing is a great podcast — Robert Rose and Joe Pulizzi. If you’ve never listened to that, I highly recommend it.
Robert has written an article called “Why AI Idea Inflation Is Ruining Thought Leadership and Team Dynamics.” And if you lead a content team, it probably feels less like a think piece and more like a documentary.
His core point is pretty simple: gen AI has made it incredibly easy for senior leaders and subject matter experts to generate ideas for content. Not thoughtful, worked-through concepts — just lots and lots of “We should do something on this”-type ideas. It’s like we turned content strategy into Netflix. There’s always something new in the queue, but more often than not, you don’t feel great about picking any of it.
This isn’t hypothetical. The Content Marketing Institute’s latest B2B content marketing trends report found that 95% of B2B marketers now say their organizations use AI-powered marketing applications. Ninety-five percent — that’s pretty much everyone.
And going back a bit, a previous CMI study found 72% were already using gen AI, but 61% of their organizations had no guidelines for it.
So we have this perfect storm: nearly universal use, very little governance, and leaders with what Robert calls “idea superpowers” that they didn’t earn the hard way.
You’ve probably seen this movie inside your own organization: an executive spends a weekend playing with ChatGPT and asks for “20 provocative points of view we should publish this quarter.” And Monday morning, your content Slack channel lights up with screenshots. None of these ideas are attached to actual budget, resources, or strategy — but because they came from the corner office and because they looked polished, they land on the team like assignments.
Robert’s argument is that this idea inflation doesn’t just create more work; it erodes trust between leaders and content teams. The strategists and writers become order-takers, constantly reacting to an AI-fueled idea fire hose instead of shaping a coherent editorial agenda.
Over time, resentment builds. Leaders feel like, “I keep bringing you ideas and nothing gets done,” while teams feel like, “You keep throwing spaghetti at the wall and calling it thought leadership.”
The data backs up that this isn’t just a workflow annoyance — it’s starting to show up in audience behavior. One study from last year, from Bynder, found that about half of customers say they can spot AI-written content, and more than half say they disengage when they suspect something was generated by AI. We referenced this earlier, Neville.
Another study published this year looked at brands using gen AI for social content and found that overt AI adoption actually led to negative follower reactions unless it was blended carefully with human input.
So the idea treadmill doesn’t just burn out your team; it risks flooding your channels with content that audiences increasingly mistrust.
At the same time, we’re seeing a massive shift on the supply side. Axios, working with Graphite, reported that the share of articles online created by AI jumped from about 5% in 2020 to nearly half — 48% — by mid-2025.
In other words, the content universe is experiencing its own inflation problem: a lot more stuff, not a lot more meaning.
So where does that leave content marketing leaders? Robert’s prescription — and I think this is where communicators really earn their pay — is not “turn the AI off.” It’s to reassert our role as editors of the idea layer, not just the content layer.
That starts with reframing the relationship with your thought leaders. Instead of treating every AI-generated list as a backlog to be cleared, treat it as the raw ore. You sit down and say, “Great, let’s pick one of these and go deep. Which of these ideas would you still fight for if AI hadn’t made it so easy to generate 20 others?”
This is where the leadership part comes in.
The CMI 2026 Trends Report — yes, we’re at the point where we’re looking at 2026 trends — makes the point that the teams who are winning aren’t the ones shouting “AI” the loudest; they’re the ones doubling down on fundamentals like relevance, quality, and team capability, and letting AI breathe more life into those efforts.
In practical terms, what does this mean?
It means putting a simple idea filter in place. If an idea doesn’t align with your documented content mission, target audience, and a defined business goal, it doesn’t make the calendar — no matter how clever the AI prompt was.
It means creating a shared point-of-view backlog where leaders can park AI-assisted concepts, but agreeing that only a small number graduate into actual content every quarter.
And it means being transparent with your team about volume: “We’re going to say no to more ideas faster, so we can say yes to a few that matter.”
There’s also a morale decision here. Other research shows a weird tension: a majority of marketers say AI makes them more productive and even more confident, but a lot of them also fear it could replace parts of their role.
If you’re leading a content team, how you handle idea inflation becomes a signal about your priorities. Are you using AI to respect people’s time and focus on better work — or are you using it to flood them with tasks they can never realistically complete?
And while Robert’s article is squarely aimed at content marketing, I don’t think it stops there. The same dynamics are starting to show up in internal communications, executive comms, even investor relations. Anywhere a leader can spin up “10 talking points for our next town hall” with a prompt, you’re going to see this idea inflation in practice.
If communicators don’t step in to slow that down and curate it, we risk overwhelming every stakeholder group with more, faster, shallower content — and training them to tune us out.
So I read Robert’s piece less as a complaint about AI and more as a leadership challenge. In a world where ideas are cheap and infinite, can you be the person who protects your team, your audience, and your brand from inflation?
Neville Hobson: Yeah, it’s a very good piece. I agree. I’m not sure I like the phrase “idea inflation” — it sounds pretty gimmicky to me, I must admit — but it does capture it quite well.
I found it really interesting reading Robert’s article where he talks about “why AI feeds the engagement crisis.” Now that’s a phrase I can get my head around: engagement crisis.
There are some to-the-point views here which make you think, “Absolutely right.”
For instance, he says when people hear the phrase “employee engagement” they tend to picture enthusiasm — people who are motivated, satisfied, and inspired by their work. But he says employee engagement means more than how people feel about their jobs; it’s also about how much meaning they find in the relationships that shape their work, and AI is causing those relationships to fracture.
“The dynamic between marketing leader and content practitioner, once a creative dialogue, has become transactional. The leader produces ideas, the practitioner packages them.” That’s in line with what you were saying. “Each side feels overextended, underappreciated, and increasingly indifferent.”
And I like how he progresses the thinking here, because you can picture this. “Nobody challenges ideas anymore because nobody loves or hates them enough to care about getting them right.” That’s a hell of an indictment, but I think it’s quite spot-on about some of the behaviors that are happening.
“In that scenario,” he says, “there is no right. When the origin of the idea and the expression of it both come from a machine, neither side can recognize originality or craft when they see it.”
These are alarm bells, to me, that are ringing — that leaders need to really pay attention to.
It’s difficult, though, because it reminds me of two or three cartoons I’ve seen recently on LinkedIn — different, but broadly the same — which show a kind of flowchart of an idea for a press release or an announcement you need to make, and it needs to be this.
So it goes to the next step; then someone says, “Actually, we need to make sure we include that.” Okay, fine, you do that. Then the CEO chimes in with six things he reckons need to be in there as well. And so it goes around all these various steps until you’ve got this bloated thing that gets to the final point in the cycle, with the person at the top of the loop saying, “This is terrible. This is rubbish. This isn’t telling a story. We need to be a lot simpler than this.”
The communicator — by the way, the smart person in this story — had kept the original draft: short, concise, three bullet points. So she submitted that, and everyone said, “Oh, that’s what we need; we’ll approve that.”
To me, that’s a great analogy for this. But someone’s got to recognize the bloat — the inflation of ideas, let’s say — that arises in situations where you’ve got so many people who’ve got their own stakes in the ground and their own agendas they follow.
This isn’t a criticism; it’s a recognition of reality in organizations.
So the communicator in that little story I just told was the smart person here. You’ve got to navigate this sort of thing when it happens. The marketing folks who get dumped on from the corner office — someone at that stage has got to recognize the likely trajectory of all of this and plan accordingly, so that when it gets around to the top of the circuit, they go back to the smarter idea.
It sounds easy saying that, doesn’t it? In reality, it’s not quite like that. Nevertheless, this is a leadership issue. This is not a marketing or content issue — this is a leadership and management issue, it seems to me.
Shel Holtz: It is, and I think it’s an opportunity for communicators to demonstrate some leadership. Because, as Robert says, it’s really easy for an executive over a weekend to get a model to produce 20 ideas for “thought leadership.”
That’s not thought leadership. We’ve talked about thought leadership fairly recently on FIR. This is subject matter expertise that brings new thinking, new angles, sheds new light on the situation. You have a unique perspective to share — that’s thought leadership. It’s generating content that makes people go, “Wow, I hadn’t thought about it that way before; now I’m thinking of it.” You’re leading thinking — that’s what thought leadership is.
I’m not sure that, “Here’s a list of 20 things Gemini came up with for me,” is anywhere near thought leadership unless you see one that you actually have unique perspective and expertise on. And you say, “That’s a great one to talk about.”
If that’s what you’re using AI for, great. But if you’re just copying and pasting that list and sending it to your communications team and saying, “Write these, these are thought leadership pieces,” that’s just going to erode trust in that leader and the organization they represent.
I’ve got no issue with generating lists in AI models — I do it all the time.
On our intranet we have a “Construction Term of the Week,” and I exhausted the list that our engineers sent us, and they’re not inclined to add more. So now I’ll say, “Give me 20 terms related to MEP” — that’s mechanical, electrical, plumbing — and I’ll pick one and that’ll be the term of the week. That saves me a lot of research. It’s a great use of AI, I think.
But if I were to say, “Give me 20 ideas for thought leadership that I can propose to my CEO so we can get a thought leadership article up on LinkedIn,” man, I would never do that. That’s a terrible idea — but evidently, lots of people are.
Neville Hobson: Plenty to think about here. So let’s move on to another topic with plenty to think about.
Question: is it okay to use AI-generated images for LinkedIn profiles?
Over the past few months, we’ve seen AI-generated headshots spreading across LinkedIn. I certainly have. The ultra-polished portraits with perfect lighting, perfect posture, and, in many cases, a slightly uncanny resemblance to the person they represent: that description fits most of what I see.
Your first thought when you see it is, “They’re clearly AI-generated,” and you don’t necessarily have a critical take on it, but you note it.
I have to admit, I tried this myself recently — a few months ago, in fact. For a while, my own LinkedIn profile featured an AI-generated photo. It looked professional enough with uncanny realism, but the longer it sat there, the more uncomfortable I felt about it.
It wasn’t quite me. What if people thought it was me and later realized it wasn’t? I hadn’t said it was an AI-generated image created from a selection of actual photos of me. What would the effects be?
Eventually, I removed it and used a real photo.
That personal hesitation is exactly what Anna Lawler explores in a thoughtful LinkedIn article about the ethics of AI headshots. Lawler is Director and Head of Digital and Social Media at Greentarget, a corporate comms agency based in London.
She describes the pressure to have a sharp, executive-style image ready the moment a new role is announced — something many of us will recognize. AI offered her a quick, clean solution. But then came the real question: should she use it?
Her piece gets to the heart of what communicators are wrestling with right now — well, many communicators, I would say.
What does authenticity look like when technology can generate a version of you that is polished and accurate, but still artificial? Does using that image strengthen your professional brand, or does it introduce a small crack in trust?
What if you don’t disclose how the image was created? And does it matter if no one can tell the difference?
Anna’s LinkedIn post attracted many comments about whether to do this. One comment was blunt: “Just no. Not at all. Never.”
Another explored the idea a bit: “You’ve used an AI image of yourself which looks dead like you — so much that your dad couldn’t tell the difference, other than to say you look well. How different is this to putting a filter on a real photo of you? So no major harm done for a personal LinkedIn photo. But what happens when PRs and marketers start doing this on behalf of others?”
Another analyzed the situation: “Most images — portrait or otherwise — are subject to some form of post-production. It is similar to editing a paragraph of text. You take the original content and adapt it to fit the requirements of the medium, ensuring the tone and voice are appropriate. In the case of a photograph, a human may use Photoshop. In the case of text, they can do it in Word or use Grammarly. If the final decision of whether or not to accept the edits lies with the human, does it matter what method was applied to make them?”
If the purpose of a profile photo is to represent who you are, does an AI-enhanced or AI-created version cross a line? Does “close enough” count?
Anna makes a thoughtful distinction between personal use and corporate use — on websites or official materials, where misrepresentation risks are far greater. She also highlights the reputational and ethical factors that communicators must now weigh, because our profile photos are no longer just photos. They signal identity, credibility, and intent.
It raises a bigger question for all of us: as AI becomes more deeply woven into our professional lives, where do we draw the line between convenience and authenticity? And how do we guide our organizations through those decisions when the norms are still being formed?
Now I know you’ve got some views about this, Shel, so what do you think?
Shel Holtz: Hell yeah, I have some views on this.
I’ve stated before on the show and elsewhere that I think the line is around deceit. Are you trying to deceive somebody? And if the use of AI could lead somebody to be deceived, then I think you need to disclose. If not, I don’t think there is any compulsion to disclose.
What if I have a photo of me — and that’s what I use — but I use a service like Canva or Photoshop to remove the background and put in an AI-generated background? Is that okay?
AI is a tool. It’s just a tool.
We use tools for… I mean, we use photos — there was a time when there were no photos available. You had to hire an artist to paint your portrait if you wanted somebody to know what you looked like.
I think the utter prohibition that some people are suggesting on AI images on LinkedIn is, frankly, stupid. I disagree with it wholeheartedly.
My profile picture on LinkedIn is AI-generated. Now, why did I do that?
When I started at Webcor in 2017, there was a professional photographer who was taking everybody’s photo, so your profile photo on the intranet directory was consistent and professional. I used that everywhere for about six and a half years. Then I lost 70 pounds, and frankly, I didn’t look like that anymore.
I didn’t have access to a professional photographer through work, and I didn’t have the time to go sit for a portrait. So I did that thing where — and I didn’t use one of the paid services; I think it was Gemini — I gave it 20 headshots of me looking the way I do now, post-70-pound loss, and I said, “Aggregate these into a professional headshot.”
I had to do it eight or 10 times before I got one that actually looks like me, where you can’t tell the difference. And that’s the one I’m using.
Is it misrepresenting me? No, it’s not. It looks like me, and I am fine with that. I don’t think I’m deceiving anybody. I don’t think I’m pulling the wool over anybody’s eyes. It’s me.
I don’t have any issue with that at all, and I can’t imagine an argument that would convince me otherwise.
Neville Hobson: No, I get you 100% on that, Shel.
In my case, I mentioned I had an AI-generated image as my LinkedIn profile picture, which I removed. There’s now a normal shot; it’s not as good, in my view, as the one I took down.
But that same picture, large size, I’ve got on my About page on my website. And there’s nothing there saying it’s AI-generated. People I’ve shown it to — only about four or five — couldn’t tell it wasn’t real until I told them it was AI-generated.
So your point about deceit is a very valid one.
If I put a picture of me up there looking slightly thinner maybe, with fewer age-driven gray hairs appearing, and I made myself blond maybe, changed my eye color or something — that’s not me at all. That, to me, would cross the line.
But on the other hand, I also entirely get the illogic — if I can use that word — of people who are critical about this. But that is part of the platform you’re on, and people will judge you.
Now, I’m of strong belief myself that I really don’t care much about what people think about me in the sense of that, but this can have impacts.
I don’t want to do something that stimulates that kind of discussion or opinion-forming or commenting. And people are doing that a lot.
So to me, it’s simple: this is not a huge deal, to have an AI-generated image up there, when I can just have a normal pic that I take with my webcam and touch it up in Photoshop — which I do. I had one previously where I changed the background because I didn’t like the background.
That happens all the time. That’s not deceit. Nevertheless, there are some things you might want to take a stand on. This isn’t one of them for me — “I’m not going to use it” or “I am going to use it.”
So why have I kept it on my blog, you might ask? That’s part of a simple experiment. No one’s noticed or commented, and it actually fits the way I want to portray myself in the context of what I’ve written about myself on that page.
Shel Holtz: Thematic consistency.
Neville Hobson: Yeah, that’s different from using it on LinkedIn, because LinkedIn is a wholly different context. So I’ll keep it up there until someone screams loud enough, saying, “You’re a fake, you’re deceitful,” which I don’t believe is going to happen.
Shel Holtz: The camera is a tool. A photo of you is not you; it is a representation of you that was captured by the camera. What if the white balance was off? What if the depth of field was off? There are so many things that a camera captures that are inaccurate or inconsistent.
AI is a tool. In five years, no one’s going to be having this discussion. It’s going to be so common, and the outputs are going to be so spot on that this isn’t even going to be an issue.
I just think if people are talking about this, they need to find more fruitful things to spend their time talking about.
Neville Hobson: This is always going to be here, and it depends on how you want to judge it.
But to me, there’s another thought to throw into the mix here, which we’ve touched on previously: this is not just about a photo. There’s more to it than that. This is about your identity. This is about your credibility. This is about how others perceive you. That does matter — to varying degrees, depending on the industry you’re in, how you portray yourself, and the people you’re connected with.
So it’s a preview, I suppose you could argue, of wider ethical decisions that we must make as AI is embedded everywhere — until it gets to the point, as you say, where no one’s talking about this anymore. We’re not at that point yet.
Shel Holtz: Maybe I’ll take my LinkedIn portrait and have the AI generate it in the style of a Pixar 3D animated movie and see what people say.
Neville Hobson: Well, you used to have a cartoon up there back in the early days.
Shel Holtz: I did. That was a service that would take your photo and turn it into a cartoon, an illustration. It was a service that used freelance artists. They would parcel it out to one of them. It was pretty cheap; you got it back in multiple file formats. It was great.
Neville Hobson: There you go.
I think I can answer my own question about why I’ve kept it on my blog, because the blog serves multiple purposes. It’s no longer a business site — I’ve changed what I do. It’s much more a personal site that’s intermingled with business. That’s different to LinkedIn, which is a social network with a business focus — that’s different. So that’s why I keep it up there, I guess.
Shel Holtz: All right. So if an executive has their photo taken and they have a makeup artist work with them, is that an accurate representation of them? Do they need to disclose that they were wearing makeup for this photo?
Come on. Let’s talk about more serious things, folks.
Neville Hobson: Like I said, logic is not part of this discussion; it’s emotion-driven. This is again a reflection, I think, of accessibility to ways to voice your opinion if you have one — and everyone has one, and they are voicing it.
Shel Holtz: Clearly. Well, let ’em.
Neville Hobson: I say thank you to Anna Lawler because that prompted this. She wrote the piece at the beginning of the year, but it did prompt all of this in my mind. I think it’s worth reading, so there’ll be a link to it in the show notes.
Shel Holtz: Well, I read an article recently with a pretty brutal headline: “Your Staff Thinks Management Is Inefficient. They May Have a Point.” This was in Inc. magazine.
It’s just the latest in a long string of big changes that employees feel are being done to them rather than with them.
The article by Bruce Crumley leans on new data from Eagle Hill’s 2025 Change Management Survey. In the past year, 63% of U.S. workers say that they’ve been through significant change: tech like AI, new products, return-to-office shifts, headcount changes, cost-cutting, cultural changes, acquisitions. But only a third of them think those changes were worth the effort.
A lot of them say their efficiency actually went down, their workload and stress went up, and the supposed innovation never really materialized.
Now, when Eagle Hill digs into the “why” around this, the picture gets even more familiar. Employees say management is picking the wrong priorities, not managing the rollout well, not supporting people as they adapt, and not monitoring how the change actually lands.
Only about a third feel leaders really listen to their input on what needs to change. Forty percent say they’re basically ignored.
The line that jumps out for communicators is Eagle Hill’s conclusion that the key to successful change is not what you change, but how you change — and that change is experienced at the team level, not somewhere on the org chart.
Now, layer AI on top of that. From the employee perspective, there’s a pretty consistent story emerging: they’re interested in AI, but they don’t feel included or supported.
Eagle Hill’s tech and AI research found that 67% of employees aren’t using AI at work yet, but more than half of those non-users actually want to learn about it. At the same time, 41% say their organization isn’t prepared for the rise of AI.
Workday’s global survey paints a similar picture. Only about half of employees say they welcome AI in the workplace, and nearly a quarter aren’t confident their organization will put employee interests ahead of its own when implementing it.
Leaders are more positive about AI than employees are, but they share that same lack of confidence about whether the rollout will be done in a people-first way.
And there’s a trust gap on top of that. Gallup finds only 31% of Americans say they trust businesses to use AI responsibly. Over two-thirds say “not much” or “not at all.”
Let’s make it even spicier. A recent global study from Dayforce found that 87% of executives are using AI at work compared with just 27% of employees. Execs are out ahead, using AI heavily, while a big chunk of the workforce is still on the sidelines — worried, undertrained, or just not invited in.
So if you’re an employee sitting in the middle of all this, what does it look like?
You see leadership trumpeting AI as the future. You get more tools, more dashboards, more “transformations,” as they call them. Your workload goes up during rollout. Your voice doesn’t seem to shape the priorities. And you’re told it’s all about efficiency and innovation while your own day-to-day experience feels more chaotic.
“Management is inefficient” starts to sound like a very reasonable conclusion.
That’s where communicators can earn their keep, especially around AI.
First, we can make the “why” legible. A lot of AI change stories stop at “This is cutting-edge” or “This will make us more efficient.” The Eagle Hill findings are basically a giant flashing sign that says that’s not enough.
We need to tell a story that starts with the team: What pain point is AI solving for you? What are you going to stop doing because this is now available? What does success look like in your specific function, not just on an earnings slide? Helping leaders anchor AI messaging in outcomes people actually care about is step one.
Second, we can bring employees into the design of the change rather than just leaving them on the receiving end. That means building in genuine listening — pulse surveys that ask, “What’s getting harder as we roll this out?” Small-group sessions where teams can talk about how the AI actually fits into their workflow.
Storytelling that highlights not just the shiny pilot, but the tweak that came from frontline feedback. And then — and this is the part we skip so often — closing the loop and saying, “Here’s what you told us, and here’s what changed.”
Same as surveys, right? We issue surveys, we get the feedback, and maybe changes are made — but we don’t tell anyone. If 40% of people feel unheard during change, that loop is our job.
Third, we can equip managers to be translators instead of amplifiers of confusion. Most people don’t experience “the organization”; they experience their manager. So when Eagle Hill says the team should be the core unit of change, that’s a giant invitation to communicators to build manager toolkits around AI.
Simple talk tracks: “Here’s how to explain this change in two minutes.” “Here’s what to say if people are worried about their jobs.” “Here’s how to be honest about the short-term workload bump.”
FAQs, slides, even suggested phrases that sound human instead of legalistic — that’s all in the comms wheelhouse.
Fourth, we can push for pacing that matches reality and help leaders talk about trade-offs. A lot of the resentment in these surveys comes from people feeling like change is something piled on in addition to their regular day jobs.
Eagle Hill’s advice to slow down, phase changes, and temporarily ease workloads isn’t just an HR tactic; it’s a narrative opportunity.
Imagine the difference between: “Here’s another AI tool, please adopt it,” and: “For the next eight weeks, we’re pausing X reports and Y meetings so you have time to learn this new workflow. Here’s the schedule. Here’s where to get help.”
We communicators can frame that pacing as a deliberate, respectful choice.
And finally, we can insist that AI change stories include trust as a first-class citizen, not a footnote. That means naming the concerns, not dancing around them.
Employees are reading headlines about bias, surveillance, job loss. They’re seeing that most people don’t fully trust businesses on AI. We can help leaders say out loud, “Here are the guardrails; here’s what we will use AI for, and here’s what we will not. Here’s how we’ll measure the impact on workload. Here’s how you can challenge a decision if you think AI got it wrong.”
That transparency is the only way to close the trust gap.
If we don’t do any of this, AI just becomes the latest exhibit in the “management is inefficient” file — another transformation employees experience as stress without payoff.
If we do our jobs well, AI can actually become a proof point that this time, the organization learned from the last wave of change — that it listened, it paced itself, it treated teams as the unit of change, and it used communication as a way to share power, not just spin the story.
Neville Hobson: I have to admit, I’m quite shocked to hear the picture you’ve painted there — that it’s so bad. Is it truly that bad?
Because this is actually, to me, like what you just said, particularly your concluding part — this is Leadership 101, for Christ’s sake, and yet so many people aren’t doing this.
Shel Holtz: Well, if the research is accurate, then it really is that bad.
Neville Hobson: What the hell is going on?
This actually touches on everything we’ve said so far in this episode — what leaders need to do in certain situations. Don’t allow it to be like this.
The whole idea of “management” being all up to speed with AI while employees are completely in the dark and don’t have a clue how to use the tools — I find that truly hard to believe as a significant factor across the board.
That doesn’t gel with some other research I’ve seen here in the UK — and mostly in the US — where the issue is getting leaders to embrace it while employees are out there experimenting, which is why proper guardrails and guidance are lacking.
So this is a pretty shocking state of affairs, it seems to me, Shel.
Some of the things here are so obvious that I just wonder why people enable this situation to be the norm, if it is as portrayed in this article.
There are a lot of tips though — I have to say everything you need to know about what to do is here. So pick this up and read it, for God’s sake, please.
Shel Holtz: I remember early in my career, I was at a Ragan Communications conference and a CEO was speaking. He said he believes that every CEO, as soon as they sit in the CEO chair, gets hit by a “stupid ray” aimed right at their head — because they stop listening.
They think, “I’m the CEO. I’m here because I know everything, and I can make these decisions in a vacuum. I am at the top of the food chain.”
I think that’s happening right now. If you look at the number of layoffs that are happening, and AI is a factor in these — they’re coming right out and saying it. They’re not hiding it; they’re saying, “AI can make us more efficient.”
They’re not talking to the teams that do the work, to find out, “If we end up with three people instead of 10 because you think AI can do the work, we happen to know that’s not the case, and this is going to make us less efficient.”
There’s not a lot of listening going on in these decisions being made. There’s not a lot of querying of the teams to find out exactly how they can use AI to be more efficient and what that means for the staffing of the team.
I think there are executives who say, “I have this tool, I’m in charge, I’m slashing the workforce.” I think that’s what’s happening. And I think that’s why so many employees think that the leaders are now inefficient.
Neville Hobson: Well, it’s missing completely the voices that — as we’ve discussed in previous episodes, and indeed thinking back to our interview with Paul Tighe from the Vatican — it’s missing the humanity.
It’s missing, “How does AI augment and improve how people do their jobs, not replace them? Not ‘become more efficient, therefore we don’t need so many people here.’” That voice is missing.
To me, that's the essential part. And you could extend that thought: the missing voice isn't only about AI, although AI is a huge element because it's permeating organizations, in many cases not in a good way, since the conversations are all about becoming more efficient and not needing people. It's part of that bigger picture.
This article does talk about, in its concluding parts, “Change is experienced collectively, not individually.” That means a team, not the org chart, must be the core unit of change.
They talk prior to that paragraph about how the majority of modern workplaces are shaped by the teams that drive most activity and success. Initiatives come from the top, but success relies on the base embracing them. These are the fundamentals of leadership, surely.
I’ve noticed here — as an aside — that in some of these things you hear about that are going wrong in organizations, the people leading are just damn incompetent. Some of the speeches and things they say in public exhibit nothing but utter incompetence, and they should be fired.
That’s a bigger story, frankly, but it’s part of it. The most useless people leading these organizations are dragging them down, and the employees are the ones who are suffering. I’m straying into big-picture politics and opinions, but nevertheless, that’s what you see.
Shel Holtz: I'll join you in that straying.
It seems clear to me that a lot of leaders are abdicating the principles of leadership to the exuberance they feel about the potential for AI, and they’re just running with it. I don’t think that’s going to bode well for the performance of their organizations, especially when they’ve lost the trust and confidence of the people who are expected to execute on all of this.
Neville Hobson: So on that note, we would hope that the next episode will have a lot of good news.
Shel Holtz: I sure hope so.
But that'll be a 30 for this episode of For Immediate Release. We'll return to our long-form monthly format toward the end of the month: we're planning to record the next episode on Saturday, December 27th, and release it on Monday, December 29th.
Until then, go back to the beginning of the episode and learn about all the ways that you can comment. And we will have our midweek short episodes beginning in a week or so.
And until then, that will, in fact, be a 30 for this episode of For Immediate Release.
The post FIR #489: An Explosion of Thought Leadership Slop appeared first on FIR Podcast Network.