The 2026 Edelman Trust Barometer focuses squarely on “a crisis of insularity.” The world’s largest independent PR agency suggests only business is in a position to be a trust broker in this environment. While the Trust Barometer’s data offers valuable insights, Neville and Shel suggest it be viewed through the lens of critical thinking. After all, who is better positioned to counsel businesses on how to be a trust broker than a PR agency? Also in this episode:
Research shows employee adoption of AI is low, especially in non-tech organizations like retail and manufacturing, and among lower-level employees.
CEOs insist that AI is making work more efficient. Do employees agree?
Organizations believe deeply in the importance of alignment. So why aren't employees aligned any more today than they were eight years ago?
Mark Zuckerberg changed the name of his company to reflect its commitment to the metaverse. These days, the metaverse doesn't figure much in Zuckerberg's thinking.
In his Tech Report, Dan York reflects on Wikipedia's 25th anniversary.

2026 Edelman Trust Barometer
Society Is Becoming More Insular
Exclusive: Global trust data finds our shared reality is collapsing
Insularity is next trust crisis, according to the 2026 Edelman Trust Barometer
Employers are the most trusted institution. That should worry you – Strategic
The 2025 Edelman Trust Barometer has landed, and everyone in comms is about to spend the next six months quoting the same statistic
HIS THEORY IS LITERALLY: The human beings of the earth don't like each other, don't trust each other, won't talk to each other, won't listen to each other.
Richard Edelman Has No Clothes. (Nobody Does.)
Trust amid insularity: the leadership challenge hiding in plain sight
Employees say they're fuzzy on their employers' AI strategy
JP Morgan's AI adoption hit 50% of employees. The secret? A connectivity-first architecture
How Americans View AI and Its Impact on People and Society
Only 14% of workers use GenAI daily despite rising AI optimism: Survey
Offering more AI tools can't guarantee better adoption — so what can?
Only 10 Percent of Workers Use AI Daily. Getting Higher Adoption Depends on Leaders
Leaders Assume Employees Are Excited About AI. They're Wrong.
Meta is about to start grading workers on their AI skills
CEOs are delusional about AI adoption
CEOs Say AI Is Making Work More Efficient. Employees Tell a Different Story.
The Productivity Gap Nobody Measured.
FIR #497: CEOs Wrest Control of AI
The Alignment Paradox
What Mark Zuckerberg's metaverse U-turn means for the future of virtual reality
Meta Lays Off Thousands of VR Workers as Zuckerberg's Vision Fails
Meta Lays Off 1,500 People in Metaverse Division
FIR episodes that featured metaverse discussions

Links from Dan York's Tech Report
Celebrating Wikipedia's 25th Birthday and Reflecting on Being a Wikipedian for 21 Years
At 25, Wikipedia faces its biggest threat yet: AI
Wikipedia at 25: A Wake-Up Call

The next monthly, long-form episode of FIR will drop on Monday, February 23.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Shel Holtz: Hi everybody and welcome to episode number 498 of For Immediate Release. This is our long-form episode for January 2026. I’m Shel Holtz in Concord, California.
Neville Hobson: And I’m Neville Hobson, Somerset in the UK.
Shel Holtz: And we have a great episode for you today, lots to talk about. I’m sure you’ll be shocked, completely shocked that much of it has a focus on artificial intelligence and its place in communication, but some other juicy topics as well. We’re going to start with the Edelman Trust Barometer, but we do have some housekeeping to take care of first and we will start with a rundown of the short midweek episodes that we have shared with you since our December 2025 long form monthly episode. Neville?
Neville Hobson: Indeed. And starting with that episode that we published on the 29th of December, we led with exploring the future of news, including the Washington Post's ill-advised launch of a personalized AI-generated podcast that failed to meet the newsroom standard for accuracy and the shift from journalists to information stewards as news sources. Other stories included Martin Sorrell's belief that PR is dead and Sarah Waddington's rebuttal in the BBC radio debate; whether communicators should do anything about AI slop; and no, you can't tell when something was written by AI. Reddit, AI, and the new rules of communication was our topic in FIR 495 on the 5th of January, where we discussed Reddit's growing influence. Big topic, and I'm sure we'll be talking about that again in the near future. On that day, we also published an extra unnumbered short episode to acknowledge FIR's 21st birthday. Yes, we started out on the 3rd of January 2005, and that's a lot of water under the bridge in that time, Shel. And I think we had quite a few bits of feedback on that episode.
Shel Holtz: People dropped in and shared their congratulations. There were way too many of them to read and many of them were very, very similar. Just to share one, this is from Greg Breedenbach who said, “Congratulations, what a feat. I’ve been listening since 2008 and never got bored because you managed to keep it engaging and relevant. Thanks for all the hard work.”
Neville Hobson: Great comment, Greg, thank you. So for FIR 496 on the 13th of January, we reported on the call by the PRCA, the Public Relations and Communications Association for a new definition of public relations. We explored the proposal’s emphasis on organizational legitimacy, its explicit inclusion of AI’s role in the information ecosystem, and the ongoing challenge of establishing a unified professional standard that resonates across the global communications industry. That had a few comments.
Shel Holtz: That got a few comments. Gloria Walker said, “Attempts have been made from time to time over the decades to define and redefine PR. Until there is a short one that pros and clients and employers can understand, these exercises will continue. Good luck.” And Neville, you replied, you said, “You’re right, Gloria. This debate comes around regularly. One interesting precedent was the Public Relations Society of America led effort in 2011 in a public consultation to redefine PR. That process was deliberately open and received broad support from professional bodies and their members around the world.” And Philippe Borremans out of Portugal had a comment. He said, “Thanks for the mention of my comments. Hope it helps in the definition exercise.” Philippe, of course, wrote a LinkedIn article in response to the definition. There were some other comments in this episode, including one from Marybeth West. You can go find that on LinkedIn. This was a rather lengthy exchange between Marybeth and you that is just too long to include here.
Neville Hobson: Great. And then in FIR 497 on the 19th of January, that’s just a week ago before we record this current episode, we unpacked the latest AI radar report from BCG, used to be known as Boston Consulting Group, that says AI has graduated from a tech-driven experiment to a CEO-owned strategic mandate. We examined this evolution that places communicators at the center of a high-stakes transition as AI moves from pilot phase into end-to-end organizational transformation. One comment we had to that:
Shel Holtz: From our friend Brian Kilgore, who said, “Haven’t read the report yet, but will soon. Sometimes when I read a link first, I can’t get back to the comments.” But he continues to say, “I once took a job that was structured by Boston Consulting Group. My employer used the BCG report as the basis for the job description. It worked out well.”
Neville Hobson: Excellent. So that’s where we’re at. Some good stuff since the last episode. And of course, now we’re about to get into the current.
Shel Holtz: And yesterday I published the most recent Circle of Fellows, the monthly panel discussion with members of the class of IABC Fellows. This one was on mentoring. It was a fascinating conversation featuring Amanda Hamilton-Atwell, Brent Carey, Andrea Greenhouse, and Russell Grossman. The next Circle of Fellows—mark it in your calendar because this one’s going to be very interesting and maybe even controversial—this is going to be at noon Eastern time on Thursday, February 26th and it’s all about communicating in the age of grievance. This will feature Priya Bates, Alice Brink, Jane Mitchell, and Jennifer Waugh.
Neville Hobson: You’re such a tease, Shel, with that intro, I have to say. So yeah, go sign up for it, folks. I’d also like to mention that in December, IABC announced the formation of a new shared interest group, or SIG, that Sylvia Cambier and I are leading. It’s called the AI Leadership and Communication SIG. And I’m delighted that we have attracted 70 members so far. I’m also delighted to share that our first two live events are scheduled for February. On the 11th of February, we’re hosting a webinar for IABC members to introduce the SIG, explain why we formed it, what it stands for, and how it approaches AI through a leadership and communication lens. Then on the 25th of February, as part of IABC Ethics Month, we’re hosting a webinar on AI ethics and the responsibility of communicators. This is a public event open to members and non-members that explores the challenges and responsibilities communicators face when introducing AI, including transparency and trust, stakeholder accountability, and human oversight. We’ve included links in the show notes so you can learn more about these events and sign up as well if you’d like to.
Shel Holtz: Sounds great, I’m planning to attend those, schedule permitting. And that wraps up our housekeeping. Hooray! It’s time to get into our content, but first you have to listen to this.
Neville Hobson: Our lead discussion this month is the 2026 Edelman Trust Barometer, which landed last week at the World Economic Forum in Davos, Switzerland, with a stark framing: trust amid insularity. But before we get into the findings, a quick word on what the Edelman Trust Barometer actually is. Many of you literally may not know why this is significant. The Edelman PR firm has published the Trust Barometer every year since 2000, making this its 26th edition. It’s based on a large-scale annual survey across 28 countries, tracking levels of trust in four core institutions: business, government, media, and NGOs, alongside attitudes to leadership, societal change, and emerging issues. Over time, it has become one of the most widely cited longitudinal studies of trust globally, not because it predicts events, but because it captures how public sentiment shifts year by year.
After more than two decades of tracking trust globally, Edelman’s core finding this year is that we are no longer just living in a polarized world, but one where people are increasingly turning inward. That’s that word “insularity” I mentioned earlier. The report suggests that sustained pressure from economic anxiety, geopolitical tension, misinformation, and rapid technological change is reshaping how trust works. Rather than engaging with difference, many people are narrowing their circles of trust, placing greater confidence in those who feel familiar, local, and aligned with their own values, and withdrawing trust from institutions or people perceived as “other.” At a headline level, overall trust is broadly stable year on year. The global trust index edges up slightly, but that masks important differences. Trust continues to be significantly higher in developing markets than in developed ones, where trust levels remain flat or fragile. As in recent years, employers and business are the most trusted institutions globally, while government and media continue to struggle for confidence in many countries.
What is notably sharper this year is the distribution of trust. The income-based trust gap has widened further, with high-income groups significantly more trusting than low-income groups. Edelman also finds growing anxiety about the future. Fewer people believe the next generation will be better off, and worries about job security, recession, trade conflicts, and disinformation are at or near record highs. A defining theme running through the report is what Edelman calls insularity. Seven in 10 respondents globally say they’re hesitant or unwilling to trust someone who differs from them, whether in values, beliefs, sources of information, or cultural background. Exposure to opposing viewpoints is declining in many countries, and trust is increasingly shifting away from national or global institutions towards local personal networks: family, friends, colleagues, and employers. Compared with last year’s focus on grievance and polarization, the 2026 report suggests a further step from division into retreat. The concern is not just disagreement, but disengagement—a world where people are less willing to cross lines of difference at all.
In response, Edelman positions trust brokering as a necessary answer to this environment—the idea that organizations and leaders should actively bridge divides by facilitating understanding across difference rather than trying to persuade or convert. This concept sits at the center of the second half of the report. It’s also worth noting that Edelman’s framing, particularly around trust brokering and the role of institutions, has attracted a number of critical responses. We’ll highlight some of those critiques in our discussion alongside our own perspectives and what this year’s findings mean in practice. Taken together, the 2026 Trust Barometer paints a picture of a world where trust hasn’t collapsed, but it has narrowed, becoming more conditional, more local, and more shaped by fear and familiarity than by shared institutions or common ground. That raises important questions about leadership, communication, and the role organizations are being asked to play in society. So let’s unpack what Edelman is telling us this year. What stands out in the data where it feels like a continuation of recent trends and where this idea of insularity marks something more fundamental in how trust is changing? Shel?
Shel Holtz: Well, we would be remiss if we didn’t acknowledge that this annual ritual has attracted a torrent of criticism over the years. Criticism raises some uncomfortable questions about what we’re actually measuring and, more importantly, whose interests the barometer serves. Now, none of this minimizes the value of the data that has been collected. For the eight years that I have been working for my employer, I have extracted points that I think are relevant and share these with our leadership. I’ve already undertaken that exercise this year. So what I’m about to share with you is a critique, but I don’t want anyone thinking this means you should ignore the report. It just means you should apply some critical thinking as you go over this information.
And let’s start with the most fundamental critique: the methodology and sample selection. Clean Creatives, which is a climate advocacy organization, has documented how Edelman’s country selection appears strategically aligned with the firm’s client base. The United Arab Emirates, for instance, was only added to the trust barometer in 2011, conveniently right after they became an Edelman client in 2010. And wouldn’t you know it, the barometer regularly finds that trust in the UAE government remains among the highest in the world. And by the way, that’s a quote, “remains among the highest in the world,” findings that are then dutifully promoted by state media.
Consider the question: is the trust barometer measuring trust or is it manufacturing it for the C-suite? The issue gets even more problematic when you look at the top of the leaderboard. Six of the highest-ranked governments in recent editions—China, the United Arab Emirates, Saudi Arabia, Indonesia, India, and Singapore—are rated by Freedom House as either not free or partly free. Researchers studying authoritarian regimes have identified what they call autocratic trust bias. It’s a phenomenon economist Timur Kuran calls “preference falsification.” In other words, people don’t exactly feel free to reveal their true opinions when they might face some sort of prosecution for indicating that they don’t trust their government.
And here’s where David Murray’s recent critique hits the nail on the head. David is a friend of mine. He’s a friend of the show and he has been an FIR interview guest. And he published a takedown of what he calls this wearying annual ritual. David points out the sheer absurdity of Edelman’s latest focus: insularity. The 2026 report claims that seven in 10 people are insular, as you mentioned Neville, retreating into familiar circles. Edelman’s solution, as you mentioned again, is the trust brokers. And of course, the report finds that employers are the ones best positioned to scale this trust brokering skill set. But as Murray observes, there’s something deeply hollow about a global PR machine using AI and always-on monitoring to lecture us on the human skill of listening without judgment. It’s a case of “human hires machine to reassure self he is human.”
Now consider Edelman is a $986 million global PR firm whose stated purpose is to evolve, promote, and protect their clients’ reputations. So when the research concludes year after year that business must lead and that my employer should be the primary trust broker, you have to ask: is this research or is this a pitch deck? Is Edelman documenting a phenomenon or are they selling a solution that just happens to require companies to hire more communications consultants to teach conflict resolution training? There’s also the question of academic rigor. Despite its massive influence, Edelman hasn’t made the full data set available to independent researchers. When their 2023 findings about polarization were criticized for lumping democratic and authoritarian countries together, they produced a reanalysis, but only after removing data from China, Saudi Arabia, and the UAE. And, surprise, the core finding—that business must lead—remained intact.
The conflict of interest concerns extend even further. Edelman has been documented working with fossil fuel giants like Shell, Chevron, and Exxon Mobil. They were one of the largest vendors to the Charles Koch Foundation, yet the barometer presents findings about climate change and business ethics without disclosing these relationships. Peer-reviewed research found Edelman was engaged by coal and gas clients more than any other PR firm between 1989 and 2020. When a firm with that client roster tells us that business is the only institution that is both ethical and competent, we should probably raise an eyebrow. Look, I’m not saying the underlying trends—polarization, information chaos, erosion of truth—aren’t real. These are very serious shifts in our reality, but we need to be critical observers of the research. We need to ask who benefits from the conclusion that employers should step into the void left by failing democratic institutions and who profits from the narrative that CEOs, not citizens, should lead societal change? The Edelman Trust Barometer has become the ultimate gathering of elites at Davos telling each other what they want to hear. It provides a veneer of data-driven legitimacy to corporate overconfidence. But if we’re serious about rebuilding trust, we might want to start by questioning the research that so conveniently serves the interests of those who are producing it.
Neville Hobson: Yeah, that's quite a scathing analysis. I read David Murray's blog post—really, really good, an entertaining read in his inimitable style. Another piece that makes points right up there with some of the critiques you raised in your narrative is a post by Sharon O'Day. Sharon's a digital communication consultant based in Amsterdam. I think she's on the button with most of what she writes; I read her content on LinkedIn frequently. She's got about 82,000 followers on LinkedIn, so she's got some credentials and credibility. She talked about this in an article for Strategic Global, and her headline sets the scene for what she writes: "Employers are the most trusted institution—that should worry you," says Sharon. She goes into a description of what the report is and what the big finding is, that "my employer" is now the most trusted institution.
She warns that before internal communicators rush to embrace "trust brokering"—Edelman's proposed solution to all this—we should ask what kind of trust we are actually talking about. She goes on to summarize what, in her view, Edelman gets right about this. The trust barometer lands strongly, she says, because it tells people what they already suspect, but with graphs. I did like that little bit there. So she talked a bit about the seductive appeal of trust brokering. And I thought this was a sharp analysis. Edelman's solution is trust brokering: help people work across difference, acknowledge disagreement, translate perspectives, surface shared interests. Employers, as the most trusted institution, should facilitate this. You can see why this resonates, she says; it offers organizations a constructive role without being overtly political. For internal communicators, it suggests an evolution from message delivery to dialogue facilitation. It fits our existing narratives nicely, she says.
But the problem isn’t that this is wrong. It’s that it treats trust as primarily a relational challenge, when in most organizations it’s fundamentally structural. The core weakness, says Sharon, is assuming trust is an emotional state that can be rehydrated with better listening. She says trust is a systems problem, in fact. Workplace mistrust is often entirely rational, she says. People distrust organizations because they’ve watched restructures framed as “growth,” AI introduced without safeguards, workloads expand as headcount contracts, risk pushed downwards while control stays at the top. That’s a pretty keen assessment, I think, of reality in most organizations. And she notes being asked to engage openly feels less like inclusion and more like exposure. Frame trust as sentiment and the solution defaults to messaging. Understand trust as system behavior and the role shifts towards making systems legible: how decisions are made, where constraints sit, what won’t change.
And she then talks about when insularity becomes moral judgment, reminding us this now applies to 70% of people globally, according to the trust barometer. The danger: this subtly relocates responsibility. If trust is low because people are insular, help them become more open. But what if mistrust is entirely rational? And she warns again that trust isn’t a moral virtue; it’s a calculation people update based on what organizations do, not what they say. Trust in an employer is not the same as trust in a democratic institution. It’s shaped by dependency as much as belief. Your employer controls your income, your professional identity, and often your healthcare and visa status. That changes the dynamic.
So she winds up talking about the hard truth. The most worrying thing about people trusting their employer more than anything else is that they may not have anywhere else left to put it. That's not a mandate to become society's repair shop, says Sharon. It's a warning about what happens when you're the last institution standing and you cock it up. For communicators, the task isn't to become trust brokers. It's to tell the truth about the system people are inside: how it works, where constraint sits, what won't change and why. Trust collapses when people stop expecting honesty about how decisions get made and who benefits.
I think, though, that last bit in particular is a hard dose of reality. I suspect that, in a sense, she's saying—I'm interpreting her words here—that communicators are part of the game, let's say; that they are not telling the truth about the system people are inside. And that's quite an indictment to slam down on the table in the midst of this. Yet I think it's a valid point to raise for discussion, whether you agree or disagree. It's worth considering what she says. Are those of us who work in large organizations in particular, communicating what the organization is doing, what the leaders are saying, what's happening, simply regurgitating a top-down perspective of an untruth? Maybe that's one way of putting it. So it adds to the questioning of Edelman's motives and their responsibilities. And I think the people you noted, like David Murray, have done a pretty good job of that. I'm not questioning that aspect of it all.
I have found, largely, what Edelman talks about to be valid, notwithstanding those questions about their motives and often undisclosed relationships. Because after all, they interview each year 20,000-plus people in God knows how many countries. And these aren’t folks who have axes to grind themselves in the same way, let’s say, if it’s alleged that Edelman does. So I think it has credibility in that regard. I’m equally aware of a lot of the criticism about this that questions the credibility. I don’t do that the same way others do. I have found, and indeed the same with this current report, value in the information that Edelman have put together that they are sharing. So it’s useful to get a sense of this, particularly the annual changes in sentiment that we’ve reported on this in For Immediate Release throughout the years. I can remember actually being at the very first Edelman Trust Barometer when Richard Edelman was in London—that was in 2000, I think, or 2001. Beginning of the century, 26 years ago anyway. So it is interesting, Shel. And I think the criticisms are worthy of debate, not dismissing them unless you are quite clear you’ve got something else to say. The report is a dense document. It’s quite detailed. I found a good place to start to get a sense of what it’s all about is the top 10 findings, the snapshot views of each of the top points that is under the heading “Trust Amid Insularity.” So it’s definitely worth paying attention to, putting it in the context of what the critics say.
Shel Holtz: And frankly, the longitudinal nature of this research—what David Murray called “wearying”—is actually where much of the value comes from: the ability to track change over time in any research. I mean, you look at engagement studies that companies do among their employees. If you couldn’t see how any element of that survey has improved or declined over time, it’s of far less value than getting this one snapshot in time for a single survey. So there’s great value there. And like I say, I think there’s great value in a lot of the data in this survey. I mean, the fact that the focus is on insularity should not be any surprise. We’re seeing this every day. It’s interesting that… I think it had to be 35, 40 years ago, IABC’s Research Foundation, the lamented long-gone IABC Research Foundation, did a study on trust. And I remember the definition that they gave trust. We were talking earlier about the definition of PR; the definition of trust is pretty fixed. It’s the belief that the party in question is going to do the right thing. And so it’s that simple. And the question becomes: what is the right thing? Among people who are inside their bubbles, that insularity, what do they believe the right thing is? And that is probably very different from people who are in a different bubble.
And this, I think, is where that "trust brokering" idea has some legitimacy, even if it may not be presented in the best way. Telling the truth alone isn't enough to address this. If we're not telling the truth, then we simply have no stake in this game. You can't go anywhere from there. But if you're telling the truth, how do you get that into the heads of the people who are not paying attention to you? They're listening to people who say you can't trust them. And I think that comes through engagement, not through publication, not through telling. To some extent through listening—you must do that to find out what their issues are, what they do believe. But at some point you have to start engaging with people. I mean, the profession is called Public Relations, not Public Content Distribution. And those relations have to have some give and take, some two-way exchange. So if you have people who don't trust you and are misinterpreting or listening to false information delivered by people who have an interest in taking your organization or institution down, you need to reach out to those people and start to engage them. And I absolutely agree with whoever it was who said that this is the direction we need to be heading in. I think they were talking about internal communications being more dialogue, but I think that's true of the external side too.
Reaching people who are in bubbles is extremely difficult. I'll tell you, I was having a conversation with a friend of mine who I have learned is on the opposite end of the political spectrum from me. And I told him, "You know, I watch Fox News on a fairly regular basis. I find it important to know what the people on the other side of the political spectrum are hearing, what they believe, what they think, so it can inform my view of things." It doesn't change it, but it certainly informs it when I'm having conversations or I'm considering how to reach somebody. I said to him, "You ought to be doing the same. You ought to be watching some of the media that presents the views that are contrary to your own and understand them." And his answer to me was, "Stop watching Fox News." He felt that I should stay in my bubble. So this is a pretty entrenched perception that people have. And it's become very ingrained in the cultures of these insular regions, if you want to call them that. How do you reach people? I think that's the challenge for people in communications right now: how do you reach the people who just are not interested in hearing what you have to say? They want to hear what your critics have to say, and that's all they're listening to.
Neville Hobson: Yeah. That makes sense. And indeed, I think that supports one of the key elements of this latest report, which is that traditionally in organizational communication, part of your goal is to get everyone lined up with the same message. We're all singing off the same sheet and it's all unified and we go forward. This is a change. This is not about that. It's not about aligning people who are different; it is about understanding the differences and still being able to engage with them, recognizing their differences. And that makes complete sense to me in the current geopolitical environment, because of what we've seen over the past few years—and the driver for this unquestionably is what's happening in the States since Donald Trump became president for the second term. As Mark Carney said in his speech at Davos at the World Economic Forum, this isn't a transition; we're going through a "rupture." It was a very good speech. I'm not so sure it is that—maybe it is a transition, it doesn't matter what you call it—but the reality is that people are afraid in many countries. Just watch the TV news and you'll be scared most days, particularly when you see things you couldn't imagine happening in some of the countries where they are happening, notably in the US, with crackdowns in various parts of society. It's truly extraordinary.
I think that is a big influence on this insularity, people withdrawing. Yet where the report talks about people wanting to engage with people who hold similar views, similar beliefs and so forth, rather than different beliefs, I seem to remember a few years back—I've forgotten which year it was—when the Edelman Trust Barometer of that particular year published something that was quite radical: the most trusted person in the world, if you will, is "someone like me." I remember that. This is that, is it not? It's "someone like me," except the dynamics are very, very different to what they were back then. And I think one of the things to really pay close attention to, aligned with what you said about engaging with people outside their individual bubbles, is that recognition of difference. It is the fact that people need pushing in the right way. And it is the fact—and again, this comes back perhaps to Sharon O'Day's critique—that we're not telling the truth. That we need to tell a different version of the truth, if that doesn't sound kind of weird. There is always more than one version of the truth, and the question is: which one do you trust? That's, I think, a big challenge for communicators, because it surely would be easy for a senior-level communicator, particularly if they're an advisor to the C-suite, to see when the messaging coming out of the C-suite is simply not the right messaging. I'm not saying they're not telling the truth, far from it. What they believe to be the truth may not actually reflect what is happening. And that's where listening really becomes key.
So it means, I suppose, that communicators can rethink this whole structure in light of what Edelman's saying, though not exclusively because of this. But take a look at one of the key findings, the first one that Edelman mentions: "insularity undermines trust." That's something I grabbed from this when I wrote a kind of reflective blog post about it a few days ago, on what insularity does: when people withdraw into themselves and stop engaging with others who hold different views, they can undermine the authority of what leaders in an organization are trying to do, by not cooperating, by simply not doing it, or even by actively disparaging it. Is that a new thing? Maybe it's not, but it certainly has a mass impact if you see that sort of thing going on. "Mass-class divide deepens" is another one they talk about: the gap between high and low-income groups. So these are the bigger-picture issues in our society. And we're seeing these things going on because of changes in geopolitics, I suppose, and this is not a good thing. The institutions are falling short. The four big institutions I mentioned at the start are falling well short on addressing this.
The phrase "trust brokering"—I really don't like that, to be honest, Shel. It sounds gimmicky. It sounds like a catchphrase that someone's come up with, which I suspect is what's prompted a lot of the criticisms of it. I've even seen some people say, "Wait a minute, trust broker… isn't that what communicators have been doing for years?" Now we're calling it trust brokering. So we need to get past this kind of labeling confusion, I think, and look at what we must do to help leaders in particular do the right thing in their organizations and in how they're communicating things, and enable—empower, if you like—communicators to take all this forward. But there's lots to pick from this report, I think, Shel.
Shel Holtz: Yeah, I wonder how many PR agencies are going to announce soon that they are launching their "trust brokering units," now available to engage in your organization. I'm going to invoke the IABC Research Foundation one more time. Their seminal work was the Excellence Study—Excellence in Public Relations and Communications Management—an outstanding effort. And the primary work that came out of that was a review of the literature on all of this. So a lot of academic stuff. It's a rather lengthy book. I've read it; I still have it; I still refer to it. But one of the things I learned when I was reading it, way back when it came out, is this notion of "boundary spanning." It's a term from the academic world of PR. And it suggests that public relations people really need to understand the perceptions and perspectives of the opposition so well that when they talk about it in the organization, people are going to be suspicious that the PR people have switched sides. You understand it so well that you can basically talk like the opposition does and convey their concerns and their critiques as if you were one of them. I don't know how many public relations people are doing that these days. Given the results of this research, it seems to me that boundary spanning is becoming a necessary tactic for public relations practitioners. If that's not something you have looked into and this is the work you do as a communicator, it's something to pay attention to.
Neville Hobson: Yeah, I would agree with that. So there's lots to absorb in this. We've touched on the prominent points, but there's one that struck me as interesting on Edelman's list of the top 10 issues. It's the tenth one, the last on their list: "Trusted voices on social media open closed doors." And I thought that's an interesting take. They say people who trust influencers say they would trust, or consider trusting, a company they currently distrust if it were vouched for by someone they already trust. Think about that. That's interesting, because separately to the Trust Barometer, we're seeing influencers as a group, broadly speaking, under threat for lack of credibility in many cases. Some of the face-palming things I've read about influencers doing or saying in recent months have been whack-your-hand-on-your-head stuff. But this one rings true; it makes sense to me. And maybe that is an easy way for communicators to engage with people, maybe in slightly more open ways than they have in the past, to enable that kind of thing. So again, it's a thought point, if you like, that's worth considering, even though it's not high up on Edelman's top 10 list—it's the 10th. Worth paying attention to, though, I think.
Shel Holtz: Absolutely. And you see the opinion polls showing a shift in support or lack of support for one thing or another based on what some of the prominent influencers are saying when they change their view. Looking at the “bro-verse” in the podcast world—people like Joe Rogan, for example—who were very supportive of Donald Trump when he was running for president, and you look at the independent vote and it was very supportive of Donald Trump. And the bro-verse has shifted with what’s going on in Minneapolis and some other cities. You’re hearing Joe Rogan say, “What is this? The Gestapo in the streets now?” And now you’re starting to see that shift in opinion among independent voters away from Trump. Now this is a correlation, not a causation. But still, it’s interesting and seems to validate that 10th point among those top 10 from Edelman.
Neville Hobson: Agree. So lots to unpack here. We've really just scratched the surface and shared some opinions of our own. There'll be links to the report and some other content in the show notes if you want to dive into it.
Shel Holtz: And we're going to switch gears now and talk about artificial intelligence for at least the next two reports. These are very complementary reports: the one I'm about to share, and then after Dan York's report, Neville, your story. So let's get started. There is a striking disconnect happening in corporate America right now, and it comes down to a gap in perception. Leaders think their AI rollouts are going great, while the view from the cubicle is "not so much." Let's start with the numbers. A Gallup survey of over 23,000 workers found that 45% of American employees have used AI at work at least a few times. Sounds encouraging, doesn't it? But wait—only 10% use it every day. Even frequent use sits at just 23%. So despite a year of breathless hype and massive corporate investment, actual day-to-day adoption remains marginal. And here's what may be the most telling statistic: 23% of workers, including 16% of managers, don't even know if their company has formally adopted AI tools at all. Now, think about that. Nearly a quarter of your workforce is so disconnected from the organization's strategy that they can't say whether one even exists. This gap points to a shadow IT problem: employees using personal tools like ChatGPT while remaining completely unaware of their employer's official path forward. That's probably what we're seeing in a lot of organizations.
The adoption pattern breaks down along predictable and frankly troubling lines. Usage is concentrated where you would expect: in technology organizations, where 76% of employees are using AI, and in finance companies, at 58%. But in retail and manufacturing, those numbers crater: 33% in retail and 38% in manufacturing. AI adoption is landing in the same place it always does: among the people already closest to the technology. Now, contrast this with JPMorgan Chase, which has become the poster child for successful enterprise AI adoption. When they launched their internal LLM suite, adoption went viral. Today, more than 60% of their workforce uses it daily. That's six times the national average. Now, what did JPMorgan do differently? Their chief analytics officer, Derek Waldron, says they took a "connectivity-first" approach. Instead of giving employees a login to a generic chatbot and calling it a day, they built AI that actually connects to the bank's internal systems—their customer relationship management package, their HR software, their document repositories. An investment banker can now generate a presentation in 30 seconds by pulling real internal business data. The bank also understood the Kano model of satisfaction. They made the tools genuinely useful and voluntary. They didn't mandate usage. They bet that if the tool solved a problem, word would spread organically. They also ditched generic literacy training for segmented training—that is, teaching people how to use AI for their specific work.
Now here's where things get a little uncomfortable. JPMorgan has been candid about the consequences. Operations staff are projected to decline by 10%. While new roles like context engineers are emerging, the bank hasn't promised that everyone will keep their job. Meanwhile, at most other organizations, we're hitting a "silicon ceiling." BCG, formerly Boston Consulting Group, found that while three-quarters of leaders use generative AI weekly, use among frontline employees has stalled at 51%. The problem is a leadership vacuum. Only 37% of employees say their organization has adopted AI to improve productivity. A separate Gallup study found that even where AI is implemented, only 53% of employees feel their managers actively support its use. Then there's the trust issue. Nine in 10 workers use AI, but three in four have abandoned tasks due to poor outputs. The issue here isn't access; the issue is execution. People don't know how to prompt or critically evaluate the results. Worse, 72% of managers report paying out of pocket for the AI tools they need to do their work. In response, some companies are taking a hard line. Meta has announced that starting in 2026, performance reviews will assess AI-driven impact. In other words, AI use is no longer optional at Meta. So where does this leave us? We have bullish leaders making massive investments while their workers are either unaware of the strategy or worried that using AI makes them look replaceable. The fundamental problem is that companies are deploying AI as if it's just another software rollout. And it is not. It requires rethinking workflows, investing in specific training, and building tools that connect to real business data. The gap between AI hype and actual adoption isn't going to close until organizations figure that out.
Neville Hobson: There's a lot in there, Shel, that is interesting, I have to say. I think JPMorgan is a use case that's definitely worth studying. I'm reading the article that appeared in VentureBeat about that. It talks about "ubiquitous connectivity"—great, two words put together—plugged into highly sophisticated systems of record. You mentioned how integrated this was with all their internal systems. So you can see some things there that you don't hear other companies explaining that way. The forward-looking approach… they've got leaders who are treating this the right way. As you said, they didn't just enable this and then say, "Here you go." They developed it as an ongoing thing in conjunction with employees, which is really good. I think, though, that the alarm bells ring in the first part of your report, where you were talking about how employees say they're fuzzy on their employer's AI strategy, with many not knowing whether their employer has one or not. I'd like to think that's not the majority, but I fear that view may be misplaced, because the organizations that don't leave employees in the dark—in other words, the ones that do it the right way—are the ones reaping the benefits. And there are lessons, simple lessons, to learn from that.
Workers who use AI tend to be most likely to use it to generate ideas and consolidate information, Gallup says in introducing their survey report. That makes sense, doesn't it? So you've got to enable that in an organization. I think we'll talk more about this when we get to the report you mentioned, the one coming after Dan's report, which expands on this quite significantly. But there are some lessons to be learned from some of the things we've discussed on this podcast in recent episodes. You mentioned Boston Consulting Group; we'll talk a bit more about the survey they did that paints a very different picture on this. Still, I have to say I've seen other reporting, including some of the pieces you shared here, that talks about the huge gap between the views of leaders in organizations and the opinions of employees in those organizations on the state of AI and the benefits it's supposed to bring. There's the Harvard Business Review report you shared as well—there'll be links to that in the show notes—headlined "Leaders assume employees are excited about AI; they're wrong." And they've got some really credible data to back that up. The higher you sit in the organization, the rosier your view. Is that not true of many things in an organization, I wonder, that you're insulated from some of the reality? Is there something communicators can do to alleviate that little problem? I suspect so. These are disconnects that do not help the organization if you really do have blind spots like that, I think. So it's good to see this. The HBR talks about a survey they did of 1,400 US-based employees. 76% of execs reported their employees feel enthusiastic about AI adoption. But the view from those employees was not that at all—just 31% of them expressed enthusiasm. That's a bit different to what the execs are saying. So I wonder how we get to that reality, and then you add that to the climate of trust we discussed in the Edelman Trust Barometer, and the landscape's looking like a very tricky one for communicators in a wide range of areas. Add this to that list of concerns.
Shel Holtz: Yeah, this report has really been focused on adoption among employees. You're going to take a different spin on this after Dan's report, around the perception gap between executives and employees. But I think it comes down to mismanagement of the rollout of AI in, I would have to say, most organizations. And I think a lot of different factors contribute to this, but leaders need to be paying more attention to what they want from AI. I mean, is it really just evaluating tools that have AI baked into them that we can bring into the organization? Or is it rethinking the organization writ large based on what AI can do in a more organic way? I love the point out of JPMorgan that an analyst can now create a deck in 30 seconds because the AI has access to all the internal data. That's valuable. An employee can say, "That is something that is worthwhile to me." Whereas you give them access to Copilot because you have an Office 365 contract in your organization and everybody has access to it, and you say, "Here's Office 365, godspeed." And you provide basic training to everybody that says, "Here's how you write a prompt and here's how you look for hallucinations," and blah, blah, blah. But it doesn't tell somebody in a particular role what this can do for them. They're going to leave that saying, "Okay, I think I can craft a good prompt now. Why would I want to do that? What would I prompt for?" I think this requires much more attention on the part of leadership and much more commitment to viewing this as a change initiative that has to be led from the top.
Neville Hobson: Yeah, you made the point earlier about this being to do with adoption and rollout as opposed to perception, but they're both connected, according to Harvard's report anyway. They talk about how "when organizations see AI adoption as a way to make work better for employees and communicate that, as opposed to as a pursuit of efficiencies and productivity, AI efforts gain traction." And that's repeated in many of the surveys we could talk about. They communicate a shared purpose, involve employees in shaping the journey, and move people from resistance to enthusiasm. Makes total sense to me. The Harvard report also talks about employee-centric firms. I thought every firm was an employee-centric firm, but maybe I got that wrong. At those firms, employees on average are 92% more likely to say they are well-informed about their company's AI strategy and 81% more likely to say that their perspectives are considered in AI-related decisions. That's a huge percentage, I have to say. They're also 70% more likely to feel enthusiastic and optimistic about AI adoption, reporting emotions such as empowerment, excitement, and hope rather than resistance, fear, or distrust. Communication and execution, hand in hand: that's the pathway, I suppose. So slow employee adoption of AI is clearly the norm, by my judgment, based on what you've been saying, what I've listened to, and what I'm seeing in some of these reports. Makes me wonder: surely it's a known situation that communicators can get hold of and do something about, I would have thought. So would we expect to see a change in that area? I hope so.
Neville Hobson: Great report, Dan. Thanks very much indeed. I enjoyed listening to your assessment of Wikipedia over the past 25 years. I'm a huge user of Wikipedia, and I'm as conscious as you are, and many others, of some concern about the challenges Wikipedia is facing with misinformation, disinformation, AI, the works getting involved. I'm looking with interest at how Wikipedia is addressing some of those things. I receive a lot of communication from them; I've been a donor for years to support Wikipedia. I'm pleased to see them recognizing the shifting landscape and doing something about including AI in some form in the editorial or editing elements of content on Wikipedia. It's a challenge without question. So your take on being an editor all those years is interesting, Dan. I've done a bit of that, nowhere near as much as you have. And it is interesting: I come across things I read on Wikipedia—I do read it quite a bit when I'm looking for information—where I will see something and think, "That's not right." And I might propose an edit in the talk pages. Rarely do I dive in and edit unless it's something so obviously wrong, or unless I've got a source I can cite. So yeah, it's interesting. And I remember you mentioning before your live editing streams on Twitch. They're pretty cool. Yeah.
Shel Holtz: I remember watching those during the pandemic. That was fun.
Neville Hobson: Yeah, so great recap, Dan, thanks very much; worth listening to. So let's continue the conversation on the views of CEOs and how they differ from employees' on AI introduction. I'm going to reference a Wall Street Journal story about a survey finding that "CEOs say AI is making work more efficient; employees tell a different story." Much of the public narrative around generative AI in organizations has been framed as a productivity story—one where AI is already saving time, streamlining work, and delivering efficiency at scale. We touched on a lot of that in your earlier report, Shel, and our conversation there. But a recent Wall Street Journal report suggests there's a growing disconnect between how senior leaders perceive AI's impact and how employees are actually experiencing it day to day. The Journal's reporting draws on a survey by the AI consulting firm Section, based on responses from 5,000 white-collar workers in large organizations across the US, UK, and Canada. The headline finding is stark: two-thirds of non-management employees say AI is saving them less than two hours a week or no time at all. By contrast, more than 40% of executives believe AI is saving them eight hours a week or more. There's a disconnect, it seems to me.
Beyond time savings, the survey highlights a clear emotional divide. Employees are far more likely to describe themselves as anxious or overwhelmed by AI, while senior leaders are more likely to say they feel excited about its potential. Many workers say they are unsure how to incorporate AI into their roles, and that whatever time is saved is often offset by having to check outputs, correct errors, or redo work. At the same time, companies are continuing to invest heavily in artificial intelligence, betting that it will drive future productivity and profit growth, even as evidence of near-term financial returns remains limited. Separate CEO surveys cited by the Journal suggest that only a small minority of leaders say AI has yet delivered meaningful cost or revenue benefits. The Journal also points to real-world examples where ambitious AI deployments have required human correction or reversal, reinforcing the idea that in practice, AI adoption is uneven, unpredictable, and highly dependent on context, skills, and judgment. This picture sits alongside other research we've discussed on For Immediate Release. In FIR 497, we talked about BCG's AI Radar report—that's Boston Consulting Group—which argues that AI has moved beyond experimentation and is now a CEO-owned strategic mandate. That same research also places communicators at the center of managing expectations, trust, and organizational change. We've also seen consistent findings, including in the report you highlighted that we discussed just a few minutes ago on slow employee adoption of AI, showing that while awareness of AI is high, employee adoption and understanding lag well behind leadership ambition.
Taken together, this raises an important tension. At the top of organizations, AI is increasingly seen as transformational and inevitable. On the ground, many employees are still grappling with how it fits into their work and whether it’s genuinely helping them do their jobs better. So what does this divergence between executive optimism and employee experience reveal about how AI is being introduced, communicated, and governed within organizations? Is the human side of AI adoption the real constraint on its promised productivity gains?
Shel Holtz: All of this data is fascinating. And one of the things that strikes me is that we tend to look at data from research, surveys, reports, studies about AI in business. What about people just generally—what do they think about AI just as people living their lives? There was a study that came out from Pew just last September, so this is current data. I know four months is a million years in the AI life cycle, but still, this is fairly recent data. And what they found—I'll skip the numbers and just give you some highlights here—is that, and this is a study out of the US, Americans are much more concerned than excited about the increased use of AI in daily life. A majority say they want more control over how it's used in their lives. Far larger shares say AI will erode rather than improve people's ability to think creatively and form meaningful relationships. People are open to letting AI help them with their day-to-day tasks, but they don't support it playing a role in personal matters: religion, matchmaking… they're more open to it for data analysis, like weather forecasting, things like that. They also think it's important to be able to tell whether pictures, videos, or text were made by AI or by humans, but they don't trust their own ability to spot AI-generated stuff. Now, think about what that means: this is how people who are just out there living their lives, outside of the context of work, feel. Then they go to work. And they're told, "AI, it's going to be great." And they bring all of these perceptions from their regular lives into the office, into the workplace. And that's an impact, too. I think this is something that internal communicators and leaders have to take into account. There's this expectation that AI is going to make your job easier, it's going to make your product, whatever your output is, better… it's just going to make everything rosy. And people have already got these biases based on perceptions just from life. We have to take that into account in our communication. I don't think this was something anybody was thinking about when we were first introducing it, because there was no research yet. It was as new to people living their lives as it was to people doing their jobs. But now you have these perceptions that have been formed about AI as just something that's there as part of life. And if we don't factor that into the communication that we do around AI in the workplace, we're going to struggle to get people to trust it and to figure out how to employ it to make their work better and to support the goals of the organization.
Neville Hobson: Yeah. So the big question then is: what needs to happen to move the needle for communicators to grasp this challenge? When we discussed BCG’s AI Radar report, the clear message there was that AI is no longer just experimentation and is now a CEO-owned strategic mandate. So CEOs are taking over control of that, certainly given the investment going into AI in organizations. And indeed, one of the findings in BCG’s report was that success in deploying AI, and the ROI on that deployment, is now a core measure of CEO performance, it said. Well, that takes it up to a whole different level. So it presents opportunities, I think, for communicators to help that CEO achieve the goals they’re going to be measured on by communicating with employees. This kind of circles back to what we were discussing earlier. Let’s set aside the surveys saying CEOs are in charge of all this now, this is great, everything’s going to be wonderful; the reality right now, today, is that if employees on the ground are still grappling with how it fits into their work, then that needs to be addressed. And you’ve introduced an interesting element to that picture, Shel, where employees of an organization are exposed to all the negative commentary about this externally, in their lives generally. They bring that to work with them and encounter what they see there. And they hear the CEO saying, “This is all great.” So these are genuine issues that must be addressed; otherwise, as you say, trust is going to be lacking all the way. Put that in the context of Edelman’s Trust Barometer and the shifts in trust it describes, and it’s not a pretty picture at all, I don’t think.
Shel Holtz: Yeah, in the framework for internal communications that I developed—it’s the subject of the book that I’m working on—one of the key roles on a day-to-day basis for internal communicators is consultation, and that’s consultation up the organization. And I think this is an instance where that role is paramount. We need to be talking to our leaders about this. The fact that the BCG radar report says that this has become a CEO issue doesn’t mean that every CEO has done that. I think there are a lot of CEOs who see this—still see this—as an IT issue. And even if they’re using it in their jobs, they don’t think it’s something that they need to be leading; it’s something they think that their CIO needs to be leading. And I think we need to present this data to our leaders. I think we need to talk about why this needs to be led by the business and not one of the support teams. Consultation is what we need to be doing at this stage in addition to maintaining the drumbeat of why this is effective and how you can use this with the frontline employees who are actually going to be using these tools to make a difference in the organization.
Neville Hobson: Yeah, I would agree with that. So the answer to the question I posed at the end of the intro to this—is the human side of AI adoption the real constraint on its promised productivity gains?—I guess would be yes.
Shel Holtz: I would say absolutely yes, and I think that’s where organizations need to be shifting their investments. And I think I saw data that says they are shifting their investments. I think something like 60 or 70% of what organizations are investing in AI is now focused on the people in the organization.
Neville Hobson: That’s a good move.
Shel Holtz: Yep. Well, let’s leave AI behind for a bit and talk about something a little more strategic in the internal communication world, and that’s “alignment,” which has become one of those corporate North Stars that everyone nods at but few organizations actually achieve. And there’s a paradox at the heart of this: the very act of trying to force alignment—meetings, memos, check-the-box town halls—can make the disconnect worse. Let me start with three simple definitions that frame the alignment problem, courtesy of Stephen Waddington, whose PR credentials are far too many to list here. He says that leadership is the role of setting strategy and goals. Management is the process of measurement and continual improvement against those goals. Execution is delivery against those goals. All right, a pretty simple model, right? Leadership, management, and execution. But here’s what happens in practice: this chain almost always breaks down, and more often than not it’s alignment—the invisible thread meant to tie strategy, management, and execution together—that frays first.
Now Zora Artis, who has been a guest on FIR interviews, and Wayne Asplund have been studying the alignment problem for years. Their latest research, the CLEAR Leaders Project, revisits strategic alignment because despite how important it is, the same problems keep recurring. They conducted confidential interviews with senior leaders across communications, HR, strategy, and operations to explore how alignment is understood, practiced, and experienced in organizations today. And here’s the uncomfortable finding: seven years after their previous benchmark study, the gap between alignment in principle and alignment in practice is just as wide as it always has been. It’s universally valued yet almost never achieved. Now think about what this means. We’re not getting better at this and we should. I mentioned before that consultation is one of those daily activities in the ring around my framework circle. So is alignment. And despite all the strategy decks, the town halls, the carefully crafted vision statement, this problem persists. Why? Because senior leaders live inside the strategy. They’ve shaped it, debated it, and refined it, but that proximity breeds a dangerous assumption: the closer you are to a strategy, the more you assume its clarity is shared. What leaders often hear as consensus is actually silence. And in too many organizations, silence is misread as buy-in. Now here’s the thing about strategy: it doesn’t cascade like water; it distorts as it moves. It’s shaped by language, culture, experience, and hierarchy. A strategy that’s crystal clear at the top becomes a muddled set of ideas by the time it reaches teams on the ground. Ownership gets lost, accountability blurs, execution slows.
Now, Zora and Wayne’s original 2018 study of more than 200 senior communicators found that only 35% felt their organization was aligned to its corporate purpose. Only 40% used corporate purpose as a key part of employee communications. Think about that. We define the purpose of the organization; only 40% use that purpose as a part of their communication with employees. Now, fast forward through a pandemic, through massive technological disruption, through all the lessons we supposedly learned about clarity and communication, and the numbers still haven’t meaningfully improved. So what is this paradox? The very mechanisms we use to create organizational scale—you know, we subdivide work, we create functional specialization, we establish hierarchies—these are the mechanisms that fragment the information, decision rights, and incentives that guide individual decisions. We create silos to manage complexity, and those silos then work against our ability to align. Research from Strategy and Business frames it differently but arrives at the same place. When strategies aren’t implemented effectively, leaders tend to view their people as “irrational.” But workers and managers are actually rational actors. Their choices reflect sensible decisions in the context of what each of them knows and understands. The problem isn’t the people; it’s the organizational environment that’s encouraging decision-making that conflicts with overall objectives.
Fortune magazine estimates 70% of CEO failures are caused not by flawed strategic thinking, but by failure to execute. Most management teams don’t fully appreciate the role of the organization in undermining performance. They lack time or resources to understand how the organizational models actually work. They’re frustrated by their inability to realize objectives, but they rarely identify the interacting assumptions and misaligned incentives built into their own structures as the root cause. Here’s where the research gets really interesting. Artis and Asplund’s work reveals that alignment isn’t a noun, it’s a verb. It happens through repeated behavior, not bold declarations. The temptation is to treat alignment as a messaging issue: clearer cascades, sharper narratives, better packaging. But alignment isn’t about communication tactics; it’s about leadership behavior. In today’s environment, the traditional playbook of strategy decks, town halls, and posters on the wall simply doesn’t work anymore. The challenge is how consistently leaders live and lead the strategy every day. That requires holding the tension between spread and clarity, decisiveness and dialogue, direction and dissent. It means slowing down when speed tempts shortcut thinking, inviting challenge when comfort suggests consensus, being consistent in action as well as intent, and most critically, checking for understanding, not just repeating messages.
There’s also the “shallow versus deep” alignment problem. Shallow alignment is tactical: agreeing on plans, checking boxes. Deep alignment is about the fundamental “why.” Organizations need both, but they often confuse one for the other. They think because everyone showed up to the strategy offsite and nodded along that they have alignment. Six months later, they’re baffled when nothing has changed. Artis’ research through the pandemic showed that organizations that thrived had articulated a strong sense of purpose and used it to guide decision-making. Airbnb’s Brian Chesky spoke about their purpose as their North Star, giving them permission to morph their business strategy in response to threats and opportunities. Yet a McKinsey study found that while 82% of companies affirm the importance of purpose, only 42% thought their purpose statements had any actual impact.
So what’s the way forward? Well, first, we have to stop treating strategic alignment as a communication challenge to be solved. It’s an ongoing act of leadership that demands humility, curiosity, and deliberate behavior. The best leaders aren’t the ones who shout the strategy the loudest; they’re the ones who stay aligned when pressure hits, who listen rather than assume, and who practice alignment as part of their everyday leadership. Second, communicators need to fundamentally shift their role. As Zora and Wayne’s research shows, communication professionals have an enormous, mostly untapped opportunity here, but only if they move from being seen as tacticians to being seen as strategic advisors. That means being the function that surfaces the misalignments: the conflicting incentives, the information gaps, the unclear decision rights. And it means working with leadership to fix them. Third, leaders need to acknowledge that you can’t communicate your way out of structural problems, but you can use communication to identify them. Alignment requires examining the organizational environment, not just restating aspirations or exhorting people to do better, but actually changing the conditions under which people make decisions. The alignment paradox won’t be solved by better PowerPoints. It requires recognizing that leadership is a practice, not a position. And it requires understanding that the very things that make organizations functional at scale are the same things that make alignment extraordinarily difficult. The question for every leader is whether you’re willing to confront this paradox honestly or whether you’ll keep mistaking silence for consensus and proximity for clarity.
Neville Hobson: Yeah, that’s a very interesting analysis, Shel. I think Zora and Wayne have done some good work here, judging just from Zora’s Substack post about this. Something struck me from this that I think puts it in perspective for people who perhaps don’t work in large organizations, because the research is clearly geared to that. And yet alignment doesn’t require a large organization. I say that because I went through an alignment exercise myself as a sole practitioner last year; I did a webinar for IABC’s consultants group on this exact topic. It’s not about balance; it’s about alignment. They’re two different things. A couple of things leapt out at me from Zora’s post. She writes: “Alignment doesn’t fail because leaders lack intent. It weakens when shared clarity, ownership, and accountability diverge and commitment isn’t strong enough to hold together. When that happens, effort increases, but traction declines.” That speaks precisely to the point that, eight years later, nothing much has moved. She also says alignment demands “humility, vulnerability, and sustained commitment,” and this is the bit that resonated most with me: it “requires leaders and their teams to slow down, invite challenge, and stay open to perspectives that complicate the narrative.” To me, that was the bottom line of this whole argument: slow down. Examine things with better purpose than you have before. Choose to do things, if you can, because they matter, not just because they’re available or because they’re going to make you a lot of money, although that’s hard in a large organization, I think.
I think it’s something each one of us needs to pay attention to, not just if you’re an employee in a large organization: to start your own shift—to look at what you are doing as a consultant or as a communicator in an organization and how aligned it is with your own values and those of your organization. I don’t think people do that properly—maybe “effectively” is a better word than “properly.” So this is a valuable piece of research, with some great points to zone in on and consider in your job as a communicator and, indeed, as an individual. To me, the biggest one is velocity. Get rid of velocity; slow down. Take more time on things. Resist the temptation to think velocity equals “busy.” Well, it doesn’t. Busyness may not be the same thing at all. Indeed, in Zora’s article, she quotes someone saying, “Ego, fear, hubris, and speed push in the opposite direction.” That relates back to humility, vulnerability, and sustained commitment. And it’s absolutely true. You see it in large organizations in particular. So there’s a lot to learn from this. The piece also asks why this is intensifying now. The dynamics aren’t new, Zora says, but they’re becoming more consequential. Strategy cycles are shorter. The context leaders operate in is more complex. Decisions are made faster, with less time for shared sense-making. Misalignment is therefore more likely, in which case: slow down. If you say it enough, you will slow down. Resist the pressure to speed up, even. That’s not always easy; it depends on many factors. But if you’ve got a leader you’re working with who subscribes to this view of “it’s not about velocity, it’s about taking the time to consider things and discuss them with others, shared sense-making,” as the article says, it’s worth doing. So, like you, I’d say I’m looking forward to reading this report when it comes out in February.
Shel Holtz: One of the things that jumped out at me as I was researching this for the report was the whole idea of the structure of the organization being a hindrance to alignment. I don’t know how many organizations have ever undertaken a “structure audit.” Structure is created so that similar work gets done in one place, and it does create those silos. I think leaders are confident that the structure they have created is the right one for the organization, but have they tested that structure against other things that are important to them? I don’t know if there’s such a thing as a structure audit; I’ve never heard those two words used together. It may be time to develop one and say, “Yes, we understand the structure works for our process of getting our product out, for example, but what does it do for these other four things that are priorities in the organization? Is it a hindrance to them? And do we need, as a result, to make changes to our structure so these four other priorities gain more traction? Or do we need to rethink how we are implementing these priorities so that they will be effective given the structure we want to maintain?” But I don’t think anybody’s thinking about that right now at all. And I think it’s something to be raised.
Neville Hobson: Yeah, indeed. I think the Substack article talks about that a bit, saying, “Alignment is a discipline for leaders and teams. Without commitment, it shows up in moments and disappears when it’s tested.” So yeah, plenty to pay attention to here, Shel, I think. So let’s talk about something I think is quite an interesting topic. One we’ve talked about before on For Immediate Release, but not for a few years probably. And this is all about Mark Zuckerberg, head of Meta as it was renamed some years ago from just Facebook, and his recent “U-turn,” as the media are describing it—what that means for the future of virtual reality. So for several years, Zuckerberg placed a bold bet on virtual reality and the metaverse as the next major computing platform. That vision reshaped the strategy and even the name of Meta as the company poured tens of billions of dollars into Reality Labs, launched VR headsets, and promoted immersive virtual worlds as the future of work, social connection, and everyday computing. That vision now appears to be undergoing a significant reset.
In January, multiple reports confirmed that Meta is making deep cuts to its Reality Labs division, laying off around 1,500 employees, roughly 10% of the unit. According to the Wall Street Journal, the move reflects a deliberate shift in investment away from the metaverse and towards AI, particularly AI-powered wearables such as smart glasses. Reality Labs has reportedly lost more than $77 billion since 2020. Eye-watering numbers here, Shel. And consumer-facing platforms like Horizon Worlds have struggled to attract sustained engagement. I’m sure Zuckerberg’s glad he’s got that clause in the contract that says they can’t fire him for any reason whatsoever. But coverage from Futurism is even more blunt than the Wall Street Journal. It’s framing the layoffs as a clear signal that Meta’s consumer metaverse ambitions are being wound down after years of underperformance. Entire VR game studios have been shuttered. And while some platforms remain active, they are doing so at a far smaller scale as capital and leadership attention pivot decisively towards AI.
A more nuanced perspective comes from The Conversation in an analysis by Per-Ola Kristensson, professor of interactive systems engineering at the University of Cambridge. He argues that this apparent U-turn does not mean immersive technology itself has failed. Instead, it reflects the limits of fully immersive virtual reality as a mass-market everyday computing platform. Drawing on years of academic research and user studies, Kristensson notes that while VR works well for specialist use cases—such as training surgeons, engineers, or pilots—it performs poorly as a general-purpose work environment. Extended use is associated with higher workload, lower perceived productivity, increased fatigue, anxiety, and usability problems. In short, VR can be impressive, but it is often too immersive, uncomfortable, and impractical for routine daily work. Crucially, The Conversation suggests that what we’re seeing is not the end of immersive computing, but a shift away from VR towards augmented and mixed reality—less immersive technologies that layer digital information onto the physical world rather than replacing it entirely. Products such as Microsoft’s HoloLens are cited as examples of this approach, where virtual information supports real-world tasks rather than pulling users into a separate virtual space. This distinction matters because much of the current retrenchment is about the consumer metaverse—the idea of mass adoption of shared virtual worlds for socializing, working, and entertainment. On that front, the hype has clearly run ahead of reality. By contrast, business and enterprise use of immersive technologies is not disappearing. Credible reporting and research continue to show steady, if unspectacular, adoption in areas such as training and simulation, product design, digital twins, remote maintenance, healthcare, and specialist education. In these contexts, immersive tools are judged by whether they improve safety, accuracy, learning, or cost efficiency, not by whether they attract millions of daily users.
In other words, what appears to be collapsing is a grand consumer vision of the metaverse, not the underlying technologies themselves. The center of gravity is shifting from spectacle to practicality, and increasingly towards combinations of AI, augmented reality, and task-specific immersive tools, rather than all-encompassing virtual worlds. Shel, I know you’re a fan and a user of VR headsets. Why don’t we look at this moment—what this moment really represents—whether Meta’s pullback marks the end of virtual reality as a serious platform or simply the end of a particular story about it. And what this tells us about how emerging technologies mature once the hype cycle collides with everyday reality.
Shel Holtz: Yeah, I was pretty much a daily user of VR with the Meta Quest 3 until I developed this back problem; I’m limited in what I can do until it’s addressed. I use several apps, and they’re all workout apps. I have to say that the only thing I’ve been using it for over the last few years is working out. I don’t exclusively work out with the headset, but that’s always how I start, and I always start with the same app. It’s called Supernatural. Supernatural was an app in the Meta app store, but it was a separate company, and Zuckerberg wanted it. He wanted these companies to be part of Meta so that he could showcase them as part of his effort in the metaverse. Now, Supernatural employed a bunch of people. It had what they call choreographers—these are the coders who create the workout routines so that they work right and are synced with the popular music in these workout sets, which come in categories: rap, classic rock, metal, classical, jazz, soul, R&B. You would pick either a boxing workout or what they call a flow workout. And there are coaches—there are six coaches who would guide you through these and lead you through the warmups and the cool-downs. There’s a Facebook group for people who use the app with about a hundred thousand members, and the estimate is that there are about a hundred and thirty thousand active monthly users of Supernatural.
And most of those 1,500 who were fired from Reality Labs at Meta were the coaches and the choreographers and the people who make Supernatural go. They’ve made the point that the app is going to stay and that all the workouts created up to this point—and there are, I think, thousands of them—will remain available. But I don’t know how long it’s going to stay, because they’re going to have to renew the music licensing. This isn’t the music you hear on other workout apps from artists you’ve never heard of, which doesn’t require licensing through the big music licensing organizations. This is popular music. This is today’s top artists and the top artists of the classic rock era and the like. So it’s expensive to license that music, and when that licensing comes up for renewal, I don’t know if we’re going to continue to see this music available. And I think the whole thing is going to fall apart. There’s a tremendous effort among the users of the app to get Zuckerberg to bring it back. There’s a petition and all kinds of other efforts going on, all being discussed in the Facebook group. But what’s important to keep in mind is that the 1,500 people who were cut were working on apps that Meta either created or acquired and that are consumer-facing. There are still 15,000-plus people on the payroll there. So this is not an exit from the metaverse or the virtual reality world; this is a refinement of their approach.
It’s also important, I think, to consider the broader landscape, because Meta is not the only one doing this. And by the way, you mentioned Horizon Worlds, their metaverse. It’s awful. People go in there and ask, “Is this what the metaverse is?” Forget that. I can absolutely see that reaction, but Horizon Worlds isn’t the metaverse, and it’s not the only effort out there. Apple is still refining the Vision Pro ecosystem to define this whole spatial computing space. Nvidia is doubling down on the industrial metaverse with its Omniverse platform—this is digital twins for global manufacturing. Digital twins are going to be huge, and that’s definitely an element of the metaverse. Epic Games is building a massive persistent universe in partnership with Disney, which will probably be more appealing than Horizon Worlds; I’ve got to believe that between Epic and Disney, you’re going to get something better than Meta was able to conceive. The pivot to AI isn’t a distraction or a move away from the metaverse and VR. It’s actually the fuel for it. Generative AI is finally solving the two biggest hurdles the metaverse faced: the massive cost of 3D content creation and the “empty world” problem. By using AI to populate and build these spaces instantly, Meta and its competitors are finally making the tech scalable. They’re not retreating; they’re just waiting for their AI tools to finish building the world they promised us. So I still remain bullish on the metaverse and virtual reality. The fact that it seems to be going through a decline right now is just a dip in the chart. I think you’re going to see that trend rise again, and I think AI is going to play a big part in it. And by the way, NPCs—non-player characters in video games—AI is going to be jet fuel for them. So watch this space. I think it’ll probably be a few years. I remember Matthew Ball, who wrote the book on the metaverse, said we were 10 years away. That was, what, three years ago? So we’re still seven years out on his timeframe. Because of what’s happening with AI, that may accelerate, but I think it’s actually going to extend to probably 12 or 13 years because the focus has shifted. But as those two factors, AI and the metaverse/VR, converge, you’re going to see an explosion of this stuff down the road.
Neville Hobson: You could be right, and you’re right that there are other players. For the time being, though, it appears that Meta is ditching this to concentrate on the current thing so many people are focusing on: AI generally, and wearables, is what they’re now going to pay attention to, according to these reports. I did like Kristensson’s analysis of it all, particularly his view that fully immersive VR doesn’t really work as an everyday platform, and certainly not for consumer use, without competitive technology that genuinely appeals to people, and there are others working on that, such as you’ve outlined. The reality, though, is that Meta has announced these layoffs and the shifts they’re making to the division, and they’re not supporting it anymore for the moment, according to all the media reports I’ve seen about all of this. That doesn’t mean to say that couldn’t change—that may change—but that’s the picture right now. And the limited research I’ve done, particularly on business use of virtual reality, suggests far more promise there. Indeed, one of the reports about this couched it as “pretty unsexy stuff going on with business use of all this.” Yet excellent results are being reported by a number of companies. I remember reading a few months ago, and I think I posted about it on LinkedIn, about what BMW is doing with its car-building metaverse, where they model new vehicles in a virtual world: the production lines are increasingly staffed by robots, run by humans, and the designers and the marketers and others all get together in that virtual world to discuss the planning of a new model. That’s definitely the kind of thing they’re seeing results from. I remember, as you will, Shel, going back into the deep mists of time to a place called Second Life. All the auto companies—all of them, literally the big ones, particularly the American ones—were there with virtual cars. I’ve still got a virtual Pontiac somewhere up there on Second Life, probably still there in 2026. I haven’t logged in since, so I’m going to make a point of doing that this weekend to see what’s going on, to see if I need to upgrade my fashion and clothing, or whether it’s still valid. But that was the early stage of things we now call a metaverse. The tech has moved on significantly, and Second Life has moved on significantly with improvements to its platform. It’s one platform that doesn’t appeal to everyone, yet it’s still there with thousands of users. So there is room for all of these things. And you mentioned the book and its projection of 10 years—you think it might be longer than that; I think it might be quicker than that, even, because things are moving so fast with all of this. It’s hard to tell, but it’s worth paying attention to, both from a communication and a business perspective. And if you’re interested in how this tech is moving along generally, keep an eye on it, because I think we’re likely to see AI playing a bigger role, as you suggested, than has been the case to date. So it’s kind of: watch this space, basically.
Shel Holtz: Yeah, and in terms of the consumer use, one of the things that I was pointing out in a conversation I was having in the Supernatural Facebook group is that Meta in particular has done just a god-awful job of marketing these apps. When Supernatural was shut down, there was an article written by one of the users, who’s also a Bloomberg reporter, so it got a fair amount of attention. He thought the move to shut it down was fairly stupid, given that it has a hundred thousand paying users. And I made the point in a discussion around this that, well, you know, they have not done a good job at all of marketing it. This is an app… I mean, you look at what people were saying in the group when it was shut down, and a lot of them were saying, “I never exercised before this. I was skeptical when I tried it. But here’s my before-and-after picture: I weighed 250 pounds here and I’m 145 now.” There was a lot of that—people saying, “I never worked out before this and this is what led me to it.” And it’s because it’s fun and because of the affinity we have with the coaches and blah, blah, blah. And Meta never took advantage of any of this. They never went out there and talked about how this can change your life. And there are other workout apps out there—Les Mills Body Combat and FitXR and several others. So it’s not just a Meta problem. It’s the companies that make these, because those other apps are not owned by Meta; they’re just in the store. And there’s no marketing that I see for any of these that would bring people in. And you have to believe that somebody who’s never used this… well, there are a number of people who said, “I bought my Quest headset so I could do Supernatural after a friend showed it to me.” That’s the gateway to other apps and other tools, to people finding, “Maybe there is some utility here. Maybe I do like playing VR games,” or what have you. And they just haven’t done this. I have to say, it’s not surprising, because Meta’s marketing has never been good for anything. But it seems to me that in this whole virtual reality space, the marketing has been poor from the beginning.
Neville Hobson: Yeah. So let’s also throw into the pool of memory-lane stuff here… I was a huge fan and a regular, literally daily user of Microsoft’s Kinect—K-I-N-E-C-T—which I used purely for fitness: jogging in place, all the exercises, the works. And I was really unhappy when they canned it and got rid of the whole ecosystem building around it. That must have been around 2009, 2010, 2011—that kind of timeframe. But my Kinect worked brilliantly on the Sony TV I had at the time. Absolutely super. I miss it, because I’ve never really used any of this technology for exercise since then, whereas that’s what I used my Kinect for exclusively: running alongside my virtual trainer, with both of us on the screen. That was really cool. So things move on. But it is interesting, though, Shel… you wonder why on earth a company would shut down something that was making tons of money and had a community, and so on. There are other forces at work here that lead to those kinds of decisions. It may not make sense from the outside, but if you’re inside that organization, it probably does, because there’s something else going on they haven’t announced publicly, or whatever it might be. So hence my own view: I’m not as bullish as you are about any of this from a consumer point of view. Not yet, anyway. I think things have to develop further.
Shel Holtz: But my bullishness is over a long horizon. It’s not something imminent. Yeah.
Neville Hobson: No, I get it. I get it. And yet I wonder if we might see something happening. And I’m now thinking more of the other forces at work in the world generally: the changes going on in trust, stuff like that. What impact will this have? You mentioned Nvidia doing some stuff. We’ve got other players, particularly in China, working with technologies that can do this kind of thing, in a market where they’ve got, what, a billion people who could take advantage of all this. So there’s so much going on. Worth paying attention to all of it, I think.
Shel Holtz: Yeah, I’m looking forward to checking out this persistent universe that Epic Games and Disney are working on, because you know AI is going to factor into that and into the ability to keep the world generating new places, so that wherever you turn, there’s going to be something new. It’s going to be good; I would put money on that. That’ll bring this episode of For Immediate Release to a close. Just a couple of quick notes before we go. First of all, later this week we’re dropping our FIR Interview for January. It was a really good interview with Philippe Borremans, who we mentioned earlier. He left one of the comments that we read early in the show. Philippe specializes in crisis communication, and we talked to him about crisis and AI. It’s a really interesting interview, and he’s got tremendous subject matter expertise. So if you deal with crisis communication, this is one you don’t want to miss. In terms of today’s episode, we do hope that you will leave comments. Most of the comments we get are on our LinkedIn posts announcing the episode, and we’re grateful for you sharing your comments there. You can also email them to us at fircomments at gmail.com.
We would still love to get an audio comment one of these days. We used to get those all the time; they actually drove our discussion for much of the show. We haven’t had one in probably a couple of years, but you can record one right on the FIR website at firpodcastnetwork.com. There’s a link on the right-hand side that says “send voicemail.” Just click that. You’ve got 90 seconds to get your message across; if you record more than one, I’ll put them together. You can leave comments directly on the show notes on the FIR website. You can also leave comments in the FIR Facebook group, on the FIR Facebook page, or on either of our posts on Facebook, Bluesky, or Threads, because we share the release of each episode in all of those places. Your ratings and reviews on Apple Podcasts or wherever you get your podcasts are also greatly appreciated. Our next episode will be next week; that’ll be a short midweek episode. We’ll continue to produce those, but our next long-form monthly episode will be released on Monday, February 23rd. Until then, that will be a 30 for this episode of For Immediate Release.
The post FIR #498: Can Business Be a Trust Broker in Today’s Insulated Society? appeared first on FIR Podcast Network.