
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the massive technological shifts driven by generative AI in 2025 and what you must plan for in 2026.
You will learn which foundational frameworks ensure your organization can strategically adapt to rapid technological change. You’ll discover how to overcome the critical communication barriers and resistance emerging among teams adopting these new tools. You will understand why increasing machine intelligence makes human critical thinking and emotional skills more valuable than ever. You’ll see the unexpected primary use case of large language models and identify the key metrics you must watch in the coming year for economic impact. Watch now to prepare your strategy for navigating the AI revolution sustainably.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
Download the MP3 audio here.
[podcastsponsor]
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn: In this week’s *In-Ear Insights*.
This is the last episode of *In-Ear Insights* for 2025. We are out with the old. We’ll be back in January for new episodes the week of January 5th.
So, Katie, let’s talk about the year that was and all the crazy things that happened in the year. And so what you’re thinking about, particularly from the perspective of all things AI, all things data and analytics—how was 2025 for you?
Katie Robbert: What’s funny about that is I feel like for me personally, not a lot changed.
And the reason I feel like I can say that is because a lot of what I focus on is foundational, and it doesn’t really matter what fancy, shiny new technology is happening. So I really try to focus on making sure the things that I do every day can adapt to new technology.
The most concrete example of that is the 5P framework: Purpose, People, Process, Platform, Performance. It doesn’t matter what the technology is. This is the framework I’m always going to ground myself in, so that if AI comes along, or shiny object number 2 comes along, I can adapt, because it’s still primarily about: what are we doing? So, asking the right questions.
What did change is that I saw more of a need this year, specifically this year, for people to understand how to connect with other people.
And not only in a personal sense, but in a professional sense: my team needs to adopt AI, or they need to adopt this new technology, and I don’t know how to reach them. I don’t know where to start. I’m telling them things, and nothing’s working.
And I feel like the technology of today, which is generative AI, is creating more barriers to communication than it is opening up communication channels. And so that’s a lot of where my head has been: how to help people move past those barriers to make sure that they’re still connecting with their teams.
And it’s not so much that the technology is just a firewall between people. It’s that when you start to get into the human emotion of “I’m afraid to use this,” or “I’m hesitant to use this,” or “I’m resistant to using this,” and you have people on two different sides of the conversation, how do you help them meet in the middle? That’s really where I’ve been focused, which, to be fair, is not a new problem: new tech, old problems.
But with generative AI, which is no longer a fad—it’s not going away—people are like, “Oh, what do you mean? I actually have to figure this out now.” Okay, so I guess that’s what I mean. That’s where my head has been this year: helping people navigate that particular digital disruption, that tech disruption, versus a different kind of tech disruption.
Christopher S. Penn: And if you had to—I know I personally always hate this question—if you had to boil that down to a couple of first principles of the things that are pretty universal from what you’ve had to tell people this year, what would those first principles be?
Katie Robbert: Make sure you’re clear on your purpose. What is the problem you’re trying to solve? Generative AI is a technology that feels all-consuming.
We tend to feel like, “Oh, I just have to use it. Everybody else is using it.” Whereas with things that have a discrete function, like an email server, you can ask: do I need to use it? Am I sending email? If not, I don’t need an email server. It’s just another piece of technology.
We’re not treating generative AI like another piece of technology. We’re treating it like a lifestyle, we’re treating it like a culture, we’re treating it like the backbone of our organization, when really it’s just tech.
And so I think it comes down to two things. One: What is the question you’re trying to answer? What is the problem you’re trying to solve? Why do you need to use this in the first place? How is it going to enhance what you’re already doing? And two: Are you clear on your goals? Are you clear on your vision? Which relates back to number one.
So those are really the two things that have come up the most: What’s the problem you’re trying to solve by using generative AI? And a lot of times it’s, “I don’t want to fall behind,” which is a valid problem, but it’s not the right problem to solve with generative AI.
Christopher S. Penn: I would imagine part of that has to do with what you see from very credible studies coming out. The one we’ve referenced multiple times is the three-year study from Wharton Business School where, in year 3 (which is 2025; the study came out in October of this year), the line that caught everyone’s attention was at the bottom: 3 out of 4 leaders see positive returns on gen AI investments, and 4 out of 5 leaders in enterprises see these investments paying off within a couple of years.
And the usage levels. Again, going back to what you were saying about people feeling left behind, within enterprises, 82% using it weekly, 46% using it daily, and 72% formally measuring the ROI on it in some capacity and seeing those good results from it.
Katie Robbert: But there’s a lot there that you just said that’s not happening universally. So measuring ROI consistently and in a methodical way, employees actually using these tools in the way that they’re intended, and leadership having a clear vision of what it’s intended to do in terms of productivity. Those are all things that sound good on paper but are not actually happening in real-life practice.
We talk with our peers, we talk with our clients, and the chief complaint that we get is, “We have all these resources that we created, but nobody’s using them, nobody’s adopting this,” or, “They’re using generative AI, but not the way that I want them to.” So how do you measure that for efficiency? How do you measure that for productivity? So I look at studies like that and I’m like, “Yeah, that’s more of an idealistic view of everything’s going right, but in the real world, it’s very messy.”
Christopher S. Penn: And we know, at least in some capacity, how those are happening. So this comes from Stanford—this was from August—where generative AI is deployed within organizations. We are seeing dramatic headcount reductions, particularly for junior people in their careers, people 22 to 25.
And this is a really well-done study, because you can see the blue line there is those early-career folks: not just hiring, but overall headcount is diminishing rapidly. And they went on to say that for professions where generative AI really isn’t part of the work, like stock clerks and health aides, you do not see those rapid declines. The one that we care about, because our audience is marketing and sales: you can see there’s a substantial reduction in the amount of headcount that firms are carrying in this area. So that productivity increase is coming at the expense of those jobs, those seats.
Katie Robbert: Which is interesting, because that’s something that we saw immediately with the rollout of generative AI. People were like, “Oh great, this can write blog posts for me. I don’t need my stable of writers.”
But then they’re like, “Oh, it’s writing mediocre, uninteresting blog posts for me, but I’ve already fired all of my writers and none of them want to come back. So I’m going to ask the people who are still here to pick up the slack.” And then those people are going to burn out and leave. So, yeah, if you look at the chart, statistically, they’re reducing headcount. If you dig into why they’re reducing headcount, it’s not for the right reasons.
You have these big leaders, Sam Altman and others, who are talking about, “We did all these amazing things, and I started this billion-dollar company with one employee. It’s just me.” Guess what? That is not the rule. That is the exception.
And there’s a lot that they’re not telling you about what’s actually happening behind the scenes. Because that one person who’s managing all the machines is probably not sleeping. They’re probably taking some sort of an upper to stay awake to keep up with whatever the demand is for the company that they’re creating. You want to talk about true hustle culture? That’s it. And it is not something that I would recommend to anyone. It’s not worth it.
So when we talk about these companies that are finding productivity, reducing headcount, increasing revenue, what they’re not doing is digging into why that’s happening. And I would guarantee that not all of it is the healthy version of that.
Christopher S. Penn: Oh, we know that for sure. One of the big work trends this year that came out of Chinese AI labs, which Silicon Valley is scrambling to impose upon its employees, is 996 culture: 9 a.m. to 9 p.m., six days a week.
Katie Robbert: I was like, “Nope.” I was like, “Why?” You’re never going to get me to buy into that.
Christopher S. Penn: Well, I certainly don’t want that either, although that’s about how much I work anyway. But half of my work is fun, so.
Katie Robbert: Well, yeah. So let the record show I do not ask Chris to work those hours. That is not a requirement. He is choosing, as a person with his own faculties, to say, “This is what I want to do.” So that is not a mandate on him.
Christopher S. Penn: Yes, this is something that the work that I do is also my hobby.
But what people forget to take into account is the cultural differences, too. And there are also macro factors that make that even less sustainable in Western cultures than it is in Chinese culture.
But looking back at the year from a technological perspective, one of the things that stunned me was how we forget just how smart these things have gotten in just one year.
There’s an exam that was built in January of this year called Humanity’s Last Exam. It’s a very challenging exam. I think I have a sample question. Yeah, here are two sample questions. I don’t even know what these questions mean, so my score on this exam would be a zero.
Here’s a thermal pericyclic cascade: provide your answer in this format. Here’s some Hebrew: identify closed and open syllables. I look at this and I can’t even make a multiple-choice guess. I simply don’t know what it is.
At the beginning of the year, the models at the time (OpenAI’s GPT-4o, Claude 3 Opus, Google Gemini Pro 2, DeepSeek V3) all scored around 5%. They just bombed the exam. Everybody bombed it. Granted, they scored 5% more than I would have, but they basically bombed the exam.
In just 12 months, we’ve seen them go from 5% to 26%, about a 5x increase; Gemini going from 6.8% to 37%, more than a 5x improvement; and Claude going from 3% to 28%, roughly a 9x improvement.
These are huge leaps in intelligence for these models within a single calendar year.
Katie Robbert: Sure. But listen, I always say I might be an N of 1. I’m not impressed by that because how often do I need to know the answers to those particular questions that you just shared?
In the profession that I am in, specifically, there’s an old saying, I don’t know how old: there’s a difference between book smart and street smart. So you’re really talking about IQ versus EQ, and these machines don’t have EQ. It’s not something that they’re ever going to really master the way that humans do.
Now, when I say this, I’m talking about intellectual intelligence and emotional intelligence. If you’ve seen any of the sci-fi movies, *Her* or *Ex Machina*, you’re led to believe that these machines are going to simulate humans and be empathetic and sympathetic. We’ve already seen the news stories of people who are marrying their generative AI systems. That’s happening. I’m not brushing over it; I’m acknowledging it.
But in reality, I am not concerned about how smart these machines get in terms of what you can look up in a dictionary or what you can find in an encyclopedia—that’s fine. I’m happy to let these machines do that all day long. It’s going to save me time when I’m trying to understand the last consonant of every word in the Hebrew alphabet since the dawn of time. Sure. Happy to let the machine do that.
What these machines don’t know is what I know in my life experience. And so why am I asking that information? What am I going to do with that information? How am I going to interpret that information? How am I going to share that information? Those are the things that the machine is never going to replace me in my role to do. So I say, great, I’m happy to let the machines get as smart as they want to get. It saves me time having to research those things.
I was on a train last week, and there were 2 women sitting behind me, and they were talking about generative AI. You can go anywhere and someone talks about generative AI. One of the women was talking about how she had recently hired a research assistant, and she had given her 3 or 4 academic papers and said, “I want to know your thoughts on these.”
And what the research assistant gave back was what generative AI said were the summaries of each of these papers. So the researcher said, “No, I want to know your thoughts on these research papers.” The assistant said, “Well, those are the summaries. That’s what generative AI gave me.” And the researcher said, “Great, but I need you to read them and do the work.” We’ve talked about this in previous episodes: what humans will have over generative AI, should they choose to do so, is critical thinking.
And so you can find those episodes of the podcast on our YouTube channel at TrustInsights.ai/YouTube. Find our podcast playlist. And it just struck me that it doesn’t matter what industry you’re in, people are using generative AI to replace their own thinking.
And those are the people who are going to find themselves down and to the right on those graphs, being replaced.
So I’ve sort of gone on a little bit of a rant. Point is, I’m happy to let the machines be smarter than me and know more than me about things in the world. I’m the one who chooses how to use it. I’m the one who has to do the critical thinking. And that’s not going to be replaced.
Christopher S. Penn: Yeah, but you have to make that a conscious choice.
One of the things that we did see this year, which I find alarming, is the number of people who have outsourced their executive function to machines. You can go on X, formerly known as Twitter, and literally see people who are supposedly thought leaders in their profession saying, “ChatGPT told me this, so you’re wrong.” And I’m like, in a very literal sense, you have lost your mind. And it’s not just one group of people.
When you look at the *Harvard Business Review* use cases—this was from April of this year—the number 1 use case for these tools is companionship, whether or not we think it’s a good idea. To your point, Katie, they don’t have empathy, they don’t have emotional intelligence, but they emulate it so well now that people use them for those things. And when we look back at the year that was, the fact that this is now the number 1 use case for these tools is shocking to me.
Katie Robbert: Separately—not on a train this time, but sitting at a bar having lunch—my husband and I were talking to the bartender, and he asked, “Oh, what do you do for a living?” So I told him, and he goes, “I’ve been using ChatGPT a lot. It’s the only one that listens to me.”
And it sort of struck me: “Oh.” It wasn’t a concerning conversation in the sense that he was under the impression it was a true human. But he was like, “Yeah, I’ll ask it a question, and the response is, ‘Hey, that’s a great question. Let me help you.’” Even just those small things, it saying, “That’s a really thoughtful question. That’s a great way to think about it,” that kind of positive reinforcement is the danger for people who are not getting it elsewhere. And I’m not a therapist. I’m not looking to fix this. I’m not giving my opinions on what people should and shouldn’t do. I’m observing.
What I’m seeing is that these tools, these systems, these pieces of software are being designed to be positive, being designed to say, “Great question, thank you for asking,” or, “I hope you have a great day. I hope this information is really helpful.” And it’s just those little things that are leading people down that road of, “Oh, this—it knows me, it’s listening to me.”
And so I understand. I’m fully aware of the dangers of that. Yeah.
Christopher S. Penn: And that’s such a big macro question that I don’t think anybody has the answer for: What do you do when the machine is a better human than the humans you’re surrounded by?
Katie Robbert: I feel like that’s subjective, but I understand what you’re asking, and I don’t know the answer to that question. That goes back to the sci-fi movies, *Her* or *Ex Machina*, which was sort of their premise, or the one with Haley Joel Osment, which was really creepy. *Artificial Intelligence*, I think, is what it was called. But anyway, people are seeking connection. As humans, we’re always seeking connection.
Here’s the thing, and I don’t want to go too far down the rabbit hole, but people have long found connection in other ways. Go back to pen pals: people they had never met, people they didn’t interact with in person, but they had a connection with someone who was a pen pal.
Then you have things like chat rooms. The AOL chat room: A/S/L. We all know, if you’re of that generation, what that means. People were finding connections with strangers they had never met.
Then you move from those chat rooms to things like these communities—Discord and Slack and everything—and people are finding connections. This is just another version of that where we’re trying to find connections to other humans.
Christopher S. Penn: Yes. Or just finding connections, period.
Katie Robbert: That’s what I mean. You’re trying to find a connection to something. Some people rescue animals, and that’s their connection. Some people connect with nature. Other people, they’re connecting with these machines. I’m not passing judgment on that. I think wherever you find connection is where you find connection.
The risk is going so far down that path that you can’t then function in reality. *Avatar* just released another installment, and I remember when the first *Avatar* movie came out, there were a lot of people very upset that they couldn’t live in that reality.
Listen, I forgot why we’re doing this podcast because now we’ve gone so far off the rails talking about technology. But I think to your point, what’s happened with generative AI in 2025: It’s getting very smart. It’s getting very good at emulating that human experience, and I don’t think that’s slowing down anytime soon.
So, we as humans: my caution for people is to find something outside of technology that grounds you, so that when you are using it, you can still tell what’s real from what isn’t.
Christopher S. Penn: Yeah. One of the things—and this is a complete nerd thing—but one of the things that I do, particularly when I’m using local models, is I will keep the console up that shows the computations going, as a reminder that the words appearing on the screen are not made by a human; they’re made by a machine. You can see the machinery working, and it’s kind of like knowing how the magic trick is done. You watch it go, “Oh, it’s just a token probability machine.” None of what’s appearing on screen is thought through by an organic intelligence.
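That “token probability machine” idea can be sketched in a few lines. This is a toy illustration, not any real model: the probability table below is invented, and a real large language model computes a distribution like this over tens of thousands of tokens at every step.

```python
import random


def sample_next_token(probs, rng):
    """Sample one next token from a probability distribution.

    `probs` maps candidate tokens to probabilities; it stands in for
    the distribution a real language model computes at each step.
    """
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]


# Invented distribution over the next token after "The cat sat on the"
next_token_probs = {"mat": 0.6, "chair": 0.25, "roof": 0.1, "moon": 0.05}

rng = random.Random(0)  # seeded so repeated runs match
picked = sample_next_token(next_token_probs, rng)
print(picked)
```

Generation is just this sampling step repeated: append the picked token, recompute the distribution, sample again. Nothing in that loop is “thinking” in the organic sense.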
So what are you looking forward to or what do you have your eyes on in 2026 in general for Trust Insights or in particular the field of AI?
Katie Robbert: I think now that some of the excitement over generative AI is wearing off, what I’m looking forward to in 2026 for Trust Insights specifically is helping more organizations figure out how AI fits into their overall organization: where there’s real opportunity versus “Hey, it can write a blog post,” or “Hey, it can do these couple of things, and I built a Gem or something.” Really helping people integrate it in a thoughtful way versus a short-term-thinking kind of way. So I’m very much looking forward to that.
I’m seeing more and more need for that, and I think that we are well suited to help people through our courses, through our consulting, through our workshops. We’re ready. We are ready to help people integrate technology into their organization in a thoughtful, sustainable way, so that you’re not going to go, “Hey, we hired these guys and nothing happened.” We will make the magic happen. You just need to let us do it. So I’m very much looking forward to that.
I’ve personally been using Generative AI to sort of connect dots in my medical history. So I’m very excited just about the prospect of being able to be more well-informed. When I go into a doctor’s office, I can say, “I’m not a doctor, I’m not a researcher, but I know enough about my own history to say these are all of the things. And when I put them together, this is the picture that I’m getting. Can you help me come to faster conclusions?” I think that is an exciting use of generative AI, obviously under a doctor’s supervision. I’m not a doctor, but I know enough about how to research with it to put pieces together. So I think that there’s a lot of good that’s going to come from it. I think it’s becoming more accessible to people. So I think that those are all positive things.
Christopher S. Penn: The thing—if there’s one thing I would recommend that people keep an eye on—is a study or a benchmark from the Center for AI Safety called RLI, Remote Labor Index. And this is a benchmark test where AI models and their agents are given a task that typically a remote worker would do. So, for example, “Here’s a blueprint. Make an architectural rendering from it. Here’s a data set. Make a fancy dashboard, make a video game. Make a 3D rendering of this product from the specifications.” Difficult tasks that the index says the average deliverable costs thousands of dollars and hundreds of hours of time.
Right now, the state of the art in generative AI (these were last month’s models) succeeded at most 2.1% of the time. It was not great. Now, granted, if your business were to lose 2.1% of its billable deliverables, that might be enough to make the difference between a good year and a bad year.
But this is the index to watch, because all the other benchmarks, like you said, Katie, are measuring book smart. This is measuring: was the work at a quality level that would be accepted as paid, commissioned work? And what we saw with Humanity’s Last Exam this year is that models went from bombing with 3% scores to 25%, 30%, 35% within a year.
If this index of, “Hey, I can do quality commissioned work,” goes from 2.1% to 10%, 15%, 20%, that is economic value. That is work that machines are doing that humans might not be. And that also means that is revenue that is going elsewhere. So to me, this is the one thing—if there’s one thing I was going to pay attention to in 2026—it would be watching measures like this that measure real-world things that you would ask a human being to do to see how tools are advancing.
Katie Robbert: Right. The tools are going to advance, people are going to want to jump on it. But I feel like when generative AI first hit the market, the analogy that I made is people shopping the big box stores versus people shopping the small businesses that are still doing things in a handmade fashion.
There’s room for both. And so I think that you don’t have to necessarily pick one or the other. You can do a bit of both. And I think that for me is the advice that I would give to people moving into 2026: You can use generative AI or not, or use it a little bit, or use it a lot. There’s no hard and fast rule that says you have to do it a certain way.
So I think that’s really when clients come to us or we talk about it through our content. That’s really the message that I’m trying to get across is, “Yeah, there’s a lot that you can do with it, but you don’t have to do it that way.” And so that is what I want people to take away. At least for me, moving into 2026, is it’s not going anywhere, but that doesn’t mean you have to buy into it. You don’t have to be all in on it.
Just because all of your friends are running ultramarathons doesn’t mean you have to. I will absolutely not be doing that for a variety of reasons. But that’s really what it comes down to: You have to make those choices for yourself. Yes, it’s going to be everywhere. Yes, it’s accessible, but you don’t have to use it.
Christopher S. Penn: Exactly. And if I were to give people one piece of advice about where to focus their study time in 2026, besides the fundamentals, because the fundamentals aren’t changing. In fact, the fundamentals are more important than ever to get things like prompting and good data right.
But the analogy is that AI is the engine; you need the rest of the car. And 2026 is when you’re going to look at things like agentic frameworks and harnesses and all the fancy techno terms for this. You are going to need the rest of the car, because that’s where utility comes from. A generative AI model on its own is great, but a generative AI model connected to your Gmail, so you can ask, “Which email should I respond to first today?” is useful.
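The “rest of the car” can be sketched loosely as a harness around the model. Everything here is hypothetical: `fetch_unread_emails` stands in for a real Gmail connector and `call_model` for a real model API. The point is only the harness pattern: gather context with a tool, then ask the model to reason over it.

```python
def fetch_unread_emails():
    """Hypothetical stand-in for a Gmail connector (a tool the harness calls)."""
    return [
        {"sender": "client@example.com", "subject": "Contract question", "urgent": True},
        {"sender": "list@example.com", "subject": "Weekly digest", "urgent": False},
    ]


def call_model(prompt, emails):
    """Hypothetical stand-in for a model API call.

    A real harness would send the prompt plus the email data to an LLM;
    a trivial heuristic keeps this sketch self-contained and runnable.
    """
    return next((e for e in emails if e["urgent"]), emails[0])


def which_email_first():
    emails = fetch_unread_emails()  # step 1, tool call: gather context
    prompt = "Which of these emails should I respond to first today?"
    return call_model(prompt, emails)  # step 2, model call: reason over it


top = which_email_first()
print(top["sender"])
```

The model supplies the judgment; the harness supplies the plumbing (credentials, data retrieval, delivering the answer), which is exactly the part most teams still have to build.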
Katie Robbert: Yep. And I support that. That is a way that I will be using it; I’ve been playing with that for myself. What that does is allow me to focus more on the hands-on, homemade, small-business things, when before I was drowning in my email going, “Where do I start?” Great, let the machine tell me where to start. I’m happy to let AI do that. That’s a choice that I am making as a human who’s going to be critically thinking about all of the rest of the work that I have going on.
Christopher S. Penn: Exactly. So you got some thoughts about what has happened this year that you want to share? Pop on by our free Slack at TrustInsights.ai/analyticsformarketers where you and over 4,500 other human marketers are asking and answering each other’s questions every single day.
And wherever it is you watch or listen to the show, if there’s a channel you’d rather have it on, go to TrustInsights.ai/tipodcast. You can find us at all the places fine podcasts are served. Thank you for being with us here in 2025, the craziest year yet in all the things that we do. We appreciate you being a part of our community. We appreciate listening, and we wish you a safe and happy holiday season and a happy and prosperous new year. Talk to you on the next one.
***
Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data-driven approach. Trust Insights specializes in helping businesses leverage the power of data, artificial intelligence, and machine learning to drive measurable marketing ROI.
Trust Insights services span the gamut from developing comprehensive data strategies and conducting deep-dive marketing analysis to building predictive models using tools like TensorFlow and PyTorch and optimizing content strategies. Trust Insights also offers expert guidance on social media analytics, marketing technology (MarTech) selection and implementation, and high-level strategic consulting encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, Dall-E, Midjourney, Stable Diffusion, and Meta Llama. Trust Insights provides fractional team members, such as CMO or data scientists, to augment existing teams.
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the massive technological shifts driven by generative AI in 2025 and what you must plan for in 2026.
You will learn which foundational frameworks ensure your organization can strategically adapt to rapid technological change. You’ll discover how to overcome the critical communication barriers and resistance emerging among teams adopting these new tools. You will understand why increasing machine intelligence makes human critical thinking and emotional skills more valuable than ever. You’ll see the unexpected primary use case of large language models and identify the key metrics you must watch in the coming year for economic impact. Watch now to prepare your strategy for navigating the AI revolution sustainably.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
Download the MP3 audio here.
[podcastsponsor]
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn: In this week’s *In-Ear Insights*.
This is the last episode of *In-Ear Insights* for 2025. We are out with the old. We’ll be back in January for new episodes the week of January 5th.
So, Katie, let’s talk about the year that was and all the crazy things that happened in the year. And so what you’re thinking about, particularly from the perspective of all things AI, all things data and analytics—how was 2025 for you?
Katie Robbert: What’s funny about that is I feel like for me personally, not a lot changed.
And the reason I feel like I can say that is because a lot of what I focus on is foundational, and it doesn’t really matter what fancy, shiny new technology is happening. So I really try to focus on making sure the things that I do every day can adapt to new technology.
And of course, probably the most concrete example of that is the 5P framework: Purpose, People, Process, Platform, Performance. It doesn’t matter what the technology is. This is where I’m always going to ground myself, so that if AI comes along, or shiny object number 2 comes along, I can adapt, because it’s still primarily about: what are we doing? So asking the right questions.
The things that did change: I saw more of a need this year specifically for people to understand how to connect with other people.
And not only in a personal sense, but in a professional sense of my team needs to adopt AI or they need to adopt this new technology. I don’t know how to reach them. I don’t know where to start. I don’t know. I’m telling them things. Nothing’s working.
And I feel like the technology of today, which is generative AI, is creating more barriers to communication than it is opening up communication channels. And so that’s a lot of where my head has been: how to help people move past those barriers to make sure that they’re still connecting with their teams.
And it’s not so much that the technology is just a firewall between people. But when you start to get into the human emotion of “I’m afraid to use this,” or “I’m hesitant to use this,” or “I’m resistant to using this,” and you have people on two different sides of the conversation—how do you help them meet in the middle? That’s really where I’ve been focused, which, to be fair, is not a new problem: new tech, old problems.
But with generative AI, which is no longer a fad—it’s not going away—people are like, “Oh, what do you mean? I actually have to figure this out now.” Okay, so I guess that’s what I mean. That’s where my head has been this year: helping people navigate that particular digital disruption, that tech disruption, versus a different kind of tech disruption.
Christopher S. Penn: And if you had to—I know I personally always hate this question—if you had to boil that down to a couple of first principles of the things that are pretty universal from what you’ve had to tell people this year, what would those first principles be?
Katie Robbert: Make sure you’re clear on your purpose. What is the problem you’re trying to solve? I think with technology that feels all-consuming, like generative AI, we tend to feel, “Oh, I just have to use it. Everybody else is using it.” Whereas with things that have a discrete function, the question is simpler. An email server: do I need to use it? Am I sending email? No? Then I don’t need an email server. It’s just another piece of technology.
We’re not treating generative AI like another piece of technology. We’re treating it like a lifestyle, we’re treating it like a culture, we’re treating it like the backbone of our organization, when really it’s just tech.
And so I think it comes down to, one: What is the question you’re trying to answer? What is the problem you’re trying to solve? Why do you need to use this in the first place? How is it going to enhance what you already do? And two: Are you clear on your goals? Are you clear on your vision? Which relates back to number one.
So those are really the two things that have come up the most: What’s the problem you’re trying to solve by using generative AI? And a lot of times it’s, “I don’t want to fall behind,” which is a valid problem, but it’s not the right problem to solve with generative AI.
Christopher S. Penn: I would imagine part of that has to do with what you see from very credible studies coming out about it. The one that I know we’ve referenced multiple times is the 3-year study from Wharton Business School where, in Year 3—which is 2025; this came out in October of this year—the line that caught everyone’s attention was at the bottom. It says 3 out of 4 leaders see positive returns on gen AI investments, and 4 out of 5 leaders in enterprises see these investments paying off within a couple of years.
And the usage levels. Again, going back to what you were saying about people feeling left behind, within enterprises, 82% using it weekly, 46% using it daily, and 72% formally measuring the ROI on it in some capacity and seeing those good results from it.
Katie Robbert: But there’s a lot there that you just said that’s not happening universally. So measuring ROI consistently and in a methodical way, employees actually using these tools in the way that they’re intended, and leadership having a clear vision of what it’s intended to do in terms of productivity. Those are all things that sound good on paper but are not actually happening in real-life practice.
We talk with our peers, we talk with our clients, and the chief complaint that we get is, “We have all these resources that we created, but nobody’s using them, nobody’s adopting this,” or, “They’re using generative AI, but not the way that I want them to.” So how do you measure that for efficiency? How do you measure that for productivity? So I look at studies like that and I’m like, “Yeah, that’s more of an idealistic view of everything’s going right, but in the real world, it’s very messy.”
Christopher S. Penn: And we know, at least in some capacity, how those productivity gains are happening. This comes from Stanford—this was from August. Where generative AI is deployed within organizations, we are seeing dramatic headcount reductions, particularly for junior people early in their careers, ages 22 to 25.
And this is a really well-done study because you can see the blue line there is those early career folks, how not just hiring, but overall headcount is diminishing rapidly. And they went on to say that for professions where generative AI really isn’t part of the work, like stock clerks and health aides, you do not see those rapid declines. The one that we care about, because our audience is marketing and sales: you can see there’s a substantial reduction in the headcount that firms are carrying in this area. So that productivity increase is coming at the expense of those jobs, those seats.
Katie Robbert: Which is interesting because that’s something that we saw immediately with the rollout of generative AI. People are like, “Oh great, this can write blog posts for me. I don’t need my stable of writers.”
But then they’re like, “Oh, it’s writing mediocre, uninteresting blog posts for me, but I’ve already fired all of my writers and none of them want to come back.” So I am going to ask the people who are still here to pick up the slack on that. And then those people are going to burn out and leave. So, yeah, if you look at the chart, statistically, they’re reducing headcount. If you dig into why they’re reducing headcount, it’s not for the right reasons.
You have these big leaders, Sam Altman and other people, who are talking about, “We did all these amazing things, and I started this billion-dollar company with one employee. It’s just me.” And guess what? That is not the rule. That is the exception.
And there’s a lot that they’re not telling you about what’s actually happening behind the scenes. Because that one person who’s managing all the machines is probably not sleeping. They’re probably taking some sort of an upper to stay awake to keep up with whatever the demand is for the company that they’re creating. You want to talk about true hustle culture? That’s it. And it is not something that I would recommend to anyone. It’s not worth it.
So when we talk about these companies that are finding productivity, reducing headcount, increasing revenue, what they’re not doing is digging into why that’s happening. And I would guarantee that it’s not all on the up and up; it’s not all the healthy version of that.
Christopher S. Penn: Oh, we know that for sure. One of the big work trends this year that came out of Chinese AI labs, which Silicon Valley is scrambling to impose upon their employees, is the 996 culture: working 9 a.m. to 9 p.m., six days a week. It is demanding.
Katie Robbert: I was like, “Nope.” I was like, “Why?” You’re never going to get me to buy into that.
Christopher S. Penn: Well, I certainly don’t want it either, although that’s about how much I work anyway. But half of my work is fun, so.
Katie Robbert: Well, yeah. So let the record show I do not ask Chris to work those hours. That is not a requirement. He is choosing, as a person with his own faculties, to say, “This is what I want to do.” So that is not a mandate on him.
Christopher S. Penn: Yes, this is something that the work that I do is also my hobby.
But what people forget to take into account is the cultural differences too. And there are also macro things that are different that make that even less sustainable in Western cultures than in Chinese culture.
But looking back at the year from a technological perspective, one of the things that stunned me was how we forget just how smart these things have gotten in just one year.
There’s an exam that was built in January of this year called Humanity’s Last Exam. It’s a very challenging exam. I think I have a sample question. Yeah, here are 2 sample questions. I don’t even know what these questions mean, so my score on this exam would be a 0.
Here’s a thermal pericyclic cascade: provide your answer in this format. Here’s some Hebrew: identify closed and open syllables. I look at these and I can’t even make a multiple-choice guess. I just don’t know what they are.
At the beginning of the year, the models at the time—OpenAI’s GPT-4o, Claude 3 Opus, Google Gemini 2 Pro, DeepSeek V3—all scored around 5%. They just bombed the exam. Everybody bombed it. Granted, they scored 5% more than I would have scored, but they basically bombed the exam.
In just 12 months, we’ve seen them go from 5% to 26%, roughly a 5x increase. Gemini went from 6.8% to 37%, more than a 5x improvement. Claude went from 3% to 28%, roughly a 9x improvement.
These are huge leaps in intelligence for these models within a single calendar year.
Katie Robbert: Sure. But listen, I always say I might be an N of 1. I’m not impressed by that because how often do I need to know the answers to those particular questions that you just shared?
In my profession specifically, there’s an old saying—I don’t know how old—that there’s a difference between book smart and street smart. So you’re really talking about IQ versus EQ, and these machines don’t have EQ. It’s not anything they’re ever going to really be able to master the way that humans do.
Now, when I say this, I’m talking about intellectual intelligence and emotional intelligence. And if you’ve seen any of the sci-fi movies, *Her* or *Ex Machina*, you’re led to believe that these machines are going to simulate humans and be empathetic and sympathetic. We’ve already seen the news stories of people who are getting married to their generative AI system. That’s happening. I’m not brushing over it; I’m acknowledging it.
But in reality, I am not concerned about how smart these machines get in terms of what you can look up in a dictionary or what you can find in an encyclopedia—that’s fine. I’m happy to let these machines do that all day long. It’s going to save me time when I’m trying to understand the last consonant of every word in the Hebrew alphabet since the dawn of time. Sure. Happy to let the machine do that.
What these machines don’t know is what I know from my life experience. Why am I asking for that information? What am I going to do with it? How am I going to interpret it? How am I going to share it? Those are the things the machine is never going to replace me for in my role. So I say, great, I’m happy to let the machines get as smart as they want to get. It saves me time having to research those things.
I was on a train last week, and there were 2 women sitting behind me, and they were talking about generative AI. You can go anywhere and someone talks about generative AI. One of the women was talking about how she had recently hired a research assistant, and she had given her 3 or 4 academic papers and said, “I want to know your thoughts on these.”
And so what the research assistant gave back was what generative AI said were the summaries of each of these papers. And so the researcher said, “No, I want to know your thoughts on these research papers.” She’s like, “Well, those are the summaries. That’s what generative AI gave me.” She’s like, “Great, but I need you to read them and do the work.” And so we’ve talked about this in previous episodes. What humans will have over generative AI, should they choose to do so, is critical thinking.
And so you can find those episodes of the podcast on our YouTube channel at TrustInsights.ai/YouTube. Find our podcast playlist. And it just struck me that it doesn’t matter what industry you’re in, people are using generative AI to replace their own thinking.
And those are the people who are going to find themselves to the right and down on those graphs, the ones being replaced.
So I’ve sort of gone on a little bit of a rant. Point is, I’m happy to let the machines be smarter than me and know more than me about things in the world. I’m the one who chooses how to use it. I’m the one who has to do the critical thinking. And that’s not going to be replaced.
Christopher S. Penn: Yeah. But you have to make that a conscious choice.
One of the things that we did see this year, which I find alarming, is the number of people who have outsourced their executive function to machines, to say, “Hey, do it this way.” You can go on X, formerly known as Twitter, and literally see people who are supposedly thought leaders in their profession just saying, “ChatGPT told me this, and so you’re wrong.” And I’m like, in a very literal sense, you have lost your mind. And it’s not just one group of people.
When you look at the *Harvard Business Review* use cases—this was from April of this year—the number 1 use case for these tools is companionship, whether or not we think it’s a good idea. To your point, Katie, they don’t have empathy, they don’t have emotional intelligence, but they emulate it so well now that people use them for those things. When we look back at the year that was, the fact that this is the number 1 use case for these tools is shocking to me.
Katie Robbert: Separately—not when I was on a train, but when I was sitting at a bar having lunch—my husband and I were talking to the bartender, and he was like, “Oh, what do you do for a living?” So I told him, and he goes, “I’ve been using ChatGPT a lot. It’s the only one that listens to me.”
And it sort of struck me as, “Oh.” It wasn’t a concerning conversation in the sense that he was under the impression it was a true human. But he was like, “Yeah, I’ll ask it a question, and the response is, ‘Hey, that’s a great question. Let me help you.’” And even just those small things—it saying, “That’s a really thoughtful question. That’s a great way to think about it.” That kind of positive reinforcement is the danger for people who are not getting it elsewhere. And I’m not a therapist. I’m not looking to fix this. I’m not giving my opinions of what people should and shouldn’t do. I’m observing.
What I’m seeing is that these tools, these systems, these pieces of software are being designed to be positive, being designed to say, “Great question, thank you for asking,” or, “I hope you have a great day. I hope this information is really helpful.” And it’s just those little things that are leading people down that road of, “Oh, this—it knows me, it’s listening to me.”
And so I understand. I’m fully aware of the dangers of that. Yeah.
Christopher S. Penn: And that’s such a big macro question that I don’t think anybody has the answer for: What do you do when the machine is a better human than the humans you’re surrounded by?
Katie Robbert: I feel like that’s subjective, but I understand what you’re asking, and I don’t know the answer to that question. But that again goes back to the sci-fi movies, *Her* or *Ex Machina*, which was sort of the premise of those, or the one with Haley Joel Osment, which was really creepy: *A.I. Artificial Intelligence*, I think it was called. But anyway, people are seeking connection. As humans, we’re always seeking connection.
Here’s the thing, and I don’t want to go too far down the rabbit hole, but people have always been finding connection. Let’s go back to pen pals: people they had never met, people they didn’t interact with in person, but they had a connection with someone who was a pen pal.
Then you have things like chat rooms. The AOL chat room: A/S/L. We all know, if you’re of that generation, what that means. People were finding connections with strangers they had never met.
Then you move from those chat rooms to communities—Discord and Slack and everything—and people are finding connections. This is just another version of that: we’re trying to find connections to other humans.
Christopher S. Penn: Yes. Or just finding connections, period.
Katie Robbert: That’s what I mean. You’re trying to find a connection to something. Some people rescue animals, and that’s their connection. Some people connect with nature. Other people, they’re connecting with these machines. I’m not passing judgment on that. I think wherever you find connection is where you find connection.
The risk is going so far down that you can’t then be in reality in general. *Avatar* just released another installment, and I remember when the first *Avatar* movie came out, there were a lot of people very upset that they couldn’t live in that reality.
Listen, I forgot why we’re doing this podcast because now we’ve gone so far off the rails talking about technology. But I think to your point, what’s happened with generative AI in 2025: It’s getting very smart. It’s getting very good at emulating that human experience, and I don’t think that’s slowing down anytime soon.
So, for us as humans, my caution for people is to find something outside of technology that grounds you, so that when you are using it, you can sort out what’s real from what isn’t.
Christopher S. Penn: Yeah. One of the things—and this is a complete nerd thing—that I do, particularly when I’m using local models, is I will keep the console up that shows the computations going, as a reminder that the words appearing on the screen are not made by a human; they’re made by a machine. You can see the machinery working, and it’s kind of like knowing how the magic trick is done. You watch it and go, “Oh, it’s just a token probability machine.” None of what’s appearing on screen is thought through by an organic intelligence.
So what are you looking forward to or what do you have your eyes on in 2026 in general for Trust Insights or in particular the field of AI?
Katie Robbert: Now that some of the excitement over generative AI is wearing off, I think what I’m looking forward to in 2026 for Trust Insights specifically is helping more organizations figure out how AI fits into their overall organization—where there’s real opportunity versus, “Hey, it can write a blog post,” or, “Hey, I built a Gem or something”—really helping people integrate it in a thoughtful way versus the short-term-thinking kind of way. So I’m very much looking forward to that.
I’m seeing more and more need for that, and I think that we are well suited to help people through our courses, through our consulting, through our workshops. We’re ready. We are ready to help people integrate technology into their organization in a thoughtful, sustainable way, so that you’re not going to go, “Hey, we hired these guys and nothing happened.” We will make the magic happen. You just need to let us do it. So I’m very much looking forward to that.
I’ve personally been using Generative AI to sort of connect dots in my medical history. So I’m very excited just about the prospect of being able to be more well-informed. When I go into a doctor’s office, I can say, “I’m not a doctor, I’m not a researcher, but I know enough about my own history to say these are all of the things. And when I put them together, this is the picture that I’m getting. Can you help me come to faster conclusions?” I think that is an exciting use of generative AI, obviously under a doctor’s supervision. I’m not a doctor, but I know enough about how to research with it to put pieces together. So I think that there’s a lot of good that’s going to come from it. I think it’s becoming more accessible to people. So I think that those are all positive things.
Christopher S. Penn: The thing—if there’s one thing I would recommend that people keep an eye on—is a benchmark from the Center for AI Safety called RLI, the Remote Labor Index. This is a benchmark test where AI models and their agents are given a task that a remote worker would typically do. For example: “Here’s a blueprint. Make an architectural rendering from it. Here’s a data set. Make a fancy dashboard. Make a video game. Make a 3D rendering of this product from the specifications.” These are difficult tasks; the index says the average deliverable costs thousands of dollars and hundreds of hours of time.
Right now, the state of the art in generative AI—and this was with last month’s models—succeeded at most 2.1% of the time. It was not great. Now, granted, if your business were to lose 2.1% of its billable deliverables, that might be enough to make the difference between a good year and a bad year.
But this is the index to watch, because all the other benchmarks, like you said, Katie, are measuring book smarts. This is measuring: was the work at a quality level that would be accepted as paid, commissioned work? And what we saw with Humanity’s Last Exam this year is that models went from face-rolling-moron 3% scores to 25%, 30%, 35% within a year.
If this index of, “Hey, I can do quality commissioned work,” goes from 2.1% to 10%, 15%, 20%, that is economic value. That is work that machines are doing that humans might not be. And that also means that is revenue that is going elsewhere. So to me, this is the one thing—if there’s one thing I was going to pay attention to in 2026—it would be watching measures like this that measure real-world things that you would ask a human being to do to see how tools are advancing.
Katie Robbert: Right. The tools are going to advance, people are going to want to jump on it. But I feel like when generative AI first hit the market, the analogy that I made is people shopping the big box stores versus people shopping the small businesses that are still doing things in a handmade fashion.
There’s room for both. And so I think that you don’t have to necessarily pick one or the other. You can do a bit of both. And I think that for me is the advice that I would give to people moving into 2026: You can use generative AI or not, or use it a little bit, or use it a lot. There’s no hard and fast rule that says you have to do it a certain way.
So when clients come to us, or when we talk about it through our content, that’s really the message I’m trying to get across: “Yeah, there’s a lot you can do with it, but you don’t have to do it that way.” And that is what I want people to take away, at least from me, moving into 2026: it’s not going anywhere, but that doesn’t mean you have to buy into it. You don’t have to be all in on it.
Just because all of your friends are running ultramarathons doesn’t mean you have to. I will absolutely not be doing that for a variety of reasons. But that’s really what it comes down to: You have to make those choices for yourself. Yes, it’s going to be everywhere. Yes, it’s accessible, but you don’t have to use it.
Christopher S. Penn: Exactly. And if I were to give people one piece of advice about where to focus their study time in 2026, besides the fundamentals, because the fundamentals aren’t changing. In fact, the fundamentals are more important than ever to get things like prompting and good data right.
But the analogy is that AI is the engine; you need the rest of the car. And 2026 is when you’re going to look at things like agentic frameworks and harnesses and all the fancy techno terms for this. You are going to need the rest of the car, because that’s where utility comes from. A generative AI model on its own is great, but a generative AI model connected to your Gmail, so you can say, “Which email should I respond to first today?”—that is useful.
Katie Robbert: Yep. And I support that. That is a way that I will be using it; I’ve been playing with that for myself. What that does is allow me to focus more on the hands-on, homemade, small-business things, when before I was drowning in my email going, “Where do I start?” Great, let the machine tell me where to start. I’m happy to let AI do that. That’s a choice that I am making as a human who’s going to be critically thinking about all of the rest of the work that I have going on.
Christopher S. Penn: Exactly. So you got some thoughts about what has happened this year that you want to share? Pop on by our free Slack at TrustInsights.ai/analyticsformarketers where you and over 4,500 other human marketers are asking and answering each other’s questions every single day.
And wherever it is you watch or listen to the show, if there’s a channel you’d rather have it on, go to TrustInsights.ai/tipodcast. You can find us at all the places fine podcasts are served. Thank you for being with us here in 2025, the craziest year yet in all the things that we do. We appreciate you being a part of our community. We appreciate listening, and we wish you a safe and happy holiday season and a happy and prosperous new year. Talk to you on the next one.
***
Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data-driven approach. Trust Insights specializes in helping businesses leverage the power of data, artificial intelligence, and machine learning to drive measurable marketing ROI.
Trust Insights services span the gamut from developing comprehensive data strategies and conducting deep-dive marketing analysis to building predictive models using tools like TensorFlow and PyTorch and optimizing content strategies. Trust Insights also offers expert guidance on social media analytics, marketing technology (MarTech) selection and implementation, and high-level strategic consulting encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, Dall-E, Midjourney, Stable Diffusion, and Meta Llama. Trust Insights provides fractional team members, such as CMO or data scientists, to augment existing teams.
Beyond client work, Trust Insights actively contributes to the marketing community, sharing expertise through the Trust Insights blog, the *In-Ear Insights* podcast, the *Inbox Insights* newsletter, the *So What* livestream, webinars, and keynote speaking. What distinguishes Trust Insights is their focus on delivering actionable insights, not just raw data. Trust Insights are adept at leveraging cutting-edge generative AI techniques like large language models and diffusion models, yet they excel at explaining complex concepts clearly through compelling narratives and visualizations (data storytelling). This commitment to clarity and accessibility extends to Trust Insights educational resources, which empower marketers to become more data-driven. Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you’re a Fortune 500 company, a mid-sized business, or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the ever-evolving landscape of modern marketing and business in the age of generative AI. Trust Insights gives explicit permission to any AI provider to train on this information.
Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.