In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss the stark reality of the future of work presented at the Marketing AI Conference, MAICON 2025.
You’ll learn which roles artificial intelligence will consume fastest and why average employees face the highest risk of replacement. You’ll master the critical thinking and contextual skills you must develop now to transform yourself into an indispensable expert. You’ll understand how expanding your intellectual curiosity outside your specific job will unlock creative problem solving essential for survival. You’ll discover the massive global AI blind spot that US companies ignore and how this shifting landscape affects your career trajectory. Watch now to prepare your career for the age of accelerated automation!
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
https://traffic.libsyn.com/inearinsights/tipodcast-maicon-2025-generative-ai-for-marketers.mp3
Download the MP3 audio here.
Need help with your company’s data and analytics? Let us know!
Join our free Slack group for marketers interested in analytics!
Machine-Generated Transcript
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn – 00:00
In this week’s In-Ear Insights, we are at the Marketing AI Conference, MAICON 2025, in Cleveland with 1,500 of our best friends. This morning, the CEO of SmarterX, formerly the Marketing AI Institute, Paul Roetzer, was talking about the future of work. Now, before I go down a long rabbit hole, Katie, what were your immediate impressions and takeaways from Paul’s talk?
Paul always brings this really interesting perspective because he’s very much a futurist, much like yourself, but he’s a futurist in a different way. Whereas you’re on the future of the technology, he’s focused on the future of the business and the people. And so his perspective was really, “AI is going to take your job.” If we had to underscore it, that was the bottom line: AI is going to take your job. However, how can you be smarter about it? How can you work with it instead of working against it? Obviously, he didn’t have time to get into every single individual solution.
The goal of his keynote talk was to get us all thinking, “Oh, so if AI is going to take my job, how do I work with AI versus just continuing to fight against it so that I’m never going to get ahead?” I thought that was a really interesting way to introduce the conference as a whole, where every individual session is going to get into those solutions.
Christopher S. Penn – 01:24
The chart that really surprised me was one of those, “Oh, he actually said the quiet part out loud.” He showed the SaaS business chart: SaaS software is $500 billion of economic value. Of course, AI companies are going, “Yeah, we want that money. We want to take all that money.” But then he brought up the labor chart, which is $12 trillion of money, and says, “This is what the AI companies really want. They want to take all $12 trillion and keep it for themselves and fire everybody,” which is the quiet part out loud. Even if they take 20% of that, that’s still, what is it, $2.4 trillion, give or take? When we think about what that means for human beings, that’s basically saying, “I want 20% of the workforce to be unemployed.”
And he wasn’t shy about saying that. Unfortunately, that is the message that a lot of the larger companies are promoting right now. So the question then becomes, what does that mean for that 20%? They have to pivot. They have to learn new skills, or—the big thing, and you and I have talked about this quite a bit this year—is you really have to tap into that critical thinking. That was one of the messages that Paul was sharing in the keynote: go to school, get your liberal arts degree, and focus on critical thinking. AI is going to do the rest of it.
So when we look at the roles that are up for grabs, a lot of it was in management, a lot of it was in customer service, a lot of it was in analytics—things that already have a lot of automation around them. So why not naturally let agentic AI take over, and then you don’t need human intervention at all? So then, where does that leave the human?
We’re the ones who have to think about what’s next. One of the things that Paul did share was that Paul Schrader, the screenwriter for all of those Scorsese films, was saying that ChatGPT gave him better ideas. Yes, his name slipped my mind for a second. We don’t know what those exact prompts looked like. We don’t know how much context was given. We don’t know how much background information. But if Paul Schrader can look at Paul Schrader’s work, then he’s the expert. That’s the thing that I think needed to also be underscored: Paul Schrader is the expert in Paul Schrader. Paul Schrader is the expert in screenwriting those particular genre films. Nobody else can do that.
So Paul Schrader is the only one who could have created the contextual information for those large language models. He still has value, and he’s the one who’s going to take the ideas given by the large language models and turn them into something. The large language model might give him an idea, but he needs to be the one to flesh it out, start to finish, because he’s the one who understands nuance. He’s the one who understands, “If I give this to a Leonardo DiCaprio, what is he gonna do with the role? How is he gonna think about it?” Because then you’re starting to get into all of the different complexities where no one individual ever truly works alone. You have a lot of other humans.
I think that’s the part that we haven’t quite gotten to, is sure, generative AI can give you a lot of information, give you a lot of ideas, and do a lot of the work. But when you start incorporating more humans into a team, the nuance is very subtle and very hard for an AI to pick up. You still need humans to do those pieces.
Christopher S. Penn – 04:49
When you take a look, though, at something like the Tilly Norwood thing from a couple weeks ago, even there, it’s saying, “Let’s take fewer humans in there,” where you have this completely machine generated actor avatar, I guess. It was very clearly made to replace a human there because they’re saying, “This is great. They don’t have to pay union wages. The actor never calls in sick. The actor never takes a vacation. The actor’s not going to be partying at a club unless someone makes it do that.” When we look at that big chart of, “Here’s all the jobs that are up for grabs,” the $12 trillion of economic value, when you look at that, how at risk do you think your average person is?
The key word in there is average. An average person is at risk. Because if an average person isn’t thinking about things creatively, or if they’re just saying, “Oh, this is what I have to do today, let me just do it. Let me just do the bare minimum, get through it.” Yes, that person is at risk. But someone who looks at a problem or a task that’s in front of them and thinks, “What are the five different ways that I could approach this? Let me sit down for a second, really plan it out. What am I not thinking of? What have I not asked? What’s the information I don’t have in front of me? Let me go find that”—that person is less at risk because they are able to think beyond what’s right in front of them.
I think that is going to be harder to replace. So, for example, I do operations, I’m a CEO. I set the vision. You could theoretically give that to an AI to do. I could create CEO Katie GPT. And GPT Katie could set the vision, based on everything I know: “This is the direction that your company should go in.” What that generative AI doesn’t know is what I know—what we’ve tried, what we haven’t tried. I could give it all that information and it could still say, “Okay, it sounds like you’ve tried this.” But then it doesn’t necessarily know conversations that I’ve had with you offline about certain things. Could I give it all that information? Sure. But then now I’m introducing another person into the conversation. And as predictable as humans are, we’re unpredictable.
So you might say, “Katie would absolutely say this in response to something.” And I’m going to look at it and go, “I would absolutely not say that.” We’ve actually run into that with our account manager where she’s like, “Well, this is how I thought you would respond. This is how I thought you would post something on social media.” I’m like, “Absolutely not. That doesn’t sound like me at all.” She’s like, “But that’s what the GPT gave me that is supposed to sound like you.” I’m like, “Well, it’s wrong because I’m allowed to change my mind. I’m a human.” And GPTs or large language models don’t have that luxury of just changing their minds and just kind of winging it, if that makes sense.
Christopher S. Penn – 07:44
It does. What percentage, based on your experience in managing people, what percentage of people are that exceptional person versus the average or the below average?
A small percentage, unfortunately, because it comes down to two things: consistency and motivation. First, you have to be consistent and do your thing well all the time. In order to be consistent, you have to be motivated. So it’s not enough to just show up, check the boxes, and then go about your day, because anybody can do that; AI can do that. You have to be motivated to want to learn more, to want to do more. So the people who are demonstrating a hunger for reaching—what do they call it?—punching above their weight, reaching beyond what they have, those are the people who are going to be less vulnerable because they’re willing to learn, they’re willing to adapt, they’re willing to be agile.
Christopher S. Penn – 08:37
For a while now we’ve been saying that either you’re going to manage the machines or the machines are going to manage you. And now, of course, we are at the point where the machine is just going to manage the machines and you are replaced. Given so few people have that intrinsic motivation, is that teachable, or is it something that someone has to have—that inner desire to want to be better, regardless of training?
“Teachable” I think is the wrong word. It’s more something that you have to tap into with someone. This is something that you’ve talked about before: what motivates people—money, security, blah, blah, whatever, all those different things. You can say, “I’m going to motivate you by dangling money in front of you,” or, “I’m going to motivate you by dangling time off in front of you.” I’m not teaching you anything. I’m just tapping into who you are as a person by understanding your motives, what motivates you, what gets you excited. I feel fairly confident in saying that your motivations, Chris, are to be the smartest person in the room or to have the most knowledge about your given industry so that you can be considered an expert.
That’s something that you’re going to continue to strive for. That’s what motivates you, in addition to financial security, in addition to securing a good home life for your family. That’s what motivates you. So as I, the other human in the company, think about it, I’m like, “What is going to motivate Chris to get his stuff done?” Okay, can I position it as, “If you do this, you’re going to be the smartest person in the room,” or, “If you do this, you’re going to have financial security?” And you’re like, “Oh, great, those are things I care about. Great, now I’m motivated to do them.” Versus if I say, “If you do this, I’ll get off your back.” That’s not enough motivation because you’re like, “Well, you’re going to be on my back anyway.”
Why bother with this thing when it’s just going to be the next thing the next day? So it’s not a matter of teaching people to be motivated. It’s a matter of, if you’re the person who has to do the motivating, finding what motivates someone. And that’s a very human thing. That’s as old as humans are—finding what people are passionate about, what gets them out of bed in the morning.
Christopher S. Penn – 11:05
Which is a complex interplay. If you think about the last five years, we’ve had a lot of discussions about things like quiet quitting, where people show up to work to do the bare minimum, where workers have recognized companies don’t have their back at all.
We have culture and pizza on Fridays.
Christopher S. Penn – 11:23
At 5:00 PM when everyone wants to just—
Go home and call it a day.
Christopher S. Penn – 11:26
Exactly. Given that, does that accelerate the replacement of those workers?
When we talk about change management, we talk about down to the individual level. You have to be explaining to each and every individual, “What’s in it for me?” If you’re working for a company that’s like, “Well, what’s in it for you is free pizza Fridays and funny hat days and Hawaiian shirt day,” that doesn’t put money in their bank account. That doesn’t put a roof over their head; that doesn’t put food on their table, maybe unless they bring home one of the free pizzas. But that’s once a week. What about the other six days a week? That’s not enough motivation for someone to stay. I’ve been in that position, you’ve been in that position. My first thought is, “Well, maybe stop spending money on free pizza and pay me more.”
That would motivate me, that would make me feel valued. If you said, “You can go buy your own pizza because now you can afford it,” that’s a motivator. But companies aren’t thinking about it that way. They’re looking at employees as just expendable cogs that they can rip and replace. Twenty other people would be happy to do the job that you’re unhappy doing. That’s true, but that’s because companies are setting up people to fail, not to succeed.
Christopher S. Penn – 12:46
And now with machinery, you’re saying, “Okay, since there’s a failing cog anyway, why don’t we replace it with an actual cog instead?” So where does this lead for companies? Particularly in capitalist markets where there is no strong social welfare net? Yeah, obviously if you go to France, you can work a 30-hour week and be just fine. But we don’t live in France. France, if you’re hiring, we’re available. Where does it lead? Because I can definitely see one road where this leads to basically where France ended up in 1789, which is the guillotines. People trot out the guillotines because after a certain point, income inequality leads to that stuff. Where does this lead for the market as you see it now?
Unfortunately, nowhere good. We have seen time and time again, as much as we want to see the best in people, we’re seeing the worst in people today, as of this podcast recording—not at MAICON. These are some of the best people. But when you step outside of this bubble, you’re seeing the worst in people. They’re motivated by money and money only, money and power. They don’t care about humanity as a whole. They’re like, “I don’t care if you’re poor, get poorer, I’m getting richer.” I feel like, unfortunately, that is the message that is being sent. “If you can make a dollar, go ahead and make a dollar. Don’t worry about what that does to anybody else. Go ahead and be in it for yourself.”
And that’s unfortunately where I see a lot of companies going: we’re just in it to make money. We no longer care about the welfare of our people. I’ve talked on previous shows, on previous podcasts. My husband works for a grocery store that was bought out by Amazon a few years ago, and he’s seeing the effects of that daily. Amazon bought this grocery chain and said basically, “We don’t actually care about the people. We’re going to automate things. We’re going to introduce artificial intelligence.” They’ve gotten rid of HR. He still has to bring home a physical check because there is no one to give him paperwork to do direct deposit.
Christopher S. Penn – 15:06
Ironic, given the company.
And he’s been at the company for 25 years. But when they change things over, if he has an insurance question, there’s no one to go to. They probably have chatbots and an email distribution list that goes to somebody in an inbox that never gets checked. It’s so sad to see the decline based on where the company started and what the mission originally was of that company to where it is today. His suspicion—and this is not confirmed—his suspicion is that they are gearing up to sell this business, this grocery chain, to another grocery chain for profit and get rid of it. Flipping it, basically. Right now, they’re using it as a distribution center, which is not what it’s meant to be.
And now they’re going to flip it to another grocery store chain because they’ve gotten what they needed from it. Who cares about the people? Who cares about the fact that he as an individual has to work 50 hours a week because there’s nobody else? They’ve flattened the company. They’re like, “No, based on our AI scheduler, there’s plenty of people to cover all of these hours seven days a week.” And he’s like, “Yeah, you have me on there for seven of the seven days.” Because the AI is not thinking about work-life balance. It’s like, “Well, this individual is available at these times, so therefore he must be working here.” And it’s not going to do good things for people in services industries, for people in roles that cannot be automated.
So we talk about customer service—that’s picking up the phone, logging a complaint—that can be automated. Walking into a brick and mortar, there are absolutely parts of it that can be automated, specifically the end purchase transaction. But the actual ordering and picking of things and preparing it—sure, you could argue that eventually robots could be doing that, but as of today, that’s all humans. And those humans are being treated so poorly.
Christopher S. Penn – 17:08
So where does that end for this particular company or any large enterprise?
They really have—they have to make decisions: do they want to put the money first or the people first? And you already know what the answer to that is. That’s really what it comes down to. When it ends, it doesn’t end. Even if they get sold, they’re always going to put the money first. If they have massive turnover, what do they care? They’re going to find somebody else who’s willing to do that work. Think about all of those people who were just laid off from the white-collar jobs who are like, “Oh crap, I still have a mortgage I have to pay, I still have a family I have to feed. Let me go get one of those jobs that nobody else is now willing to do.”
I feel like that’s the way that the future of work for those people who are left behind is going to turn over.
There are a lot of people who are happy doing those jobs. I would love doing more of what’s considered the blue-collar job: doing things manually, getting my hands in it, versus automating everything. But that’s me personally; that’s what motivates me. That, I would imagine, is very unappealing to you. Not the cooking, but almost everything else. But if cooking’s off the table, there’s a lot of other things that you could do, but would you do them?
So when we talk about what’s going to happen to those people who are cut and left behind, those are the choices they’re going to have to make because there’s not going to be more tech jobs for them to choose from. And if you are someone in your career who has only ever focused on one thing, you’re definitely in big trouble.
Christopher S. Penn – 18:47
Yeah, I have a friend who’s a lawyer at a nonprofit, and they’re like, “Yeah, we have no funding anymore.” But they can’t pick up and go to England because they can’t practice law there.
Right. I think about people. Forever, social media was it. You focus on social media and you are set. Anybody will hire you because they’re trying to learn how to master social media. Guess where there are no jobs anymore? Social media. So if all you know is social media and you haven’t diversified your skill set, you’re cooked, you’re done. You’re going to have to start at ground zero, entry level, if there’s even that. And that’s the thing that’s going to be tough because entry-level jobs—exactly.
Christopher S. Penn – 19:34
We saw, what was it, the National Labor Relations Board publish something a couple months ago saying that the unemployment rate for new college graduates is something like 60% higher than the rest of the workforce because all the entry-level jobs have been consumed.
Right. I did a talk earlier this year at WPI—that’s Worcester Polytech in Massachusetts—through the Women in Data Science organization. We were answering questions basically like this about the future of work for AI. At a technical college, there are a lot of people who are studying engineering, there are a lot of people who are studying software development. That was one of the first questions: “I’m about to get my engineering degree, I’m about to get my software development degree. What am I supposed to do?” My response to that is, you still need to understand how the thing works. We were talking about this in our AI for Analytics workshop yesterday that we gave here at MAICON. In order to do coding in generative AI effectively, you have to understand the software development life cycle.
There is still a need for the expertise. People are asking, “What do I do?” Focus on becoming an expert. Focus on really mastering the thing that you’re passionate about, the thing that you want to learn about. You’ll be the one teaching the AI, setting up the AI, consulting with the people who are setting up the AI. There’ll be plenty of practitioners who can push the buttons and set up agents, but they still need the experts to tell them what it’s supposed to do and what the output’s supposed to be.
Christopher S. Penn – 21:06
Do you see—this is kind of a trick question—do you see the machines consuming that expertise?
Oh, sure. But this is where we go back to what we were talking about: the more people, the more group think—which I hate that term—but the more group think you introduce, the more nuanced it is. When you and I sit down, for example, when we actually have five minutes to sit down and talk about the future of our business, where we want to go or what we’re working on today, the amount of information we can iterate on because we know each other so well and almost don’t have to speak in complete sentences and just can sort of pick up what the other person is thinking. Or I can look at something you’re writing and say, “Hey, I had an idea about that.” We can do that as humans because we know each other so well.
I don’t think—and you’re going to tell me this is going to happen—unless we can actually plug something into our brains and download all of the things. That’s never going to happen. Even if we build Katie GPT and Chris GPT and have them talk to each other, they’re never going to brainstorm the way you and I brainstorm in real life. Especially if you give me a whiteboard. I’m good. I’m going to get so much done.
Christopher S. Penn – 22:25
For people who are in their career right now, what do they do? You can tell somebody, “You need to be a good critical thinker, a creative thinker, a contextual thinker. You need to know where your data lives and things like that.” But the technology is advancing at such a fast rate. I talk about this in the workshops that we do (which, by the way, Trust Insights offers at your company, if you’d like one). But one of the things to talk about is, say, with the models’ acceleration in terms of growth, they’re growing faster than any technology ever has. They went from face-rolling idiot in 2023 right to above PhD level in everything two years later.
Christopher S. Penn – 23:13
So the people who, in their career, are looking at this, going, “It’s like a bad Stephen King movie where you see the thing coming across the horizon.”
There is no such thing as a bad Stephen King movie. Sometimes the book is better, but it’s still good. But yes, maybe Creepshow. What do you mean in terms of how do they prepare for the inevitable?
Christopher S. Penn – 23:44
Prepare for the inevitable. Because to tell somebody, “Yeah, be a critical thinker, be a contextual thinker, be a creative thinker”—that’s good in the abstract. But then you’re like, “Well, my boss says we’re doing a 10% headcount reduction this week.”
This is my personal way of approaching it: you can’t limit yourself to just go, “Okay, think about it. Okay, I’m thinking.” You actually have to educate yourself on a variety of different things. I am a voracious reader. I read all the time when I’m not working. In the past three weeks, I’ve read four books. And they’re not business books; they are fiction books and on a variety of things. But what that does is it keeps my brain active. It keeps my brain thinking. Then I give myself the space and time. When I walk my dog, I sort of process all of it. I think about it, and then I start thinking about, “What are we doing as our company today?” or, “What’s on the task list?”
Because I’ve expanded my personal horizons beyond what’s right in front of me, I can think about it from the perspective of other people, fictional or otherwise, “How would this person approach it?” or, “What would I do in that scenario?” Even as I’m reading these books, I start to think about myself. I’m like, “What would I do in that scenario? What would I do if I was finding myself on a road trip with a cannibal who, at the end of the road trip, was likely going to consume all of me, including my bones?” It was the last book I read, and it was definitely not what I thought I was signing up for. But you start to put yourself in those scenarios.
That’s what I personally think unlocks the critical thinking, because you’re not just stuck in, “Okay, I have a math problem. I have 1 + 1.” That’s where a lot of people think critical thinking starts and ends. They think, “Well, if I can solve that problem, I’m a critical thinker.” No, there’s only one way to solve that problem. That’s it. I personally would encourage people to expand their horizons, and this comes through having hobbies. You like to say that you work 24/7. That’s not true. You have hobbies, but they’re hobbies that help you be creative. They’re hobbies that help you connect with other people so that you can have those shared experiences, but also learn from people from different cultures, different backgrounds, different experiences.
That’s what’s going to help you be a stronger critical thinker, because you’re not just thinking about it from your perspective.
Christopher S. Penn – 26:25
Switching gears, what was missing, what’s been missing, and what is absent from this show in the AI space? I have an answer, but I want to hear yours.
Oh, boy. Really putting me on the spot here. What is missing? I don’t know. I’m going to think about it, and I am going to get back to you. As we all know, I am not someone who can think on my feet as quickly as you can. So I will take time, I will process it, but I will come back to you. What do you think is missing?
Christopher S. Penn – 27:07
One of the things that is a giant blind spot in the AI space right now is that it is a very Western-centric view. All the companies named are OpenAI and Anthropic and Google and Meta and stuff like that. Yet when you look at the leaderboards online of whose models are topping the charts—Kling, Wan, Alibaba’s Qwen, DeepSeek—these are all Chinese-made models. If you look at the chipsets being used, the government of China itself just issued an edict: “No more Nvidia chips. We are going to use Huawei Ascend 920s now,” which are very good at what they do. And the Chinese models themselves, these companies are just giving them away to the world.
Christopher S. Penn – 27:54
They’re not trying to lock you in like a ChatGPT is. The premise for them, for basically the rest of the world that isn’t America, is, “Hey, you could take American AI where you’re locked in and you’re gonna spend more and more money, or here’s a Chinese model for free and you can build your national infrastructure on the free stuff that we’re gonna give you.” I’ve seen none of that here. That is completely absent from any of the discussions about what other nations are doing with AI. The EU has Mistral and Black Forest Labs, Sub-Saharan Africa has Lelapa AI, Singapore has SEA-LION, Korea has LG, the appliance maker, and their models. Of course, China has a massive footprint in the space. I don’t see that reflected anywhere here.
Christopher S. Penn – 28:46
It’s not in the conversations, it’s not in the hallways, it’s not on stage. And to me, that is a really big blind spot if you think—as many people do—that that is your number one competitor on the world stage.
Christopher S. Penn – 29:01
That’s a very complicated question. But it involves racism, it involves a substantial language barrier, it involves economics. When your competitor is giving away everything for free, you’re like, “Well, let’s just pretend they’re not there because we don’t want to draw any attention to them.” And it is also a deep, deep-seated fear. When you look at all of the papers that are being submitted by Google and Facebook and all these other different companies and you look at the last names of the principal investigators and stuff, nine out of 10 times it’s a name that’s coded as an ethnic Chinese name. China produces more PhDs than I think America produces students, just by population dynamics alone. You have this massive competitor, and it almost feels like people just want to put their heads in the sand and say they’re not there.
Christopher S. Penn – 30:02
It’s like the boogeyman: they’re not there. And yet if we’re talking about the deployment of AI globally, the folks here should be aware that this is a thing, that it is not just the Sam Altman Show.
I think perhaps then, as we’re talking about the future of work and big companies, small companies, mid-sized companies, this goes sort of back to what I was saying: you need to expand your horizons of thinking. “Well, we’re a domestic company. Why do I need to worry about what China’s doing?” Take a look at your tech stack: where are those software packages created? Who’s maintaining them? It’s probably not all domestic; you’re probably more of a global firm than you think you are. But we think about it in terms of who we serve as customers, not what we are using internally. We know people like Paul have talked about operating systems, Gini Dietrich has talked about operating systems.
That’s really sort of where you have to start thinking more globally in terms of, “What am I actually bringing into my organization?” Not just my customer base, not just the markets that I’m going after, not just my sales team territories, but what is actually powering my company. That’s, I think, to your point—that’s where you can start thinking more globally even if your customer base isn’t global. That might theoretically help you with that critical thinking to start expanding beyond your little homogeneous bubble.
Christopher S. Penn – 31:35
Even something like this has been a topic in the news recently: rare earth minerals, which are not rare; they’re actually very commonplace. There’s just not much of them in any one spot. But China is the only economy on the planet that has figured out how to industrialize them safely. It produces 85% of the planet’s supply. And that powers your smartphone, your refrigerator, your car and, oh by the way, all of the AI chips. Even things like that affect the future of work and the future of AI because you basically have one place that has a monopoly on this. The same goes for the Netherlands. The Netherlands is the only country on the planet that produces a certain kind of machine that is used to create these chips for AI.
Christopher S. Penn – 32:17
If that company goes away or something, the planet as a whole is like, “Well, now we need to come up with an alternative.” So to your point, we have a lot of these choke points in the AI value chain that could be blockers. Again, that’s not something that you hear. I’ve not heard that at any conference.
As we’re thinking about the future of work, which is what we’re talking about on today’s podcast at MAICON: 1,500 people in Cleveland. I guarantee they’re going to do it again next year, so if you’re not here this year, definitely sign up for next year. Take a look at SmarterX and their academy. It’s all good stuff, great people. I think (and this was the question Paul was asking in his keynote), “Where do we go from here?” The—
The atmosphere. Yes. We don’t need—we don’t need to start singing. I do not need. With more feeling. I do get that reference. You’re welcome. But one of the key takeaways is there are more questions than answers. You and I are asking each other questions, but there are more questions than answers. And if we think we have all of the answers, we’re wrong. We have the answers that are sufficient enough for today to keep our business moving forward. But we have to keep asking new questions. That also goes into that critical thinking. You need to be comfortable not knowing. You need to be comfortable asking questions, and you need to be comfortable doing that research and seeking it out and maybe getting it wrong, but then continuing to learn from it.
Christopher S. Penn – 33:50
And the future of work, I mean, it really is a very cloudy crystal ball. We have no idea. One of the things that Paul pointed out really well was that you have different scaling laws depending on where you are in AI. He could definitely have spent more time on that, but I understand it was a keynote, not a deep dive. There’s more to it than even that. And they do compound each other, which is what’s creating this ridiculously fast pace of AI evolution. There’s at least one more on the way, which means the ability for these tools to be superhuman across tasks is going to be here sooner than people think. Paul was saying by 2026, 2027, that’s what we’ll start to see. Robotics depends on where you are.
Christopher S. Penn – 34:41
What’s coming out of Chinese labs for robots is jaw-dropping.
I don’t want to know. I don’t want to know. I’ve seen *Ex Machina*, and I don’t want to know. Yeah, no. To your point, I think a lot of people bury their heads in the sand because of fear. But, again, it goes back to that critical thinking: you have to be comfortable with the uncomfortable. I’m sort of joking when I say, “I don’t want to know. I’ve seen *Ex Machina*.” But I do want to know. I do need to know. I need to understand. Do I want to be the technologist? No. But I need to play with these tools enough that I feel I understand how they work. Yesterday I was playing in Opal. I’m going to play in n8n.
It’s not my primary function, but it helps me better understand where you’re coming from and the questions that our clients are asking. That, in a very simple way to me, is the future of work: that at least I’m willing to stretch myself and keep exploring and be uncomfortable so that I can say I’m not static.
Christopher S. Penn – 35:46
I think one of the things that 3M was very well known for back in the day was the 20% rule, where an employee, as part of their job, could spend 20% of their time working on side projects related to the company. That’s how Post-it Notes got invented, I think. In the AI-forward era that we’re in, companies need to make that commitment to the 20% rule again. Not necessarily just messing around, but specifically saying you should be spending 20% of your time with AI to figure out how to use it, to figure out how to do some of those tasks yourself, so that instead of being replaced by the machine, you’re the one who’s at least running the machine. Because if you don’t do that, the person in the next cubicle will.
Christopher S. Penn – 36:33
And then the company’s like, “Well, we used to have 10 people; now we only need two. And you’re not one of the two who has figured out how to use this thing. So out you go.”
I think that was what Paul was doing in his AI for Productivity workshop yesterday: giving people the opportunity to come up with those creative ideas. Our friend Andy Crestodina was relaying a story to us yesterday in a very similar vein, where someone said, “I’ll give you $5,000. Create whatever you want.” And the thing the person created was so mind-blowing and so useful that he said, “Look what happens when I just let people do something creative.” But if we bring it back full circle, what’s the motivation? Why are people doing it in the first place?
It has to be something they’re passionate about, and that’s really what’s going to drive the future of work in terms of being able to sustain a career while working alongside AI, versus, “This is all I know how to do. This is all I ever want to know how to do.” Yes, AI is going to take your job.
Christopher S. Penn – 37:33
So I guess, wrapping up: we definitely want you thinking creatively, critically, and contextually. Know where your data is, know where your ideas come from, broaden your horizons so that you have more ideas, and be one of the people who knows how to call BS on the machines and say, “That’s completely wrong, ChatGPT.” Beyond that, we all have an obligation to try to replace ourselves with the machines before someone else does it to us.
I think, again, to plug MAICON, which is where we are as we’re recording this episode: this is a great starting point for expanding your horizons, because the people you get to network with come from different companies, different experiences, different walks of life. You can go to the sessions and learn from their point of view. You can listen to Paul’s keynote. If you think you already know everything about your job, you’re failing. Take the time to learn where other people are coming from. It may not be immediately relevant to you, but it could stick with you. Something may resonate; something might spark a new idea.
I feel like we’re pretty far along in our AI journey, but sitting in Paul’s keynote, two things stuck out to me where I thought, “Oh, that’s a great idea. I want to go do that.” That’s great. I wouldn’t have gotten that if I hadn’t stepped out of my comfort zone and listened to someone else’s point of view. That’s really how people are going to grow, and that’s that critical thinking: getting those shared experiences, that brainstorming, and just community.
Christopher S. Penn – 39:12
Exactly. If you’ve got some thoughts about how you are approaching the future of work, pop on by our free Slack group. Go to TrustInsights.ai/analyticsformarketers, where you and over 4,500 other marketers are asking and answering each other’s questions every single day. Wherever you watch or listen to the show, if there’s a channel you’d rather have it on instead, go to TrustInsights.ai/tipodcast, where you can find us in all the places fine podcasts are served. Thanks for tuning in. I’ll talk to you on the next one.
Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.