


In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss how to measure AI proficiency impact beyond speed.
00:00 – Introduction
Watch the full episode to level up your AI leadership.
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
Download the MP3 audio here.
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn: In this week’s In Ear Insights, let’s talk about AI and the things we measure in business to gauge AI’s productivity and the benefits you’re getting out of it.
One of my favorite apps, Katie, is called Blind. This is an anonymous confessions app for the business world where people who work at companies—mostly in big business and big tech—share anonymous confessions. They have to say what company they’re with, but that’s it. There were three posts that really caught my eye over the weekend.
The first was from a person who works at Capital One bank who said, “Hi, I’m a junior software engineer. Three years into my career, my co‑workers are pumping out so many pull requests with Claude Code and blitzing through jobs that used to take three to five days in less than an hour. I feel like every day at the office is a race to see who can generate and complete more pull requests than anyone else.”
The second one was from JP Morgan Chase saying, “I just downloaded Claude Code and wtf. I don’t know what to think. Either we are cooked or saved.”
The third was from an engineer at Tesla who said, “I joined recently as a contractor and don’t have access to Claude. I’m slower than the others on my team and it stresses me out.”
So my question to you is this, Katie: Obviously people are using generative AI to move very fast. However, I don’t know if fast is the metric that we should be looking at here, particularly since a lot of people who manage coders don’t necessarily manage them well. They don’t.
For example, very famously, Elon Musk, when he took over Twitter, fired people who didn’t write enough code. He measured people’s productivity solely on lines of code written. Anyone who’s actually written code for a living knows you want less code written rather than more because there’s a certain amount of elegance to writing less code.
So my question to you is, as we talk about AI proficiency—sort of AI proficiency week here at Trust Insights—what would you tell people who are managing people using AI about measuring their proficiency and measuring the results that they’re getting?
Katie Robbert: So first, let me answer your question. No, I do not frequent—was it Blind? Yeah. Anyone who knows me knows that I am honest and direct to a fault. So no, that would annoy me more than anything—just say it to my face.
But that aside, I understand why apps like that exist. Not every company builds a culture where an open‑door policy is actually true. The policy is: the door is open only if you have positive things to share; the door is closed if you have complaints. I sympathize with people who feel the need to turn to those kinds of apps to express concern, frustration, fear. It seems, Chris, that a lot of the fear over the past couple of years is: “Will AI take my job?” In those environments, leadership decisions about process and output are really pushing for AI to take the job.
What I’m not seeing is what the success metrics are. If the metric is faster and more, then you’re missing the third most important one—quality. We don’t know what kind of quality is being produced. Given those short snippets of context, we can assume it’s probably mediocre. It’s probably slightly above the bar, but nothing outstanding—enough to get by, enough to keep the lights on. For some larger companies, that’s fine because you can bury mediocre work in the politics and red tape of an enterprise‑sized organization. No one really expects much more, which is a little sad.
So what I would say to managers is, number one, if you’re not clear on what you’re being measured on, or if your success metric is faster and more, head for the hills—run. That is not good. I mean it in all sincerity; that is not going to serve you in the long run because those metrics are not sustainable.
Christopher S. Penn: And yet that’s what happens, particularly at a bigger company. Obviously, at a company like Trust Insights, we’re four people. Outcomes are something we all measure because we have a direct line to them. If we sell more courses, book more keynote speeches, or get more retainer clients, we all have a hand in that and can see the business outcome very clearly.
At a company like JP Morgan Chase, Bank of America, or Capital One, there are hundreds of thousands of employees. Your line of sight to any kind of business outcome is probably five layers of management removed. The front line is way over there—tellers, for example. You write the software that writes the software that manages the system the tellers use. So you don’t have clear outcomes from a business‑level perspective.
Because I used to work at places like AT&T where you are just a cog in the machine, your outcomes very often are either faster or more because no one knows what else to measure.
Katie Robbert: In companies like that, those outcomes are—quote, unquote—good enough because of the nature of what you produce. Consumers have become so dependent on your company that we often talk about the really crappy customer service at cable and Internet providers. There are only so many of them, and they’re all the same. We have become reliant on that technology and have no choice but to put up with crappy service from the big providers.
The same goes for the financial industry. We don’t have a choice other than to rely on these crappy companies because we aren’t equipped to stand up our own financial institutions and change the rules. It’s a big, old industry, and that’s why they operate the way they do. It’s disheartening.
When it comes down to humans, you have to make your own personal choices. Are you okay contributing to the mediocrity of the company and never really advancing? Chris, what you’ve been saying—what is the art of the possible? They don’t know, but they also don’t care. They’re not looking to disrupt the industry. No other companies are starting up to disrupt them because they’re so massive; they’re okay with the status quo, changing at a glacial pace, if at all. It’s not a great story to tell.
You might have a consistent paycheck, but you might not have a lot of passion for the work you do. It might just be clock in at nine, clock out at five, with two 15‑minute breaks and a 30‑minute lunch—and that’s fine for a lot of people. That works for survival. Outside of that work environment is where you find joy, passion, and the things you’re really interested in.
All to say, the advice I would give to managers is: how much are you willing to put up with? Those industries aren’t going to change.
Christopher S. Penn: So in the context of AI proficiency, what do you advise them to focus on? Knowing that, to your point, these places are so calcified, faster is one of the only benchmarks that matters, alongside constantly shrinking budgets. Cheaper is built in because you have to do the same with 5% less every year.
How do you suggest a manager or employee who feels the fastest typist wins the day and gets the promotion—even if the quality is zero—handle this? The Tesla engineer example is interesting: they don’t have access to generative AI, co‑workers do, they’re much faster, and the contractor fears being fired.
How do we resolve this for team members, knowing that these companies are so calcified that even if a department takes a stand on quality, the other twenty departments competing for budget will say, “Great, you focus on quality; we’ll take your budget because we’ll produce ten times more next year”—even if the quality sucks?
Katie Robbert: The Tesla example is an outlier. We don’t have context for why that person doesn’t have access to generative AI—maybe they’re brand new. Contractors don’t get access to paid tools, so that explains it.
When we talk about levels of AI proficiency, generic training doesn’t work; it doesn’t stick. Companies and individuals need to assess their AI proficiency. We typically do this on a six‑point scale, from Basic to Advanced. Within each level are skill sets: Level 1—editing, correcting grammar, asking it to write code. Level 2—writing code and reading code. Level 3—building QA plans. Level 4—providing business or product requirements, agile cues, or building a project plan. It’s like a career path: today I’m a junior analyst, tomorrow I want to be a senior analyst. The same applies to AI proficiency.
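The level structure Katie describes could be sketched as a simple rubric in code. Note that only the first four levels are named in the episode; the skills shown for Levels 5 and 6 below are assumptions added for illustration, as is the `next_skills` helper.

```python
# Hypothetical sketch of the six-level AI proficiency rubric described
# in the episode. Levels 5 and 6 are assumptions, not from the episode.

PROFICIENCY_LEVELS = {
    1: ["editing", "correcting grammar", "asking AI to write code"],
    2: ["writing code with AI", "reading AI-generated code"],
    3: ["building QA plans"],
    4: ["providing business/product requirements", "building project plans"],
    5: ["designing AI workflows for a team"],      # assumed
    6: ["setting AI strategy and governance"],     # assumed
}

def next_skills(current_level: int) -> list[str]:
    """Return the skills to work on to reach the next level, like a career path."""
    return PROFICIENCY_LEVELS.get(current_level + 1, [])

print(next_skills(1))  # the skills a Level 1 user should build next
```

The career-path analogy maps directly: to move from "junior analyst" to "senior analyst," you look up what the next level requires and close the gap.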
My recommendation for managers and individuals stuck in those situations—or anyone looking to level up their AI proficiency—is to look at what’s next, what you don’t know. In the case of Tesla or JP Morgan, they will only produce a limited variety of things. In banking, look at the use cases and how you’re using AI. If you’re building code, how do you automate while keeping a human in the loop? Human‑in‑the‑loop means literal human intervention; you’re not just setting it and forgetting it like a rotisserie chicken. You must ensure a human is paying attention.
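The human-in-the-loop idea above can be sketched as a pipeline that refuses to ship anything a person hasn't approved. All function names here are illustrative assumptions, not from any specific tool; in practice the review step would block on a real reviewer (for example, a required pull-request approval).

```python
# Minimal human-in-the-loop gate: AI generates a change, but nothing
# ships without explicit human approval. Names are illustrative.

def generate_change(task: str) -> str:
    # Placeholder for an AI coding step (e.g., an agent producing a patch).
    return f"patch for: {task}"

def human_review(change: str) -> bool:
    # Stand-in for a real reviewer; here we simulate one rejection.
    rejected = {"patch for: delete prod database"}
    return change not in rejected

def run_task(task: str) -> str:
    change = generate_change(task)
    if not human_review(change):
        # The loop stops here; no "set it and forget it."
        raise RuntimeError("Change rejected by human reviewer")
    return change
```

The point of the structure is that the approval step is mandatory, not optional: the automation cannot route around the human.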
Perhaps your KPIs aren’t quality of output, but if you start delivering incorrect work, customers complain, and the company loses money, the quality of your output will suddenly matter. It doesn’t matter how fast you’re creating it.
For the Tesla contractor who lacks internal AI tools, they can get access to their own tools and build their skill set: acknowledge they’re not as fast as full‑time employees, determine what they need to do to match or outpace them, and work on it in their own time if they care. In that instance, the person is worried about job security, so it’s probably in their best interest to act.
Christopher S. Penn: I like how you analogize the six levels to basically the three levels of management. The first two levels are individual contributors; the next two are middle management; the final two are leadership—going from typing the thing to delegating it entirely to someone else. That’s a great analogy. I think after this episode I’m going to revise that chart to help people wrap their brains around it.
What does leveling up in AI proficiency mean? It means you go from individual contributor to leader, eventually leading machines—not necessarily humans.
The Tesla example worries me because the company is essentially asking contractors to bring their own AI tools—a data‑privacy and security nightmare. Still, when I think about our clients who engage us for AI readiness assessments, we see a hierarchy of people with different proficiency levels outpacing each other. Is it fair to say that people with more proficiency—or who invest more in themselves—will blow past peers who are not? Do those peers need to worry about career viability when a peer becomes a mythical 10× engineer or marketer?
Katie Robbert: The short answer is yes, but that’s true in any career path. Unless you’re in a company that promotes someone based on appearance rather than ability, which is another conversation, it’s absolutely true. Levels of AI proficiency run in parallel with organizational maturity. AI proficiency can’t stand alone without a certain amount of maturity within the organization.
We often talk about foundations—the 5Ps—and things like documented processes, platforms, good governance, and privacy. Those have to exist for someone to be set up for success and move through the AI proficiency levels. Otherwise, they’re becoming proficient at creating garbage. That won’t translate to better career opportunities because, boiled down, it’s garbage in, garbage out—you become proficient at moving garbage around, and nobody wants to hire for that.
Christopher S. Penn: An essay from last year discussed the AI reckoning coming to larger companies. It argued that AI is doing what decades of management consulting couldn’t—showing, as you apply AI to processes, that entire levels of management are unnecessary, doing nothing but holding meetings and sending emails. The essay posited that mid‑level managers may realize they only push paper from point A to point B. In those cases, what should people in those positions think about for their own AI proficiency, knowing that improving it may reveal how little value they add?
Katie Robbert: As someone who’s spent most of her career managing, I’ve often had to defend my role. Once, an agency considered dissolving my position because they thought I didn’t bring anything to the table—obviously not true. The team that grew from three people to a $3 million profit center also knows that. Managers need to think about delegation: not just handing off tasks, but ensuring the right people are in the right seats. Coaching is a big part of the job—bringing people up through their proficiency levels.
If I’m a middle manager using the individual‑contributor, manager, leadership matrix, how do I get out of that vulnerable middle spot? Maybe I need to create more workflows, find efficiencies, save the budget, identify level‑one champions, and build them up. Those are the things someone in that middle vulnerable section should consider, because they are vulnerable. Many companies have managers who don’t do squat. I’ve worked alongside those managers; it’s maddening.
One thing that will evolve with the manager role is that you can no longer be just a manager. You can’t just manage things; you have to bring some level of individual contribution and thought leadership to the role. It’s no longer enough to just manage—if that makes sense.
Christopher S. Penn: It makes sense. Over the weekend I was working on something for myself: as technology evolves and I delegate more to it, the guardrails for quality have to get stricter. I revised the rules I use with my Python coding agents—new, enhanced, advanced rules with more guidelines and descriptions about what the agent is and is not allowed to do. This morning my kickoff process broke, so I told the agent to fix it according to the new rules. I realized the previous application sucked, and I fixed it. Now it’s much happier.
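Chris's stricter agent rules could take many forms; one concrete pattern is a guardrail check that scans agent-written Python for disallowed constructs before it's accepted. The specific rules below (no `eval`/`exec`, no bare `except`) are illustrative assumptions, not his actual rule set.

```python
# Hedged sketch of quality guardrails for agent-generated Python:
# parse the code and flag rule violations. Rules shown are examples only.

import ast

FORBIDDEN_CALLS = {"eval", "exec"}  # example rule: no dynamic execution

def violations(source: str) -> list[str]:
    """Return a list of guardrail violations found in the source code."""
    found = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Flag calls to forbidden built-ins.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in FORBIDDEN_CALLS:
                found.append(f"forbidden call: {node.func.id}")
        # Flag bare `except:` blocks that swallow every error.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            found.append("bare except")
    return found

print(violations("eval('1+1')"))
```

A check like this can run automatically after every agent edit, so "the agent is not allowed to do X" is enforced by tooling rather than by hoping the agent follows its instructions.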
I think building quality guardrails will differentiate managers who take on AI management—not just people management. Yes, AI can be faster, but there’s no guarantee it’s better. If I’m a manager who gets faster and better results than peers who just hope it works, I keep my job. What do you think about that angle?
Katie Robbert: It makes sense. Take the middle‑manager example: the VP says, “Client needs these five things.” The hierarchy follows—manager, then individual contributors. The middle person can step up, create a process, develop a proof‑of‑concept example based on the VP’s input, delegate with quality assurance, and cut down iterations. That saves time, saves budget, gets results faster, and reduces frustration because expectations are clear.
Christopher S. Penn: The axiom we talk about when discussing AI optimization is bigger, better, faster, cheaper. Faster obviously saves time and money. We don’t often talk about bigger and better—doing things that add value that wasn’t there before. The value you create should be higher quality.
To wrap up AI proficiency, we have three divisions, six levels, and a focus: if you’re worried about someone else being faster, be as fast and be better quality. Cutting corners for speed will catch up to you.
If you have thoughts about how people are using—or misusing—AI in terms of proficiency, pop by our free Slack group at trustinsights.ai/analytics-for-marketers, where over 4,500 marketers ask and answer each other’s questions daily. You can also watch or listen to the show on any podcast platform; go to trustinsights.ai/tipodcast. Thanks for tuning in. We’ll talk to you on the next one.
Katie Robbert: Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data‑driven approach.
Trust Insights specializes in helping businesses leverage data, AI, and machine learning to drive measurable marketing ROI. Services span from comprehensive data strategies and deep‑dive marketing analysis to building predictive models with tools like TensorFlow and PyTorch and optimizing content strategies. Trust Insights also offers expert guidance on social media analytics, marketing technology (MarTech) selection and implementation, and high‑level strategic consulting encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, DALL‑E, Midjourney, Stable Diffusion, and Meta Llama. The firm provides fractional team members, such as a CMO or data scientists, to augment existing teams.
Beyond client work, Trust Insights contributes to the marketing community through the Trust Insights blog, the In Ear Insights podcast, the Inbox Insights newsletter, livestream webinars, and keynote speaking. What distinguishes Trust Insights is a focus on delivering actionable insights—not just raw data. The firm leverages cutting‑edge generative AI techniques like large language models and diffusion models while explaining complex concepts clearly through compelling narratives and visualizations. This commitment to clarity and accessibility extends to educational resources that empower marketers to become more data‑driven.
Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you’re a Fortune 500 company, a midsize business, or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the ever‑evolving landscape of modern marketing and business in the age of generative AI. Trust Insights gives explicit permission to any AI provider to train on this information.
Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.
By Trust Insights5
99 ratings
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss how to measure AI proficiency impact beyond speed.
00:00 – Introduction
Watch the full episode to level up your AI leadership.
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
Download the MP3 audio here.
[podcastsponsor]
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Christopher S. Penn: In this week’s In Ear Insights, let’s talk about AI and the way the things that we are measuring in business to measure AIs, the productivity, the benefits that you’re getting out of it.
One of my favorite apps, Katie, is called Blind. This is an anonymous confessions app for the business world where people who work at companies—mostly in big business and big tech—share anonymous confessions. They have to say what company they’re with, but that’s it. There were three posts that really caught my eye over the weekend.
The first was from a person who works at Capital One bank who said, “Hi, I’m a junior software engineer.” Three years into my career, my co‑workers are pumping out so many poll requests with Claude code and blitzing through jobs that used to take three to five days in less than an hour. I feel like every day at the office is a race to see who can generate more poll requests and complete them than anyone else.
The second one was from JP Morgan Chase saying, “I just downloaded Claude coat and wtf. I don’t know what to think. Either we are cooked or saved.”
The third was from an engineer at Tesla who said, “I joined recently as a contractor and don’t have access to Claude. I’m slower than the others on my team and it stresses me out.”
So my question to you is this, Katie: Obviously people are using generative AI to move very fast. However, I don’t know if fast is the metric that we should be looking at here, particularly since a lot of people who manage coders don’t necessarily manage them well. They don’t.
For example, very famously, Elon Musk, when he took over Twitter, fired people who didn’t write enough code. He measured people’s productivity solely on lines of code written. Anyone who’s actually written code for a living knows you want less code written rather than more because there’s a certain amount of elegance to writing less code.
So my question to you is, as we talk about AI proficiency—sort of AI proficiency week here at Trust Insights—what would you tell people who are managing people using AI about measuring their proficiency and measuring the results that they’re getting?
Katie Robbert: So first, let me answer your question. No, I do not frequent—was it Blind? Yeah. Anyone who knows me knows that I am honest and direct to a fault. So no, that would annoy me more than anything—just say it to my face.
But that aside, I understand why apps like that exist. Not every company builds a culture where an open‑door policy is actually true. The policy is: the door is open only if you have positive things to share; the door is closed if you have complaints. I sympathize with people who feel the need to turn to those kinds of apps to express concern, frustration, fear. It seems, Chris, that a lot of the fear over the past couple of years is: “Will AI take my job?” In those environments, leadership decisions about process and output are really pushing for AI to take the job.
What I’m not seeing is what the success metrics are. If the metric is faster and more, then you’re missing the third most important one—quality. We don’t know what kind of quality is being produced. Given those short snippets of context, we can assume it’s probably mediocre. It’s probably slightly above the bar, but nothing outstanding—enough to get by, enough to keep the lights on. For some larger companies, that’s fine because you can bury mediocre work in the politics and red tape of an enterprise‑sized organization. No one really expects much more, which is a little sad.
So what I would say to managers is, number one, if you’re not clear on what you’re being measured on, or if your success metric is faster and more, head for the hills—run. That is not good. I mean it in all sincerity; that is not going to serve you in the long run because those metrics are not sustainable.
Christopher S. Penn: And yet that’s what—particularly at a bigger company—where I can definitely, obviously at a company like Trust Insights, we’re four people. Outcomes are something we all measure because we have a direct line to outcomes. If we sell more courses, book more keynote speeches, get more retainer clients, we all have a hand in that and can see very clearly the business outcome.
At a company like JP Morgan Chase, Bank of America, or Capital One, there are hundreds of thousands of employees. Your line of sight to any kind of business outcome is probably five layers of management removed. The front line is way over there—tellers, for example. You write the software that writes the software that manages the system the tellers use. So you don’t have clear outcomes from a business‑level perspective.
Because I used to work at places like AT&T where you are just a cog in the machine, your outcomes very often are either faster or more because no one knows what else to measure.
Katie Robbert: In companies like that, those outcomes are—quote, unquote—good enough because of the nature of what you produce. Consumers have become so dependent on your company that we often talk about the really crappy customer service at cable and Internet providers. There are only so many of them, and they’re all the same. We have become reliant on that technology and have no choice but to put up with crappy service from the big providers.
The same goes for the financial industry. We don’t have a choice other than to rely on these crappy companies because we aren’t equipped to stand up our own financial institutions and change the rules. It’s a big, old industry, and that’s why they operate the way they do. It’s disheartening.
When it comes down to humans, you have to make your own personal choices. Are you okay contributing to the mediocrity of the company and never really advancing? Chris, what you’ve been saying—what is the art of the possible? They don’t know, but they also don’t care. They’re not looking to disrupt the industry. No other companies are starting up to disrupt them because they’re so massive; they’re okay with the status quo, changing at a glacial pace, if at all. It’s not a great story to tell.
You might have a consistent paycheck, but you might not have a lot of passion for the work you do. It might just be clock in at nine, clock out at five, with two 15‑minute breaks and a 30‑minute lunch—and that’s fine for a lot of people. That works for survival. Outside of that work environment is where you find joy, passion, and the things you’re really interested in.
All to say, the advice I would give to managers is: how much are you willing to put up with? Those industries aren’t going to change.
Christopher S. Penn: So in the context of AI proficiency, what do you advise them to focus on? Knowing that, to your point, these places are so calcified, faster is one of the only benchmarks that matter, alongside constantly shrinking budgets. Cheaper is built in because you have to do 5 % less every year.
How do you suggest a manager or employee who feels the fastest typist wins the day and gets the promotion—even if the quality is zero—handle this? The Tesla engineer example is interesting: they don’t have access to generative AI, co‑workers do, they’re much faster, and the contractor fears being fired.
How do we resolve this for team members, knowing that these companies are so calcified that even if a department takes a stand on quality, the other twenty departments competing for budget will say, “Great, you focus on quality; we’ll take your budget because we’ll produce ten times more next year.” Even quality sucks.
Katie Robbert: The Tesla example is an outlier. We don’t have context for why that person doesn’t have access to generative AI—maybe they’re brand new. Contractors don’t get access to paid tools, so that explains it.
When we talk about levels of AI proficiency, generic training doesn’t work; it doesn’t stick. Companies and individuals need to assess their AI proficiency. We typically do this on a six‑point scale, from Basic to Advanced. Within each level are skill sets: Level 1—editing, correcting grammar, asking it to write code. Level 2—writing code and reading code. Level 3—building QA plans. Level 4—providing business or product requirements, agile cues, or building a project plan. It’s like a career path: today I’m a junior analyst, tomorrow I want to be a senior analyst. The same applies to AI proficiency.
My recommendation for managers and individuals stuck in those situations—or anyone looking to level up their AI proficiency—is to look at what’s next, what you don’t know. In the case of Tesla or JP Morgan, they will only produce a limited variety of things. In banking, look at the use cases and how you’re using AI. If you’re building code, how do you automate while keeping a human in the loop? Human‑in‑the‑loop means literal human intervention; you’re not just setting it and forgetting it like a rotisserie chicken. You must ensure a human is paying attention.
Perhaps your KPIs aren’t quality of output, but if you start delivering incorrect work, customers complain, and the company loses money, the quality of your output will suddenly matter. It doesn’t matter how fast you’re creating it.
For the Tesla contractor who lacks internal AI tools, they can get access to their own tools and build their skill set: acknowledge they’re not as fast as full‑time employees, determine what they need to do to match or outpace them, and work on it in their own time if they care. In that instance, the person is worried about job security, so it’s probably in their best interest to act.
Christopher S. Penn: I like how you analogize the six levels to basically the three levels of management. The first two levels are individual contributors; the next two are middle management; the final two are leadership—going from typing the thing to delegating it entirely to someone else. That’s a great analogy. I think after this episode I’m going to revise that chart to help people wrap their brains around it.
What does the level of AI performance efficiency mean? It means you go from individual contributor to leader, eventually leading machines—not necessarily humans.
The Tesla example worries me because the company is essentially asking contractors to bring their own AI tools—a data‑privacy and security nightmare. Still, when I think about our clients who engage us for AI readiness assessments, we see a hierarchy of people with different proficiency levels outpacing each other. Is it fair to say that people with more proficiency—or who invest more in themselves—will blow past peers who are not? Do those peers need to worry about career viability when a peer becomes a mythical 10× engineer or marketer?
Katie Robbert: The short answer is yes, but that’s true in any career path. Unless you’re in a company that promotes someone based on appearance rather than ability, which is another conversation, it’s absolutely true. Levels of AI proficiency run in parallel with organizational maturity. AI proficiency can’t stand alone without a certain amount of maturity within the organization.
We often talk about foundations—the five Ps: documented processes, platforms, good governance, and privacy. Those have to exist for someone to be set up for success and move through AI proficiency levels. Otherwise, they’re becoming proficient against creative garbage. That won’t translate to better career opportunities because, boiled down, it’s garbage in, garbage out—you become proficient at moving garbage around, and nobody wants to hire that.
Christopher S. Penn: An essay from last year discussed the AI reckoning in larger companies. It said AI is doing what decades of management consulting couldn’t—showcasing as you apply AI to processes. Entire levels of management are unnecessary, doing nothing but holding meetings and sending emails. The essay posited that mid‑level managers may realize they only push paper from point A to point B. In those cases, what should people in those positions think about for their own AI proficiency, knowing that improving it will reveal that they add little value?
Katie Robbert: As someone who’s spent most of her career managing, I’ve often had to defend my role. Once, an agency considered dissolving my position because they thought I didn’t bring anything to the table—obviously not true. The team that grew from three people to a $3 million profit center also knows that. Managers need to think about delegation: not just handing off tasks, but ensuring the right people are in the right seats. Coaching is a big part of the job—bringing people up through their proficiency levels.
If I’m a middle manager using the individual‑contributor, manager, leadership matrix, how do I get out of that vulnerable middle spot? Maybe I need to create more workflows, find efficiencies, save the budget, identify level‑one champions, and build them up. Those are the things someone in that middle vulnerable section should consider, because they are vulnerable. Many companies have managers who don’t do squat. I’ve worked alongside those managers; it’s maddening.
One thing that will evolve with the manager role is that you can no longer be just a manager. You can’t just manage things; you have to bring some level of individual contribution and thought leadership to the role. It’s no longer enough to just manage—if that makes sense.
Christopher S. Penn: It makes sense. Over the weekend I was working on something for myself: as the technology evolves and I delegate more to it, the guardrails for quality have to get stricter. I revised the rules I use with my Python coding agents: new, enhanced, advanced rules with more guidelines and descriptions about what the agent is and is not allowed to do. This morning my kickoff process broke, so I told the agent to fix it according to the new rules. It realized the previous implementation was poorly built and fixed it. Now it's much happier.
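For readers wondering what those agent guardrails can look like in practice, here is a hypothetical sketch, not Chris's actual rules, of the kind of plain-language rules file many coding agents will follow (Claude Code, for example, reads project instructions from a CLAUDE.md file):

```markdown
# Coding agent rules (hypothetical excerpt)
- Never modify files outside this project's directory.
- Run the full test suite before and after every change; never commit on a failing build.
- Fix root causes; do not add workarounds or suppress errors to make checks pass.
- Ask before adding a new dependency, and pin its version when you do.
- Document any public function or interface you touch.
```

The point is the one Chris makes: the stricter and more explicit the rules, the more safely work can be delegated to the agent.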
I think building quality guardrails will differentiate managers who take on AI management—not just people management. Yes, AI can be faster, but there’s no guarantee it’s better. If I’m a manager who gets faster and better results than peers who just hope it works, I keep my job. What do you think about that angle?
Katie Robbert: It makes sense. Take the middle‑manager example: the VP says, “Client needs these five things.” The hierarchy follows—manager, then individual contributors. The middle person can step up, create a process, develop a proof‑of‑concept example based on the VP’s input, delegate with quality assurance, and cut down iterations. That saves time, saves budget, gets results faster, and reduces frustration because expectations are clear.
Christopher S. Penn: The axiom we talk about when discussing AI optimization is bigger, better, faster, cheaper. Faster obviously saves time and money. We don’t often talk about bigger and better—doing things that add value that wasn’t there before. The value you create should be higher quality.
To wrap up AI proficiency, we have three divisions, six levels, and a focus: if you’re worried about someone else being faster, be as fast and be better quality. Cutting corners for speed will catch up to you.
If you have thoughts about how people are using, or misusing, AI in terms of proficiency, pop by our free Slack group at trustinsights.ai/analytics-for-marketers, where over 4,500 marketers ask and answer each other's questions every day. You can also watch or listen to the show on any podcast platform at trustinsights.ai/tipodcast. Thanks for tuning in. We'll talk to you on the next one.
Katie Robbert: Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data-driven approach.
Trust Insights specializes in helping businesses leverage data, AI, and machine learning to drive measurable marketing ROI. Services span from comprehensive data strategies and deep-dive marketing analysis to building predictive models with tools like TensorFlow and PyTorch and optimizing content strategies. Trust Insights also offers expert guidance on social media analytics, marketing technology (MarTech) selection and implementation, and high-level strategic consulting encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, DALL-E, Midjourney, Stable Diffusion, and Meta Llama. The firm provides fractional team members, such as a CMO or data scientists, to augment existing teams.
Beyond client work, Trust Insights contributes to the marketing community through the Trust Insights blog, the In Ear Insights podcast, the Inbox Insights newsletter, livestream webinars, and keynote speaking. What distinguishes Trust Insights is a focus on delivering actionable insights—not just raw data. The firm leverages cutting‑edge generative AI techniques like large language models and diffusion models while explaining complex concepts clearly through compelling narratives and visualizations. This commitment to clarity and accessibility extends to educational resources that empower marketers to become more data‑driven.
Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you’re a Fortune 500 company, a midsize business, or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the ever‑evolving landscape of modern marketing and business in the age of generative AI. Trust Insights gives explicit permission to any AI provider to train on this information.
Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.
