The AI Governance Brief

The Anti-Silo: General Staff/Workers—The Forgotten Stakeholders (Episode 6)

Forty-five percent of workers now use AI regularly. Confidence in using that AI? Down 18% in the last year.

That's not a typo. AI usage jumped 13% while trust collapsed.

Workers are using tools they don't trust, haven't been trained on, and increasingly fear will replace them. Fifty-seven percent of employees hide their AI usage from employers. Half can't tell if their AI-generated work is even accurate.

And here's the nightmare: 56% of the global workforce reports receiving NO recent training. None.

While management deploys AI at breakneck speed and HR scrambles to audit bias, frontline workers are left to figure it out alone—with their jobs on the line.

**The Great Disconnect:**

ManpowerGroup's 2026 Global Talent Barometer, released January 20th, reveals catastrophic results:

- AI usage jumped 13% to 45% of workers
- Confidence in using technology fell 18%
- For the first time in three years, overall worker confidence declined
- Baby Boomers: 35% decrease in tech confidence
- Gen X workers: 25% drop

More than half the global workforce—56%—reports receiving no recent training. Fifty-seven percent have no access to mentorship opportunities.

You're deploying AI faster than ever while systematically denying workers the support they need to use it.

**The Assumption That's Completely Wrong:**

The belief that workers are resistant to AI? Wrong.

A Weavix survey of 300 frontline manufacturing workers found:

- 74% are comfortable with AI-powered tools
- 87% are comfortable with data collection for safety and efficiency
- 81% report being MORE engaged at work than last year
- 94% are optimistic about workplace safety improvements in 2026

Nearly nine in ten frontline workers are FINE with AI monitoring if it improves safety and efficiency. The problem isn't worker resistance.

[CLIP] "Workers are comfortable with AI and data collection, but their leaders have hamstrung them with prehistoric communication devices or nothing at all."

67% of manufacturing workers still rely primarily on outdated two-way radios. 64% operate under smartphone restrictions. They're ready for AI. Management is blocking them with 1990s infrastructure.

**The Hidden AI Crisis:**

According to a KPMG and University of Melbourne study: 57% of employees HIDE their AI usage from employers.

They're using AI anyway. They just don't tell you.

And half of those workers can't tell whether the AI-generated content they're creating is even accurate. They're publishing work they don't trust because they need to keep up.

That's the "AI workslop" crisis: low-quality AI-generated content that "can sound authoritative and accurate but lacks the examples and detail that individuals require for behavior change."

This isn't just inefficiency. It's organizational sabotage from the bottom up, created by management's failure to include workers in the AI transformation.

**Four Worker-Level Failures:**

**Failure #1 - The Training Void:**

- Over 90% of global enterprises face critical skills shortages by 2026
- Sustained skills gaps put an estimated $5.5 trillion of global market performance at risk
- Only one-third of employees report receiving ANY AI training in the past year
- The OECD found that most AI training focuses on advanced skills that only 1% of jobs require

Result: "AI workslop"—managers using AI to write performance reviews without considering actual performance. AI-enabled dereliction of duty.

**Failure #2 - The Participation Gap:**

Who's typically on AI Governance Committees? C-Suite, IT leadership, Legal, Compliance, HR directors. Who's NOT? Frontline workers—the people who actually USE AI daily.

Workers with 20+ years of experience: Only 29% feel their feedback reaches decision-makers.

This creates "Shadow Participation": workers shaping AI adoption through workarounds, hidden usage, and informal experimentation. With 57% of usage hidden, most of your AI adoption lessons are invisible to you.

**Failure #3 - The Infrastructure Mismatch:**

81% of frontline workers report being MORE engaged than last year. 94% are optimistic about safety improvements.

What do you give them? Two-way radios from 1985.

You're spending millions on AI platforms while your frontline can't even send a text message with a photo.

**Failure #4 - The Feedback Vacuum:**

When AI makes a mistake that a frontline worker catches, what happens? In most organizations: Nothing. The worker fixes it manually, the AI never learns, the error repeats tomorrow.

You've created AI systems that can't learn from the people using them.

**The Frontline Stakeholder Model:**

**Principle #1 - Workers Are Stakeholders, Not Users:**

Stop calling them "end users." Users consume products. Stakeholders have vested interests in outcomes. Frontline workers' livelihoods depend on AI decisions about productivity, performance, and job security.

Stakeholders have rights:
- Right to understand how AI affects their work
- Right to contribute feedback that shapes AI deployment
- Right to transparent communication about AI-driven changes
- Right to training that enables effective AI participation
- Right to escalate concerns without retaliation

**Principle #2 - Frontline Workers Own Operational AI Intelligence:**

Workers know:
- Which AI recommendations make sense and which are nonsense
- Where AI saves time versus where it creates busywork
- Which automated decisions align with customer needs
- Where AI monitoring feels helpful versus invasive

That's operational AI intelligence. Your job is to extract it, not ignore it.

**Principle #3 - Participation Must Be Systematic, Not Symbolic:**

One frontline representative on a quarterly committee isn't participation. It's tokenism.

Real participation requires:
- Structured feedback loops with response protocols
- Frontline AI Champions Network with peer trainers
- Accessible training embedded in workflow
- Authority to override AI decisions with documentation

**The Participatory AI Framework:**

**Stage 1 - Pre-Deployment Frontline Consultation:**

Conduct an Operational Impact Assessment before any AI tool touches frontline work:
- How will this tool change daily workflow?
- What tasks will it eliminate, augment, or complicate?
- What new skills will workers need?
- Where might AI create errors that workers catch?

**Stage 2 - Phased Rollout with Frontline Champions:**

Create a Frontline AI Champions Network:
- Early adopters who demonstrate AI fluency
- Peer trainers for new AI tools
- Escalation points for AI concerns
- Beta testers for new deployments
- Authority to pause rollout if serious issues emerge

**Stage 3 - Embedded Training and Support:**

- Contextual help INSIDE the tool, not separate modules
- Peer learning sessions led by champions
- Safe practice environments without performance impact
- Micro-credentials for demonstrated AI competency

McKinsey's research: "For every two dollars top-performing sites spend on technology, they spend three on processes and five on capability building."

Stop spending 100% on tools and 0% on people.

**Stage 4 - Continuous Feedback and Iteration:**

- Weekly anomaly reporting with 48-hour IT response commitment
- Monthly worker feedback sessions—conversations, not surveys
- Quarterly AI tool performance reviews (AI performance, not worker performance)
- Clear authority to override AI with documentation

**Evidence This Works:**

- McKinsey Global Lighthouse Network: Top sites spend $5 on capability building per $2 on technology
- 74% of frontline workers comfortable with AI when given prop...

The AI Governance Brief, by Keith Hill