Hosts: Srini Annamaraju & David Royle.
“The AI bubble is the wrong fear.”
The real threat sits inside your own walls: shadow AI you don’t see, boards that confuse risk aversion with risk management, and leaders trying to govern a technology they don’t actually understand.
We unpack why mid-market boards are exposed, how shadow AI reveals the truth about how your org really works, and what a realistic 12-month AI plan actually looks like.
And yes—why people, not models, are now the biggest AI risk vector.
The conversation revolves around a recent paper authored by David; a link to the post with the details is here.
What we cover
- Bubble noise vs fundamentals - Valuations swing wildly, but enterprise AI maturity rises daily. We explain why bubble talk has little to do with the technology reshaping your org.
- Shadow AI as diagnosis - It’s not a tooling problem but a symptom of mismatched expectations.
- Boards: from passive listeners to owners - Why literacy is step zero, and why chairs need to move fast.
- Risk aversion trap - The boards that “get it” flip from “should we?” to “how quickly, safely, and visibly can we?”
- 90-day governance playbook - Inventory → Validate → Govern (see the sketch after this list).
- Top-down vs bottom-up AI - How grassroots use cases and board-led operating models collide.
- 12-month reality check - You won’t be AI-first in a year. But you can be an AI-literate, AI-safe, AI-enabled organisation in 12 months.
- Explainability anxiety - Why boards demand transparency from AI they never asked of spreadsheets or humans.
- The uncomfortable truth - The biggest AI risk isn’t the model. It’s your people.
- Evals preview - Why audits, trust contracts, drift checks, and forward-deployed evaluators will soon be board-level concerns.
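For the technically inclined, here is a minimal, hypothetical sketch of the Inventory → Validate → Govern loop in Python. The `AITool` fields, guardrail check, and escalation rule are illustrative assumptions for this write-up, not the framework David lays out in the paper.

```python
# Hypothetical sketch of the Inventory -> Validate -> Govern loop from the
# episode. Field names and policy rules are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AITool:
    name: str
    owner: str                 # accountable business owner
    handles_pii: bool          # does it touch sensitive data?
    validated: bool = False    # has risk/legal signed off?
    guardrails: list[str] = field(default_factory=list)

def inventory(tools: list[AITool]) -> list[AITool]:
    """Step 1: surface every tool in use, sanctioned or shadow,
    with the most sensitive ones first."""
    return sorted(tools, key=lambda t: t.handles_pii, reverse=True)

def validate(tool: AITool) -> AITool:
    """Step 2: sign off only once at least one guardrail exists."""
    tool.validated = bool(tool.guardrails)
    return tool

def govern(tools: list[AITool]) -> list[str]:
    """Step 3: escalate anything unvalidated that touches sensitive data."""
    return [t.name for t in tools if t.handles_pii and not t.validated]

if __name__ == "__main__":
    found = inventory([
        AITool("ChatGPT (marketing)", owner="CMO", handles_pii=False),
        AITool("CV screener", owner="HR", handles_pii=True,
               guardrails=["human review"]),
        AITool("Notes summariser", owner="Sales", handles_pii=True),
    ])
    escalate = govern([validate(t) for t in found])
    print("Escalate to the board:", escalate)  # ['Notes summariser']
```

The point is the shape of the loop, not the code: surface everything first, gate sign-off on real guardrails, then escalate whatever is both sensitive and unvalidated.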
Chapters
- AI bubble vs enterprise fundamentals
- Shadow AI as a symptom
- Boards falling behind
- Risk aversion vs risk management
- 90-day governance plan
- A realistic 12-month AI horizon
- The real AI risk: people
- Intro to enterprise evals
Takeaways
- Shadow AI is a mirror - It reveals gaps in culture, process, and leadership direction, not tooling.
- Boards must lead, not observe - Active literacy and ownership are key.
- Governance is the stabiliser - Inventories, validations, guardrails, and oversight reduce drift and exposure.
- Explainability is contextual - Set boundaries rather than expecting magic.
- People are the attack surface - Don't overlook non-malicious misuse.
- 12 months = foundations - Literacy, safety, and one high-value use case per function. That's the win.
Who it’s for
Board members, CEOs, COOs, CIOs, CROs, and mid-market operators needing a grounded, real-world view of AI risk, governance, and organisational maturity.
Help Spread the Word - Enjoyed the episode? Follow the show, leave a review, and share with a colleague grappling with shadow AI, governance gaps, or board-level AI decisions. Want to join as a guest or sponsor a future episode? Get in touch!