How do you balance the endless potential of AI with the need for strict control?
In this episode of the AI First Podcast, Jon Herstein, Chief Customer Officer at Box, sits down with Jeff Chambers, VP of IT Technology at WongDoody, an Infosys company and global creative technology firm. Jeff digs into how WongDoody is using AI to drive innovation, from automating workflows to enhancing digital marketing strategies for global brands.
Learn how they're balancing the power of AI with strong governance, ensuring security and privacy while embracing new technology. Jeff also discusses the challenges of scaling AI responsibly, the importance of a human-in-the-lead approach, and AI's impact on WongDoody's internal culture and processes.
Key moments:
(00:00) Introduction
(01:04) Jeff Chambers introduces himself and WongDoody
(01:15) What breaks first when scaling AI across an organization?
(01:36) Security and privacy as the primary failure points in enterprise AI adoption
(02:25) The tension between enabling AI experimentation and enforcing governance
(02:47) How WongDoody follows Infosys's ISO 42001 framework to vet AI models
(03:50) Zero trust as the foundation for AI governance and scaling
(04:20) What's allowed, what's restricted, and how it's enforced across geographies
(04:31) Using Box Enterprise Advanced to deploy AI on data in a deliberate, controlled way
(05:49) Managing AI credits, usage tracking, and preventing uncontrolled spend
(05:57) Treating AI credits as an R&D expense and benchmarking models against each other
(08:07) What most organizations are underestimating about AI right now
(08:12) The missing strategy: experimentation lifecycle, change management, and sustained iteration
(09:07) The challenge of model deprecation and the need to continuously maintain AI solutions
(09:35) Why WongDoody recommends against prematurely customizing or fine-tuning models
(10:27) The first controls CIOs need before moving AI from experimentation to production
(10:49) Strategy, change champions, zero trust environments, and defining KPIs
(12:03) Protecting sensitive content in an AI-powered enterprise environment
(12:11) Restructuring user permissions and content access as the foundation for safe AI deployment
(13:13) How AI agents inherit user permissions — and why that creates risk
(14:14) Securing API gateways and monitoring all access points in AI-connected systems
(14:29) Is the concept of zero trust evolving in the age of AI agents?
(14:34) The gap between current AI governance tools and what's actually needed
(15:33) The case for AI agents having built-in guardrails, like employee security training
(15:53) Treating AI agents like new staff — they need training, oversight, and boundaries
(16:26) Defining responsible AI at WongDoody
(16:29) Vetting LLMs for privacy, security, regulatory compliance, and data residency
(17:57) Has AI delivered tangible business value beyond experimentation?
(18:11) Real outcomes: RFP processing, text-to-image generation, and design iteration
(19:53) Unpacking "human in the lead" vs. "human in the loop"
(20:30) Why continuous human oversight — not just final approval — is essential with AI tools
(21:26) Jeff's most controversial take: the ethical and environmental cost of AI
(21:37) Data centers, energy consumption, and the responsibility of governments and companies
(22:14) Are solutions coming for AI's environmental impact?
(22:53) Closing thoughts and takeaways