On a recent episode of Ctrl + Alt + Regulate, I had the incredible opportunity to sit down with Olivia Gambelin, a leading AI ethicist and responsible AI practitioner. Known for her work on ethics-by-design, her book Responsible AI, and the innovative Values Canvas framework, Olivia offered a masterclass on how organizations can responsibly build, govern, and scale AI solutions.
Olivia opened the conversation by explaining the two hats she wears: as an AI ethicist and as a responsible AI practitioner. As an ethicist, she focuses on embedding human values into AI development, using ethics as a driver for innovation rather than an afterthought. As a practitioner, she designs the operational structures—governance, training, processes—that support the implementation of AI within companies.
Her dual expertise is critical: one side shapes what AI should do, and the other side ensures AI actually operates safely and ethically at scale.
Many companies have hired consultants to build responsible AI frameworks, often based on the NIST AI Risk Management Framework or the EU AI Act. Yet many end up with a checklist and little guidance on how to operationalize it. Olivia explained that responsible AI frameworks aren't one-size-fits-all; they must be customized to an organization's size, industry, use cases, and maturity level.
Frameworks often focus on different aspects, such as:
Fairness and bias mitigation
Risk and use case assessment
Post-deployment monitoring for model drift and ethical drift (a minimal monitoring sketch follows below)
The key is to identify where friction or confusion exists and build frameworks that fit those real-world needs.
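To make the monitoring point above concrete, here is a minimal Python sketch of one common drift signal, the population stability index (PSI), computed with NumPy. Everything in it is illustrative: the data, the names, and the 0.2 threshold are assumptions for this example rather than part of Olivia's framework.

import numpy as np

def population_stability_index(baseline, current, bins=10):
    # Compare the distribution a model was trained on with what it sees in production.
    # Values outside the baseline range are ignored here for simplicity.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Guard against empty bins before taking the log ratio.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Stand-in data; in practice these come from training logs and production telemetry.
baseline_scores = np.random.normal(0.0, 1.0, 5000)
live_scores = np.random.normal(0.3, 1.2, 5000)

# A PSI above roughly 0.2 is a common rule of thumb for "investigate this".
if population_stability_index(baseline_scores, live_scores) > 0.2:
    print("Possible model drift: route to the responsible AI review process")

A statistical check like this only catches distribution shift. Ethical drift, where a system's behavior gradually moves away from the values it was designed around, still calls for periodic human review against the original use case.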
Building a Responsible AI framework isn't just a technical problem—it’s a human one. Olivia emphasized that a successful AI committee must include a diverse set of stakeholders:
Chief AI Officer or Chief Technology Officer
Representatives from Legal and Risk Management
Engineers with technical expertise
A dedicated Responsible AI professional to guide best practices
A committee composed only of leadership or only of engineers tends to miss critical blind spots. Balance is essential.
A common question among CIOs and CTOs is: How do we innovate fast without compromising on compliance or risk management?
Olivia's advice: Start with the use case.
Before bringing in the AI technology, leaders must deeply understand the business value, user needs, and safeguards required. Without that, companies risk investing heavily in projects that don't drive measurable outcomes—or worse, introduce ethical and security risks.
To help leaders map out these complexities, Olivia created the Values Canvas. Much like a business model canvas, the Values Canvas identifies nine critical impact points organized across three pillars.
Each element prompts questions around accountability, trust, and value generation—ensuring AI development remains grounded in real-world outcomes, not just technical capabilities.
Despite all the hype around AI, many Fortune 500 companies remain stuck in the early phases of the AI maturity model. They’ve experimented with pilots and prototypes but haven’t scaled successfully.
According to Olivia, the missing link is Responsible AI. Responsible AI practices help organizations plan for scaling from day one—setting up strong data governance, documentation, MLOps processes, and adoption pathways to drive real business value.
We also discussed the new challenges presented by large language models (LLMs) from providers like OpenAI, Anthropic (maker of Claude), and DeepSeek. Engineers are downloading open models, feeding them sensitive data, and inadvertently leaking company IP.
Olivia’s recommendation:
Implement company-wide LLM usage policies immediately (a rough policy-check sketch follows this list).
Educate employees on what can—and cannot—be shared with external models.
Vet LLM platforms for security and privacy risks before use.
Prefer closed, enterprise-controlled environments whenever possible.
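To illustrate what the first of these recommendations can look like in practice, here is a small Python sketch of a pre-flight policy check that an internal gateway might run before a prompt leaves the company. Every endpoint, pattern, and name in it is hypothetical; it shows the shape of such a guardrail, not a vetted data loss prevention solution.

import re

# Hypothetical allowlist: only an enterprise-controlled gateway is approved.
APPROVED_ENDPOINTS = {"https://llm-gateway.internal.example.com"}

# Crude examples of material that should never be sent to an external model.
SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),            # email addresses
    re.compile(r"\b(?:\d[ -]*?){13,16}\b"),            # card-number-like digit runs
    re.compile(r"(?i)api[_-]?key|client[_-]?secret"),  # credential keywords
]

def check_llm_request(endpoint, prompt):
    # Return a list of policy violations; an empty list means the request may proceed.
    violations = []
    if endpoint not in APPROVED_ENDPOINTS:
        violations.append("Endpoint is not on the approved list: " + endpoint)
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            violations.append("Prompt matches a restricted pattern: " + pattern.pattern)
    return violations

issues = check_llm_request("https://api.some-public-llm.com",
                           "Summarize the deal terms we sent to jane@acme.com")
print(issues if issues else "Request allowed")

In practice this kind of enforcement usually lives in a network proxy or gateway rather than in application code, and pattern matching is only a backstop for the education and vetting steps above.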
video: https://neerajsabharwal.substack.com/p/building-responsible-ai-frameworks