Raluca Crisan, CTO of Etiq AI, joins us to explore what it really takes to build responsible, scalable AI in today’s high-pressure, fast-iterating tech environments. Drawing on her experience in data science and model governance across startups and impact-driven organizations like Zinc, Raluca shares how teams can move from reactive fixes to proactive safeguards without slowing innovation down.
We unpack why most data scientists still struggle with bias detection and testing, how orchestration tooling is evolving to support real-world deployment cycles, and what it means to operationalize responsibility from inside the data pipeline. The conversation touches on invisible risks in behavioral data, lessons from building testing tools that data scientists actually want to use, and the nuanced challenge of debugging AI failures in live environments.
We also look at why generative AI has accelerated urgency around model oversight, how LLMs mirror user bias, and why automation-first approaches to testing may be key to unlocking trust at scale.
Tune in for a wide-ranging discussion on responsible AI, emergent failure modes, and what it takes to make testing as intuitive and indispensable as model training.
MLOps London Meetup: https://www.meetup.com/mlopslondon/
Learn more about Etiq AI: https://www.etiq.ai/