
Hugo Bowne-Anderson, Independent Data & AI Scientist, joins us to tackle why most AI applications fail to make it past the demo stage. We'll explore his concept of Evaluation-Driven Development (EDD) and how treating evaluation as a continuous process—not just a final step—can help teams escape "Proof-of-Concept Purgatory." How can we build AI applications that remain reliable and adaptable over time? What shifts are happening as boundaries between data, ML, and product development collapse? From practical testing approaches to monitoring strategies, this episode offers essential insights for anyone looking to create AI applications that deliver genuine business value beyond the initial excitement.
Rating: 4.8 (5,454 ratings)