
Greg Whalen, CTO of Prove AI, explains why many enterprises are stalling with generative AI by treating it like traditional software and postponing the hard parts like observability, debugging, and governance. He breaks down what “observability” actually means in AI systems, why outcome-based metrics matter more than chasing black-box explanations, and how Prove AI helps teams collect the right telemetry to reach production safely. The conversation also explores agentic workflows, human oversight, and why technical leaders need to change how their organizations operate if they want gen AI to deliver real value.
See omnystudio.com/listener for privacy information.
By Phillip Lanos, Jordan French