


In this episode, we explore a quiet but profound challenge emerging across the artificial intelligence landscape: model upgrades that no longer behave like software updates. Newer versions of large language models often shift reasoning patterns, change output formats, and break carefully designed workflows, leaving organisations struggling to maintain consistency, trust, and reproducibility.
Drawing on real-world evidence and industry research, this narrative uncovers why behaviour changes between versions are inevitable, why backward compatibility is nearly impossible, and why engineering discipline, not just better models, will determine who succeeds in the era of agentic systems.
A deep dive into a problem every AI team will face, and one that will shape the future of intelligent systems.
By Naveen Balani
3.2 (55 ratings)
