
In this episode, we explore a quiet but profound challenge emerging across the artificial intelligence landscape: model upgrades that no longer behave like software updates. Newer versions of large language models often shift reasoning patterns, change output formats, and break carefully designed workflows, leaving organisations struggling to maintain consistency, trust, and reproducibility.
Drawing from real-world evidence and industry research, this narrative uncovers why behaviour changes between versions are inevitable, why backward compatibility is nearly impossible, and why engineering discipline — not just better models — will determine who succeeds in the era of agentic systems.
A deep dive into a problem every AI team will face, and one that will shape the future of intelligent systems.
By Naveen Balani · 3.2 (55 ratings)