
In this episode, we explore a quiet but profound challenge emerging across the artificial intelligence landscape: model upgrades that no longer behave like software updates. Newer versions of large language models often shift reasoning patterns, change output formats, and break carefully designed workflows, leaving organisations struggling to maintain consistency, trust, and reproducibility.
Drawing from real-world evidence and industry research, this narrative uncovers why behaviour changes between versions are inevitable, why backward compatibility is nearly impossible, and why engineering discipline — not just better models — will determine who succeeds in the era of agentic systems.
A deep dive into a problem every AI team will face, and one that will shape the future of intelligent systems.
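One concrete form the engineering discipline mentioned above can take is pinning an exact model version and contract-testing outputs before accepting them, so format drift between versions fails loudly instead of silently corrupting a workflow. A minimal sketch in Python (the version string, field names, and simulated response are hypothetical, not from any specific provider's API):

```python
import json

# Pin an exact, dated model version rather than an alias like "latest",
# so an upstream upgrade cannot silently change behaviour underneath you.
PINNED_MODEL = "example-model-2024-06-01"  # hypothetical version string

# The structural contract this workflow depends on.
EXPECTED_KEYS = {"answer", "confidence"}

def validate_response(raw: str) -> dict:
    """Reject outputs whose structure has drifted between model versions."""
    data = json.loads(raw)
    missing = EXPECTED_KEYS - data.keys()
    if missing:
        raise ValueError(f"output format drifted; missing keys: {missing}")
    if not isinstance(data["confidence"], (int, float)):
        raise TypeError("confidence must be numeric")
    return data

# Simulated model output standing in for a real API call:
result = validate_response('{"answer": "42", "confidence": 0.9}')
print(result["answer"])
```

Checks like this, run as regression tests against a fixed prompt suite whenever the pinned version changes, turn an invisible behaviour shift into an explicit, reviewable failure.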
By Naveen Balani · 3.2 (55 ratings)
