Vanishing Gradients

Episode 70: 1,400 Production AI Deployments


Listen Later

There’s a company that spent almost $50,000 because an agent went into an infinite loop and nobody noticed for a month.

It reported no failures, and apparently no one was monitoring the costs. To their credit, the team wrote it up in the database afterwards, with clear warnings: watch out for infinite loops. Watch out for cascading tool failures. Watch out for silent failures, where the agent reports success when it didn’t!
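One way to make that kind of runaway loop fail loudly instead of silently: give every agent run a hard budget on steps and spend. This is a hypothetical sketch (the class and limits are illustrative, not from the episode), assuming you can estimate a per-step cost:

```python
# Hypothetical guardrail sketch: cap agent iterations and spend so an
# unmonitored infinite loop halts with an error instead of quietly
# burning money for a month. Names and defaults are illustrative.

class BudgetExceeded(RuntimeError):
    """Raised when an agent run blows past its step or cost budget."""


class AgentBudget:
    def __init__(self, max_steps: int = 50, max_cost_usd: float = 25.0):
        self.max_steps = max_steps
        self.max_cost_usd = max_cost_usd
        self.steps = 0
        self.cost_usd = 0.0

    def charge(self, step_cost_usd: float) -> None:
        """Record one agent step; raise if either budget is exhausted."""
        self.steps += 1
        self.cost_usd += step_cost_usd
        if self.steps > self.max_steps or self.cost_usd > self.max_cost_usd:
            raise BudgetExceeded(
                f"halted after {self.steps} steps, ${self.cost_usd:.2f} spent"
            )
```

The point is less the exact numbers than the failure mode: an exception surfaces in logs and alerts, whereas a loop that "succeeds" on every iteration never does.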

We Discuss:

* Why the most successful teams are ripping out and rebuilding their agent systems every few weeks as models improve, and why over-engineering now creates technical debt you can’t afford later;

* The $50,000 infinite loop disaster and why “silent failures” are the biggest risk in production: agents confidently report success while spiraling into expensive mistakes;

* How ELIOS built emergency voice agents with sub-400ms response times by aggressively throwing away context every few seconds, and why these extreme patterns are becoming standard practice;

* Why DoorDash uses a three-tier agent architecture (manager, progress tracker, and specialists) with a persistent workspace that lets agents collaborate across hours or days;

* Why simple text files and markdown are emerging as the best “continual learning” layer: human-readable memory that persists across sessions without fine-tuning models;

* The 100-to-1 problem: for every useful output, tool-calling agents generate 100 tokens of noise, and the three tactics (reduce, offload, isolate) teams use to manage it;

* Why companies are choosing Gemini Flash for document processing and Opus for long reasoning chains, and how to match models to your actual usage patterns;

* The debate over vector databases versus simple grep and cat, and why giving agents standard command-line tools often beats complex APIs;

* What “re-architect” as a job title reveals about the shift from 70% scaffolding / 30% model to 90% model / 10% scaffolding, and why knowing when to rip things out may be the most important skill today.
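To make the 100-to-1 point above concrete, here is a hypothetical sketch of the “offload” tactic: verbose tool output goes to a scratch file, and only a short stub (with a pointer back to the full log) stays in the agent’s context. The function name and threshold are illustrative, not from the episode:

```python
# Hypothetical "offload" sketch: keep noisy tool output out of the
# context window by writing it to a scratch file and returning a stub.
import os
import tempfile


def offload_tool_output(output: str, max_context_chars: int = 200) -> str:
    """Return a context-friendly version of a tool's output.

    Small outputs pass through unchanged; large ones are written to a
    temp file and replaced by a pointer plus a short head excerpt.
    """
    if len(output) <= max_context_chars:
        return output  # small enough to keep inline

    fd, path = tempfile.mkstemp(suffix=".log", text=True)
    with os.fdopen(fd, "w") as f:
        f.write(output)

    head = output[:max_context_chars]
    return f"[output truncated; full {len(output)} chars at {path}]\n{head}"
```

Because the full output lands in an ordinary file, the same agent can later `grep` or `cat` it, which is exactly the command-line-tools-over-APIs pattern the episode discusses.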

You can also find the full episode on Spotify, Apple Podcasts, and YouTube.

You can also interact directly with the transcript here in NotebookLM. If you do, let us know what you find in the comments!

👉 Want to learn more about Building AI-Powered Software? Check out our Building AI Applications course. It’s a live cohort with hands-on exercises and office hours. Our final cohort starts March 10, 2026. Here is a 25% discount code for readers. 👈

Show Notes Links

* Alex Strick van Linschoten on LinkedIn

* Alex Strick van Linschoten on Twitter/X

* LLMOps Database

* LLMOps Database Dataset on Hugging Face

* Hugo’s MCP Server for LLMOps Database

* Alex’s Blog: What 1,200+ Production Deployments Reveal About LLMOps in 2025

* Previous Episode: Practical Lessons from 750 Real-World LLM Deployments

* Previous Episode: Tales from 400 LLM Deployments

* Context Rot Research by Chroma

* Hugo’s Post: AI Agent Harness - 3 Principles for Context Engineering

* Hugo’s Post: The Rise of Agentic Search

* Episode with Nick Moy: The Post-Coding Era

* Hugo’s Personal Podcast Prep Skill Gist

* Claude Tool Search Documentation

* Gastown on GitHub (Steve Yegge)

* Welcome to Gastown by Steve Yegge

* ZenML - Open Source MLOps & LLMOps Framework

* Upcoming Events on Luma

* Vanishing Gradients on YouTube

* Watch the podcast livestream on YouTube

* Join the final cohort of our Building AI Applications course in March, 2026 (25% off for listeners)




This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit hugobowne.substack.com
Vanishing Gradients, by Hugo Bowne-Anderson
