
Too many teams are building AI applications without truly understanding why their models fail. Instead of jumping straight to LLM evaluations, dashboards, or vibe checks, how do you actually fix a broken AI app?
In this episode, Hugo speaks with Hamel Husain, longtime ML engineer, open-source contributor, and consultant, about why debugging generative AI systems starts with looking at your data.
In this episode, we dive into:
If you're building AI-powered applications, this episode will change how you approach debugging, iteration, and improving model performance in production.
LINKS
Vanishing Gradients on Lu.ma
Building LLM Applications for Data Scientists and SWEs, Hugo's course on Maven (use code VG25 for 25% off)
Hugo is also running a free lightning lesson next week on LLM Agents: When to Use Them (and When Not To)
By Hugo Bowne-Anderson
