Too many teams are building AI applications without truly understanding why their models fail. Instead of jumping straight to LLM evaluations, dashboards, or vibe checks, how do you actually fix a broken AI app?
In this episode, Hugo speaks with Hamel Husain, longtime ML engineer, open-source contributor, and consultant, about why debugging generative AI systems starts with looking at your data.
In this episode, we dive into:
If you're building AI-powered applications, this episode will change how you approach debugging, iteration, and improving model performance in production.
LINKS
Vanishing Gradients on Lu.ma
Building LLM Applications for Data Scientists and SWEs, Hugo's course on Maven (use code VG25 for 25% off)
Hugo is also running a free lightning lesson next week on LLM Agents: When to Use Them (and When Not To)