Too many teams are building AI applications without truly understanding why their models fail. Instead of jumping straight to LLM evaluations, dashboards, or vibe checks, how do you actually fix a broken AI app?
In this episode, Hugo speaks with Hamel Husain, longtime ML engineer, open-source contributor, and consultant, about why debugging generative AI systems starts with looking at your data.
In this episode, we dive into:
If you're building AI-powered applications, this episode will change how you approach debugging, iteration, and improving model performance in production.
LINKS
Vanishing Gradients on Lu.ma
Building LLM Applications for Data Scientists and SWEs, Hugo's course on Maven (use code VG25 for 25% off)
Hugo is also running a free lightning lesson next week on LLM Agents: When to Use Them (and When Not To)