Debugging AI Products: From Data Leakage to Evals with Hamel Husain

Listen to this episode on: Spotify | Apple Podcasts
How do you know if your AI product is actually any good? Hamel Husain has been answering that question for over 25 years. As a former machine learning engineer and data scientist at Airbnb and GitHub (where he worked on research that paved the way for GitHub Copilot), Hamel has spent his career helping teams debug, measure, and systematically improve complex systems.
In this episode, Hamel joins Teresa Torres to break down the craft of error analysis and evaluation for AI products. Together, they trace his journey from forecasting guest lifetime value at Airbnb to consulting with startups like Nurture Boss, an AI-native assistant for apartment complexes. Along the way, they dive into:
- Why debugging AI starts with thinking like a scientist
- How data leakage undermines models (and how to spot it)
- Using synthetic data to stress-test failure modes
- When to rely on code-based assertions vs. LLM-as-judge evals (see the sketch after this list)
- Why your CI/CD suite should always include broken cases
- How to prioritize failure modes without drowning in them
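As a rough illustration of the code-based assertion vs. LLM-as-judge distinction mentioned above (this sketch is not from the episode; the function and prompt names, and the apartment-leasing scenario, are hypothetical):

```python
# Illustrative sketch: two styles of eval for an AI assistant's reply.
# All names here (check_reply_mentions_tour_time, JUDGE_PROMPT) are hypothetical.

def check_reply_mentions_tour_time(reply: str) -> bool:
    """Code-based assertion: cheap, deterministic, good for objective checks."""
    return "tour" in reply.lower() and any(ch.isdigit() for ch in reply)

# LLM-as-judge: a prompt you would send to a separate model when the criterion
# is subjective (tone, helpfulness) and hard to capture with a simple rule.
JUDGE_PROMPT = """You are grading an AI leasing assistant's reply.
Reply: {reply}
Question: Does the reply clearly confirm a tour date and time in a friendly tone?
Answer PASS or FAIL, then give one sentence of reasoning."""

if __name__ == "__main__":
    reply = "You're all set! Your tour is booked for Friday at 3 pm."
    print("assertion passed:", check_reply_mentions_tour_time(reply))
    print(JUDGE_PROMPT.format(reply=reply))
```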
Whether you’re a product manager, engineer, or designer, this conversation offers practical, grounded strategies for making your AI features more reliable—and for staying sane while you do it.
Show Notes
Guest: Hamel Husain
AI products and problems discussed:
- GitHub Copilot
- Forecasting Airbnb Guest Growth
- Nurture Boss
Resources & Links
- Hamel’s blog on AI evals
- AI Evals for Engineers and PMs course on Maven (Get 35% off with my affiliate link)
Chapters
00:00 Introduction to Hamel Husain
00:34 Challenges in AI Consulting
02:00 Machine Learning Fundamentals
04:47 Debugging Machine Learning Models
05:00 Case Study: Airbnb's Guest Growth
08:51 Understanding Machine Learning Models
18:35 Introduction to Nurture Boss
25:40 Building AI Products with Synthetic Data
41:20 Connecting Machine Learning to Error Analysis
42:28 Real-World Example: Text Message Errors
44:15 Prioritizing and Documenting Errors
45:59 Continuous Improvement and Iteration
58:08 Using Synthetic Data for Evaluation
01:08:42 Avoiding Overfitting in Evaluations
01:19:28 Practical Tips for Error Analysis
01:25:10 Final Thoughts and Resources
Full Transcript
Podcast transcripts are only available to paid subscribers.