Building Alyx: How Arize AI Dogfooded Its Way to an Agentic Future

Listen to this episode on: Spotify | Apple Podcasts
What does it really take to build an AI agent inside an AI platform—especially when you’re using that same platform to build the agent?
In this episode of Just Now Possible, Teresa Torres talks with SallyAnn DeLucia (Director of Product at Arize) and Jack Zhou (Staff Engineer at Arize) about the journey of building Alyx, their AI agent designed to help teams debug, optimize, and evaluate AI applications.
They share the scrappy beginnings—Jupyter notebooks, hacked-together web apps, and weekly dogfooding sessions with their customer success team—and the hard-earned lessons about evals, tool design, and how to prioritize early skills. Along the way, you’ll hear how cross-functional experience, intuition-building, and customer insight shaped Alyx into a product that’s now central to the Arize platform.
If you’ve ever wondered how to move from vibe checks and one-off prototypes to systematic improvement in your AI product, this episode is for you.
Show Notes
Guests:
- SallyAnn DeLucia, Director of Product, Arize
- Jack Zhou, Staff Engineer, Arize
In this episode, we cover:
- What tracing, observability, and evals really mean in GenAI applications
- How Arize used its own platform to build Alyx, its AI agent
- The role of customer success engineers in surfacing repeatable workflows
- Why early prototyping looked like messy notebooks and hacked-together local apps
- How dogfooding shaped Alyx’s evolution and built confidence for launch
- Why evals start messy, and how Arize layered evals across tool calls, sessions, and system-level decisions
- The importance of cross-functional, boundary-spanning teams in building AI products
- What’s next for Alyx: moving from “on rails” workflows to more autonomous, agentic planning loops
Resources & Links
- Arize AI — Sign up for a free account and try Alyx
- Arize Blog — Lessons learned from building AI products
- Maven AI Evals Course — The course Teresa took to learn about evals (Get 35% off with Teresa’s affiliate link)
- Cursor — The AI-powered code editor used by the Arize engineering team
- Datadog — For understanding application traces
- OpenAI GPT Models — GPT-3.5, GPT-4, and newer models used in early and current versions of Alyx
- Jupyter Notebooks — A tool for combining code, data, and notes, used in Arize's prototyping
- Axial Coding Method by Hamel Husain — A framework for analyzing data and designing evals
Chapters
00:00 Introduction to SallyAnn and Jack
01:08 Overview of Arize.ai and Its Core Components
01:44 Deep Dive into Tracing, Observability, and Evals
03:56 Introduction to Alyx: Arize's AI Agent
04:15 The Genesis and Evolution of Alyx
08:51 Challenges and Solutions in Building Alyx
24:33 Prototyping and Early Development of Alyx
26:22 Exploring the Power of Coding Notebooks
26:51 Early Experiments with Alyx
27:59 Challenges with Real Data
29:20 Internal Testing and Dogfooding
31:55 The Importance of Evals
35:16 Developing Custom Evals
43:09 Future Plans for Alyx
47:59 How to Get Started with Alyx
Full Transcript
Podcast transcripts are only available to paid subscribers.