Prompt Change
What is a prompt change?
A prompt change is a modification to the instructions given to a large language model (LLM) to improve its output quality or behavior. In AI product development, teams iterate on their prompts, adjusting the instructions and then testing whether each adjustment actually improves how the AI responds.
How do teams test prompt changes?
Teams test prompt changes through evaluations (evals) to determine whether modifications actually improve the AI's performance. The process creates a fast feedback loop:
- Make the change — Modify the prompt instructions
- Run evals — Test the new prompt against evaluation criteria
- Decide — Release the change if it improves performance, or discard it if it doesn't
This data-driven approach ensures that prompt changes are making the AI product better rather than just different. By running evals before releasing changes, teams can confidently improve their AI products without guessing about impact.
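In code, this loop can be as small as a few functions. The sketch below is a minimal, illustrative version: the prompts, the eval cases, the `call_llm` stub, and the pass/fail grader are all hypothetical placeholders, not a real product's setup, but the shape of the loop — score the current prompt, score the candidate, release only if it does better — is the same.

```python
# Minimal sketch of the eval-driven feedback loop for a prompt change.
# Everything here (prompts, eval cases, grading rule) is a placeholder.

CURRENT_PROMPT = "You are an interview coach. Give one piece of feedback."
CANDIDATE_PROMPT = (
    "You are an interview coach. Give one specific, actionable piece of feedback."
)

# A tiny eval set: inputs paired with a concept a good response should mention.
EVAL_CASES = [
    {"input": "I asked the customer what features they want.", "must_mention": "behavior"},
    {"input": "I pitched my idea for the first ten minutes.", "must_mention": "listening"},
]

def call_llm(prompt: str, user_input: str) -> str:
    """Stand-in for a real LLM call; in practice this would hit your model provider."""
    return f"Coach feedback on: {user_input}"

def grade(response: str, case: dict) -> bool:
    """Toy grader: passes if the response mentions the expected concept."""
    return case["must_mention"] in response.lower()

def run_evals(prompt: str) -> float:
    """Return the fraction of eval cases the prompt passes."""
    results = [grade(call_llm(prompt, case["input"]), case) for case in EVAL_CASES]
    return sum(results) / len(results)

# Make the change, run evals on both versions, then decide.
baseline_score = run_evals(CURRENT_PROMPT)
candidate_score = run_evals(CANDIDATE_PROMPT)

if candidate_score > baseline_score:
    print(f"Release the change: {candidate_score:.0%} vs {baseline_score:.0%}")
else:
    print(f"Discard the change: {candidate_score:.0%} vs {baseline_score:.0%}")
```

In a real setup, `call_llm` would call the model provider, the grader would often be a rubric or an LLM-as-judge, and the eval set would be large enough to trust the comparison.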
What other changes work alongside prompt changes?
Prompt changes are one of several experimental variables that teams iterate on to optimize AI product quality:
- Model changes — Switching to a different LLM or model version
- Temperature changes — Adjusting how creative or deterministic the responses are
- Chunking strategy changes — Modifying how content is broken up and processed
All of these changes can be tested using the same eval-driven feedback loop. Prompt changes are often the most accessible way to improve AI product quality without changing underlying models or infrastructure.
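Because all of these variables run through the same loop, teams often bundle them into a single experiment configuration so that a prompt change, a model swap, or a temperature tweak is just a different config passed to the same evals. The sketch below shows one way to do that; the field names and values are illustrative assumptions, not tied to any particular provider or framework.

```python
# Illustrative sketch: one config object holds all the experimental variables,
# so any single change can be tested with the same eval loop.
from dataclasses import dataclass, replace

@dataclass
class ExperimentConfig:
    prompt: str          # the instructions given to the LLM
    model: str           # which model or version to call
    temperature: float   # higher = more creative, lower = more deterministic
    chunk_size: int      # how content is broken up before it reaches the model

baseline = ExperimentConfig(
    prompt="You are an interview coach.",
    model="example-model-v1",
    temperature=0.7,
    chunk_size=500,
)

# A prompt change is just one field swapped; everything else stays fixed,
# which keeps the comparison between baseline and candidate fair.
candidate = replace(
    baseline,
    prompt="You are an interview coach. Give one specific, actionable tip.",
)

print(baseline)
print(candidate)
```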
Learn more:
- Behind the Scenes: Building the Product Talk Interview Coach
- Building My First AI Product: 6 Lessons from My 90-Day Deep Dive
- How I Designed & Implemented Evals for Product Talk's Interview Coach
Related terms: