On Tuesday, we looked at how to measure the impact of each user story, both the effectiveness of the feature's mechanics and the story's impact on our overall product goals. Today, we'll look at how to predict the impact of a story before implementing it.
Predicting the impact of a story has two key benefits. First, it helps us prioritize which stories to build. Second, it surfaces the assumptions behind what we are building. This second benefit is critical to learning, over time, what is likely to be effective. Let's look at each in turn.
Expected Impact Can Help You Prioritize
If you have an expected impact for each story in your backlog, then you can use this information to help prioritize your backlog. Many other factors come into play, which we aren’t going to get into here, but understanding what you can expect from each story is extremely informative in the prioritization process.
Predicting Expected Impact Helps You Surface Underlying Assumptions
But don't kid yourself: expected impact will be wrong more often than it is right. No matter how much experience you have with different types of development and feature sets, the expected impact is going to be different in each and every context. So why predict impact if the numbers aren't going to be reliable?
This brings us to our second benefit. By predicting the impact of each user story you are capturing ahead of time what you thought the benefit might be. After you release, you can compare what happened with what you thought might happen. There will be gaps. These gaps are exactly where the learning happens.
Let’s return to our event site example to illustrate. Remember, we are trying to increase the number of events created on the site and we have the following user stories in our backlog:
- As an event host, I am able to import my address book so that it is easier for me to invite my contacts to my events.
- As an event host, I am able to look up a venue address, so that I am sure to get the location correct.
Suppose contact import is your most requested feature, and you also know that people tend to abandon the event creation process when they get to invitations. Both are pretty good indicators that this is a valuable story. But how valuable?
You also know that people get caught up on the venue location. You aren't hearing feedback about this, but you notice that many events don't have locations and thus are not considered completed events.
Suppose only 44% of people who start the event creation process finish.
Which story will have a bigger impact?
For the import contacts feature, you want to make two predictions:
- what percentage of users will import their contacts
- for users who import their contacts, how this will impact the average number of events created per user
Together, these will tell you the overall impact on event creation. Suppose you guess that 20% of users will import their contact list and that users who import their contacts will be 50% more likely to create events, with an overall expected impact of a 10% increase in event creation across all users.
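The arithmetic behind that guess is just the adoption rate multiplied by the lift adopters see. A minimal sketch, using the made-up percentages from the example (not real data):

```python
def expected_impact(adoption_rate: float, lift: float) -> float:
    """Overall expected lift in event creation across all users:
    the fraction of users who adopt the feature, times the extra
    events those adopters create."""
    return adoption_rate * lift

# Contact-import guesses: 20% of users import their contacts,
# and importers create 50% more events.
overall = expected_impact(0.20, 0.50)
print(f"{overall:.0%}")  # 10% overall increase in event creation
```

The same two-number structure works for any feature: one number for reach, one for depth of effect.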
For the venue location story, you need to predict:
- what percentage of event creators will use the venue location lookup
- for users who use the venue location lookup, how this will impact the average number of events created per user
Again, together these will tell you the overall impact on event creation. Suppose you guess that 50% of event creators will use the venue lookup and that it will increase event creation on average by 30% for those users, with an overall expected impact of a 15% increase in event creation across all users.
Now let’s be honest about the fact that you are guessing here. You really have no idea what impact either of these features will have. But that’s okay.
The goal is to identify the gaps between your guesses and reality.
Suppose you build both features and you get the following results. For contact imports, you find that only 3% of users import their contacts, but for those who did, event creation went up by 140%, resulting in a 4.2% increase in event creation. And suppose 85% of users used venue lookup, but it only increased event creation by 10% for those users, increasing overall event creation by 8.5%.
What can you learn from this?
First, you can see that a feature that has a small impact but is widely adopted has a bigger overall impact than a feature that has a big impact but is not widely adopted. This is exactly why adoption is critical.
But second, since you bothered to make predictions, you can start to ask yourself about the gaps. You thought that 20% of people would import contacts. But only 3% did. Why? Do people not know it’s there? Is there a usability problem? Are there privacy concerns? The more you dive into why things didn’t turn out as you expected the more you will learn. And of course, you will want to ask these questions for each of your predictions.
You might argue that you would do this investigation without predictions. But this isn’t likely. Even for the most disciplined, if you see a 140% increase in event creation, you are going to call that a success. After all, it’s a big number. You might focus on trying to increase the number of people who import contacts. And that might be a good strategy if the feature is simply not visible enough or if there are usability issues to tackle. But if there are legitimate privacy concerns, then optimization isn’t going to get you very far.
Predictions help you see the gap and encourage you to ask why the gap occurred. If you don't predict impact ahead of time, you won't do this retrospective analysis. You'll miss the gaps.
Put it on paper. Identify the gaps. Use the gaps to learn. Use the learning to influence your next round of predictions. Iterate.
Do you take the time to predict expected impact? What have you learned from doing so?