*This post is part of a series about making better product decisions.*

You probably estimate every day. How long will it take to build a given feature? What impact will it have on your customer base? How will it impact business outcomes?

It’s too bad you probably aren’t very good at it.

Don’t worry. You aren’t the only one. It turns out, we all suck at estimating. We just aren’t very good at predicting the future.

The Heath brothers, in *Decisive*, summarize research that can help us get better at estimating. Often when we estimate, we produce a single number. If you ask an engineer how long it will take to add a feature, they might answer 3 days. Or if you estimate the expected conversion rate of an email, you might end up with 3%.

Research, however, suggests that you can dramatically **improve your estimates by predicting a range**. What’s the high end? What’s the low end? Asking each question forces you to draw from different types of knowledge.

What’s the shortest amount of time it’s taken for an engineer to build this type of feature? What’s the longest? The future probably falls somewhere in that range. This technique can **improve your estimates by 70%**.

You can also ask: what’s the most likely outcome in this particular situation? What’s the most likely middle point? **Adding a best guess in the middle improved estimates by 98%.**
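As my own illustration (not from the post): once you have a low, a best guess, and a high, the classic PERT three-point formula is one standard way to fold that range back into a single planning number while still honoring the spread. The example numbers below are hypothetical.

```python
def pert_estimate(low: float, likely: float, high: float) -> float:
    """PERT three-point estimate: a weighted mean that favors the best guess."""
    return (low + 4 * likely + high) / 6

# Hypothetical example: an engineer says a feature will take
# 2 days at best, 3 days most likely, and 8 days at worst.
print(pert_estimate(2, 3, 8))  # ≈ 3.7 days, a more honest number than "3 days"
```

Notice that the long tail on the high end pulls the estimate above the naive "3 days" answer, which is exactly the effect the range exercise is trying to capture.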

You can extend this concept even further. Whenever you are trying to estimate something **spend some time doing a pre-mortem and a pre-parade**.

If you work in an Agile environment, you are well aware of postmortems. So what’s a pre-mortem? Pre-mortems force you to fast-forward in time and ask yourself: we built this feature and it was a disaster, what went wrong? Why did it take 10x as long? Why did it have zero impact on business outcomes? Why did it negatively influence user behavior? Pre-mortems help you tap into past experience to more accurately predict the low end of the range.

Pre-parades, on the other hand, ask you to fast-forward in time and ask yourself: we built this feature and it was an overwhelming success, what went right? Why were you able to reduce development time by 10x? Why did you quadruple the expected impact on business outcomes and customer behavior? Again, pre-parades help you apply your past experience to more accurately estimate the top end of the range.

So don’t just rely on a single estimate. The next time you find yourself having to estimate something (which I’m sure will be later today), take the time to run through this exercise and predict a range.

**Do you have experience predicting ranges? Please share in the comments.**

P.S. I took a 10-day digital sabbatical this month to have this great adventure.


melissaiz says

I particularly like the pre-mortem and pre-parade concepts you defined above. They’re not unlike some of the strategies described in Kill the Company by Lisa Bodell. She advocates getting people together to discuss what might happen that would negatively impact the company they work for, which leads to proactive actions that help minimize those risks. Great ideas!

Vinny Pasceri (@vinnypasceri) says

Great post, Teresa. Estimation is hard, and I’ve rarely seen a team effectively estimate work. The one exception was while I was at Redfin. One team in particular was hitting dates without removing scope, reducing quality, or adding resources. I’ve been an advocate of this methodology since then.

Read *How to Measure Anything* by Douglas W. Hubbard to get the full picture on estimating. His quick pitch is that you need the following:

– All estimators need to be calibrated. You do so with a calibration test (link below)

– Estimate with a range (high & low) with a 90% confidence interval

– Run a Monte Carlo simulation on the data to understand potential outcomes

Here’s an example on how to execute on this methodology:

1. Calibrate all the engineers and all other folks estimating work. A Redfin engineer created a calibration test based on Hubbard’s methodology here – http://markbiddlecom.github.io/estimation-calibration/calibration.html

2. Determine the unit for measurement. This could be calendar days or “dev days” (5 calendar hours a day). Explain this to the team. Also discuss that the high/low with 90% CI should include uncertainty.

3. Pick a chunk of work to estimate. Discuss with the team the scope and answer any questions about the work.

4. Ask the estimators to pick the low number. Once they are ready, ask them to share at the same time. Using hand signals or cards with the units is fine. If the team is properly calibrated, you won’t see much variance in this number. For example, with a team of 8, you’ll likely see 5 people with the same number, 2 people that are close, and 1 outlier.

5. Discuss the results. Typically, I focus on the high/low numbers or the outliers to make sure I understand *why* they believe that’s the right number. Discuss as a group. You may need to estimate that number again if the numbers are wildly off. Otherwise, the team just agrees to snap to a specific estimate.

6. Repeat for the high estimate.

7. Sequence the work, including dependencies.

8. Run the Monte Carlo simulation.

9. When the developers complete the work, make sure they track “actual” time and indicate if they were over or under. This will improve the model and will update the projected completion date range.
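As a rough sketch of step 8, here is one way the Monte Carlo step might look. The task list is hypothetical, and the distributional choices are my assumptions, not Vinny's or Hubbard's: I model each task's duration as normal, with the elicited 90% confidence interval spanning ±1.645 standard deviations around the midpoint. The linked spreadsheets and tools do this more carefully.

```python
import random

# Hypothetical per-task estimates: (low, high) in dev-days,
# each elicited from calibrated estimators as a 90% confidence interval.
tasks = [(2, 5), (1, 3), (4, 10), (2, 8)]

def sample_duration(low: float, high: float) -> float:
    """Sample one plausible duration for a task.

    Assumption: durations are normally distributed, with the 90% CI
    (low, high) spanning +/-1.645 standard deviations of the mean.
    """
    mean = (low + high) / 2
    sd = (high - low) / (2 * 1.645)
    return max(0.0, random.gauss(mean, sd))

def simulate(tasks, trials=10_000):
    """Sum sampled task durations per trial; return 10th/50th/90th percentiles."""
    totals = sorted(
        sum(sample_duration(lo, hi) for lo, hi in tasks)
        for _ in range(trials)
    )
    pct = lambda p: totals[int(p * trials)]
    return pct(0.10), pct(0.50), pct(0.90)

random.seed(42)
p10, p50, p90 = simulate(tasks)
print(f"10th pct: {p10:.1f}  median: {p50:.1f}  90th pct: {p90:.1f}")
```

The useful output is not a single date but a distribution: you can commit to the 90th percentile when the cost of slipping is high, and quote the median internally. Step 9's "actuals" feedback would then be used to recalibrate the estimators and the model.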

Resources for this:

– https://github.com/mjwade/spolsky-sheets

– https://github.com/FocusedObjective/FocusedObjective.Resources

– http://scrumage.com/blog/2015/09/agile-project-forecasting-the-monte-carlo-method/

– http://markbiddlecom.github.io/estimation-calibration/calibration.html

I’m interested to hear outcomes from any teams that have applied this methodology.

Teresa Torres says

Wow, Vinny, thanks for all of this. This looks like a great process for companies that have a real need for accurate estimates. However, I can’t help but wonder if we only ask our engineering teams for estimates because we want a false sense of certainty. I’m not convinced that even accurate estimates are helpful or worth the effort. But I know I’m in the minority on this one, so I’m glad you shared this process.