Your design team is excited about a new signup process. They are convinced it will increase registrations.
Your sales team keeps asking for the same feature over and over again. They think it’s responsible for lost sales since your biggest competitor features it prominently.
Your product manager wants to increase the rate at which you email your users; she thinks it will increase return visits.
As a product leader, you face these types of scenarios and many more every day. If you bring an experimental mindset to your work, you should be intuitively translating these suggestions into hypotheses:
- Redesigning the signup process will increase registrations.
- Adding this feature will increase sales.
- Sending more email will increase return visits.
To make these hypotheses testable, you need to define the 5 components of a good hypothesis, including estimating the impact of the change.
Shifting to an experimental mindset means turning suggestions into hypotheses.
Connect the Expected Impact to the Desired Change
If your design team thinks a redesign will increase registrations, you need to ask them why. Have they identified specific problems that they are attempting to resolve or are they merely moving pixels around?
Same with your sales team. Do they understand why the missing feature is leading to lost sales? If not, you need to do the work to understand why.
Does your product manager have a clear rationale for why sending more email will drive return visits? Has she identified a clear use-case that she is hoping to address?
It’s easy to generate product ideas. It’s much harder to understand the impact you expect each idea to have.
But it’s important that you do so.
If you don’t, you can waste countless hours testing ideas that don’t matter.
Make the button green. Add saved searches. Make the caption bold. Shorten the sign-up process. Reduce page load times. Allow people to edit their profiles. Integrate Google Docs. Offer an API.
The list is infinite.
The only way to manage the chaos is to take a step back and identify why each change matters. For each idea, ask:
- Why does this matter?
- What impact do you expect this change to have?
- Why do you think that?
Not only is this good product management, but as you run more experiments it will help reduce false positives – changes that look like winners, but aren’t.
Three Strategies for Estimating How Much Impact
In the 5 components of a good hypothesis, you learned that it’s not enough to know the why behind the expected impact; you also need to draw a line in the sand. You need to set a threshold. You need to answer, “How much of an impact do you expect to see?”
This can be one of the more challenging aspects of defining a good hypothesis.
1. Start with your baseline.
If you are trying to improve your conversion rate, know what your conversion rate is today.
If you are trying to improve customer satisfaction, know what your customer satisfaction is today.
A 50% increase is much easier when your conversion rate is 10% than when it is 40%, and it’s impossible when it is 80%.
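To see why the baseline matters, here is the arithmetic spelled out (the baseline rates below are illustrative; yours comes from your own analytics):

```python
# A 50% relative increase applied to different baseline conversion rates.
# Illustrative numbers only; a rate can never exceed 100%.
for baseline in [0.10, 0.40, 0.80]:
    target = baseline * 1.5
    feasible = target <= 1.0
    print(f"baseline {baseline:.0%} -> target {target:.0%} "
          f"({'feasible' if feasible else 'impossible'})")
```

Lifting 10% to 15% means winning over 5 more visitors out of every 100; lifting 40% to 60% means winning over 20 more, a far harder task; and 80% times 1.5 is 120%, which no redesign can deliver.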
Start with your baseline before estimating how much impact a product change will have.
If you don’t have access to strong product analytics and you are planning to use a tool like Optimizely or Visual Website Optimizer to run an A/B test, you can first run an A/A test, where both versions are your current version. This is a quick way to get your current baseline.
Similarly, if you are running a usability test on a new design, run participants through the same task on your current design so that you can make a better judgment about which is better.
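An A/A test gives you the raw numbers for your baseline. A minimal sketch of turning those numbers into a baseline estimate, using entirely hypothetical visitor and sign-up counts:

```python
import math

# Hypothetical counts from the two arms of an A/A test.
# Both arms show the current version, so we can pool them.
visitors_a, signups_a = 5120, 498
visitors_b, signups_b = 5087, 511

visitors = visitors_a + visitors_b
signups = signups_a + signups_b
rate = signups / visitors
# Rough 95% interval via the normal approximation
stderr = math.sqrt(rate * (1 - rate) / visitors)
low, high = rate - 1.96 * stderr, rate + 1.96 * stderr
print(f"baseline conversion: {rate:.1%} (95% CI {low:.1%}-{high:.1%})")
```

The interval is as useful as the point estimate: it tells you how much day-to-day noise to expect before you credit any change to your redesign.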
2. Look at Comparable Product Changes
Find similar changes that you have made in the past and understand what impact they had on your product.
Over time, you’ll build up your knowledge base of how much impact different types of product changes tend to have.
If you are optimizing subject lines, look at your past subject line experiments. If you are adding a new feature, look at how similar features performed in the past.
If what you are doing is truly brand new, you might not be able to find comparable changes. But before you give up, move up the levels of product analysis. If you have never released a feature like the one you want to test, look for past experiments that tested the same value proposition.
3. Assess How Much Impact Makes the Investment Worthwhile
Finally, you can, and always should, ask yourself, “How much impact do you need to see to make this change worth it?”
It’s easy to think any increase is worth it.
If you are redesigning your sign up process, then any increase in sign ups is an indicator that you should invest in the redesign. You might draw your line in the sand at one additional sign up.
However, this doesn’t take into account the expense of investing in the redesign, nor does it account for opportunity cost.
Every change, no matter how small, comes at a price. It’s not just the first-time investment required to implement the redesign, but also the ongoing expense of maintaining it over time.
It includes the time and effort your customers will spend to learn the new way of doing things.
And it includes the investment you make in defining the change.
It also includes the opportunity costs of other changes that might have more impact.
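You can make this cost accounting concrete with a rough break-even calculation. Every number below is made up for illustration; the point is the shape of the math, not the figures:

```python
# Rough break-even sketch: how big a lift in sign-ups does a redesign
# need before it covers its own costs? All inputs are hypothetical.
build_cost = 40_000          # one-time design + engineering cost ($)
annual_maintenance = 8_000   # ongoing upkeep per year ($)
value_per_signup = 25        # estimated value of one new sign-up ($)
monthly_signups = 2_000      # current sign-ups per month (baseline)

first_year_cost = build_cost + annual_maintenance
# Relative lift needed for a year of extra sign-ups to cover first-year costs
required_lift = first_year_cost / (value_per_signup * monthly_signups * 12)
print(f"break-even lift: {required_lift:.1%}")  # -> 8.0%
```

Under these assumptions, one additional sign up is nowhere near a meaningful threshold; the redesign has to lift sign ups by roughly 8% before it even pays for itself.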
When you think even one additional sign up makes this change worth it, you are framing your decision as a “whether or not” decision. You are asking yourself whether or not this redesign is better than your current design.
But that’s the wrong question to ask.
Instead, you want to ask, “how can you redesign your sign up process to maximize new sign ups?”
If you frame your question this way, one additional sign up hardly seems worth it.
It’s easy to think an incremental improvement is better than no improvement at all. This is dangerous thinking.
The cost of an incremental improvement is large: it’s all the other things you could be building that might have a much larger impact. The benefit, meanwhile, is small.
On the other hand, the cost of an aggressive line in the sand is that your hypothesis might fail. But the benefit of an aggressive line in the sand is that it will motivate you to find a new design that clears your threshold.
View a failed hypothesis as a challenge. This one failed. But there is another change that will clear your threshold. Go find it!
To estimate impact: Get your baseline, find comparable changes, ask how much impact makes it worth it.
Remember This
- Always have a clear reason for why your proposed change will have the desired impact.
- To estimate the expected impact:
- Start with your baseline.
- Look for comparable product changes.
- Assess how much impact makes it worthwhile.
Do you want to keep investing in your experimental mindset? Subscribe to the Product Talk mailing list.