Statistical Significance

What is statistical significance?

Statistical significance represents the likelihood that the difference you see between two variations is a real difference and not just chance. The generally accepted standard is 95% confidence, which means accepting a 5% risk of a false positive: a result that looks like a real difference but is actually just noise.

If you aren't checking for statistical significance on your A/B tests, you risk implementing changes that have no real impact or, worse, a negative impact that your test mistakenly reported as a win. This is why understanding statistical significance is essential for making good product decisions.

How does statistical significance work?

Think about flipping a coin. You have a 50% chance of getting heads and a 50% chance of getting tails. But when you flip a coin ten times, you don't always get 5 heads and 5 tails. That's because probability only plays out over a large number of trials. The more times you flip the coin, the closer to 50/50 you'll get.
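
You can see this for yourself with a quick simulation. Here's a minimal Python sketch that flips a fair coin at increasing sample sizes (the sample sizes themselves are arbitrary):

```python
import random

def heads_fraction(num_flips: int) -> float:
    """Flip a fair coin num_flips times and return the fraction of heads."""
    heads = sum(random.random() < 0.5 for _ in range(num_flips))
    return heads / num_flips

# Small samples swing wildly; large samples settle near 50%.
for n in (10, 100, 10_000, 1_000_000):
    print(f"{n:>9} flips -> {heads_fraction(n):.2%} heads")
```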

The same problem occurs with A/B tests. Because you're only looking at a sample, not every user who will ever see the change, the conversion rate in your sample might be the equivalent of flipping 10 heads in a row, and you might draw incorrect conclusions from your data.

Statistical significance tells you how confident you can be that your results reflect a real difference and not just chance. Most researchers want to see 95% confidence before concluding that results are significant and can be trusted. Even at a 95% confidence level, there's still a 5% chance the difference in conversion rate was due to chance.
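
To make the 95% threshold concrete, here is a minimal Python sketch of a two-proportion z-test, one common way to compare conversion rates (the traffic and conversion numbers are made up for illustration):

```python
from statistics import NormalDist

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference in conversion rate
    between variation A and variation B (two-proportion z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical results: 200/4,000 conversions for A, 250/4,000 for B.
p = ab_test_p_value(200, 4000, 250, 4000)
print(f"p-value: {p:.4f}")
print("significant at 95% confidence" if p < 0.05 else "not significant")
```

Tools differ in the exact test they run, so treat this as an illustration of the idea rather than a reimplementation of any particular product.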

What should you know about your testing tools?

Testing tools often make statistical significance seem simple, but they can be misleading. Different tools draw the line for statistical significance at different confidence levels, some as low as 80%. At an 80% threshold, roughly 1 in 5 tests of a change with no real effect will still be declared a winner.
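
A quick simulation shows why that matters. The sketch below runs many simulated A/A tests, where both variations have the same true conversion rate, so every "winner" is a false positive, and counts how often each confidence threshold gets fooled (the traffic numbers and conversion rate are assumptions):

```python
import random
from statistics import NormalDist

def simulated_aa_test(n: int = 2000, rate: float = 0.05) -> float:
    """Simulate an A/A test: both variations share the same true
    conversion rate, so any 'significant' result is a false positive.
    Returns a two-sided p-value from a two-proportion z-test."""
    conv_a = sum(random.random() < rate for _ in range(n))
    conv_b = sum(random.random() < rate for _ in range(n))
    pooled = (conv_a + conv_b) / (2 * n)
    se = (pooled * (1 - pooled) * (2 / n)) ** 0.5
    z = ((conv_b - conv_a) / n) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

random.seed(1)
trials = 1000
p_values = [simulated_aa_test() for _ in range(trials)]
for confidence in (0.80, 0.95):
    false_positives = sum(p < (1 - confidence) for p in p_values)
    print(f"{confidence:.0%} threshold: "
          f"{false_positives / trials:.1%} false positives")
```

With enough trials, the 80% threshold flags roughly 20% of these no-difference tests as winners, while the 95% threshold flags roughly 5%.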

Know your tools. Understand where they draw the line for statistical significance. Know whether they're tracking actions versus people, visits versus sessions. Your testing tools can dump out data, but they can't tell you what that data means. This is where an analytical mindset is invaluable.

Many product managers and designers are strong analytical thinkers, but it never hurts to have engineers weigh in on experiment design and results interpretation. Their engineering training often gives them a stronger grounding in statistics and scientific thinking.

Learn more:
- Just Enough Statistics to Get the Product Job Done
- The 14 Most Common Hypothesis Testing Mistakes Product Teams Make (And How to Avoid Them)

Related terms:
- Experiments
- Assumption Testing
- Hypothesis

Last Updated: October 25, 2025