A few months ago, fellow Product Talk coach Hope Gurion and I sat down to discuss why there’s no single right way to do discovery.
Want to read the previous parts of the series? Find Part 1 here and Part 2 here.
In this third and final conversation in the series, we discussed two core principles of continuous discovery: why it’s essential to set up compare and contrast decisions and surface and test assumptions.
You can watch the recording above or read an edited version of the transcript below.
Teresa Torres: Hi, everyone. Welcome to “Why There’s No Single ‘Right’ Way to Do Product Discovery.” I’m Teresa Torres from Product Talk. I’m joined today by Hope Gurion. Hope, do you want to say hello and give a little introduction?
Hope Gurion: Hi, everybody. Thanks for joining. I am a former chief product officer and now coach and advise product leaders and teams. I’ve partnered with Teresa for a number of years and have found a lot of value in these methods. So I’m excited for all of your questions and to help all of you get even better at your discovery practice.
Teresa: Perfect. Thanks, Hope. And then, for those of you who don’t know me, I’ve been working as a product discovery coach for the last seven years, teaching cross-functional product teams how to do continuous interviewing, discover opportunities, and run rapid experiments and rapid prototyping to evaluate solutions, and I blog at Product Talk. Hope and I are joined today by Melissa Suzuno. She is my blog editor and also helps out with a variety of content marketing activities for Product Talk. Melissa, do you want to go ahead and say hello?
Melissa Suzuno: Hi, everyone. I’m excited to be here today and look forward to hearing all of your great questions that come in throughout the webinar.
Teresa: Great segue, Melissa. Let’s talk a little bit about how the webinar is going to work. We will be taking questions at the end of each topic and at the very end of the webinar. So if you have a question for Hope and me, please go ahead and submit it in the Q&A box.
This is the final session in a three-part series. We’ve been talking about why there’s no single right way to do product discovery.
Hope and I both coach product teams. Especially from the teams that are really eager to learn, we hear a lot of, “Am I doing it right?” And we also hear, “Is this method better than this method?” We see a lot of dogma in the industry around the one true way to do things. And what we’ve learned from working with teams is that there are actually a lot of ways to do this. What’s most important is that we find the right fit for the team so that it’s sustainable over time.
There’s no one ‘right’ way to do discovery. What’s most important is that we find the right fit for the team so that it’s sustainable over time. – Tweet This
Hope and I started thinking about how product leaders can help their teams adopt more continuous discovery practices and give them the flexibility to work in the way that works best for them. What are the key underlying principles that you should be looking for to evaluate whether you’re doing discovery well? That’s what we’ve been discussing in Part 1 and Part 2. In each webinar, we’ve tackled a couple of principles.
Recap from Parts 1 and 2
Teresa: I’m going to do a quick review of what we covered in Part 1 and Part 2. And then we’re going to dive right into Part 3.
In Part 1, we covered collaborative decision-making. The idea is to have a product trio—a product manager, a designer, and a tech lead—working and owning product decisions together. That’s the first principle.
The second principle is to externalize your thinking. We talked about opportunity solution trees, customer journey maps, experience maps, story mapping—there are lots of ways for teams to externalize their thinking so that they can align around it and stakeholders can follow their progress.
The third principle is being outcome-focused. We discussed the distinction between outcomes and outputs. How do we get teams to not just focus on shipping code, but to look at the impact of that code and the outcomes that are being driven? That was Part 1.
In Part 2, we dove into making sure your teams are discovering opportunities. We teach using customer interviews to discover opportunities. We know some people like Jobs to be Done as a way to do this. We know some people have the luxury of doing customer observations on a regular basis, but what are your teams doing to make sure that you’re constantly exploring customer needs, pain points, and desires?
The last thing we covered in Part 2 was talking about prioritizing in the opportunity space rather than the solution space. Rather than taking a whole bunch of solutions and ranking them against each other, comparing apples to oranges, it’s important to first make that strategic decision in the opportunity space and say which problems are most important for us to solve and which opportunities would have the biggest impact on our outcome.
Now we are going to go ahead and dive into the next few principles. Here’s how this is going to work. We’re going to introduce the first principle. Hope and I are going to talk about it for a few minutes. We’ll take a couple of your questions. Then we’re going to go on to the second principle. We’ll do the same thing. And then, in any remaining time that we have left, we’ll tackle any other questions.
Principle 6: Set up Compare and Contrast Decisions
Teresa: We are going to talk about compare and contrast decisions. This is a principle that I think not a lot of people think about when they think about discovery. It comes from decision-making research. Decades of research on decision-making show us that when we’re trying to solve a problem or meet a need, it’s better to consider more than one option. Product teams tend to get stuck on their favorite solution and they pursue one at a time.
Decades of research on decision-making show us that when we’re trying to solve a problem or meet a need, it’s better to consider more than one option. – Tweet This
Product teams often work on one customer problem without considering what else is out there, and we get really boxed in to what is often referred to as a “whether or not” decision. Should we build this idea or not? Should we solve this problem or not? Whereas, decision-making research tells us we’re actually better off setting up a compare and contrast decision. Hope, do you want to jump in and share a little bit about what you’ve seen with teams on this?
Hope: I think there is a spectrum. There’s an evolution. Sometimes when teams are first doing discovery, the fact that they’re considering more than one way to solve the problem is a great start, but it can still sort of devolve into, “How do we know whether or not we’ve made a good choice?” Or I see teams running endless experiments without clear success criteria for those experiments.
This principle is so impactful for product teams because when you’re explicit about how you’re going to judge one versus another, you can have more productive conversations, and you can be more focused on what you need to learn as quickly as possible. Framing decisions as compare and contrast can really help free teams from feeling like they’re in an endless loop of research and expecting some epiphany to just appear.
Framing decisions as compare and contrast can really help free teams from feeling like they’re in an endless loop of research and expecting some epiphany to just appear. – Tweet This
Teresa: There are a lot of ways to frame compare and contrast decisions. The visual that you’re looking at here is a tree structure that I like to use with teams. It’s called an opportunity solution tree. It helps to map out their path from an outcome all the way through to what to build. When we’re talking about compare and contrast decisions, the opportunity solution tree helps us think about where to compare and contrast.
If you’re working on an outcome, the first thing to compare and contrast is the opportunity space. What opportunities are available to us? If we were to try to drive this outcome, what customer needs, pain points, and desires should we be addressing? Instead, what we see in a lot of companies is teams tackling the first problem they hear about, or ping-ponging back and forth based on the last customer they talked to. That customer shared a need. You jumped to solve it. You talked to the next customer. You jumped to solve their need, instead of taking an inventory of the opportunity space, comparing and contrasting, and asking, “What’s the most important thing that we can be doing?”
Once we choose a target opportunity, we can also compare and contrast at the solution level. Hope talked about this a little bit. When we’re experimenting, if we’re only working with one solution, it’s hard to evaluate if our experiment results are good enough or not. If we’re comparing and contrasting solutions, though, we can ask a different question, which is, “Which of these looks most promising? Which looks best? Is there a clear front runner?” Hope, do you want to add anything there?
When we’re experimenting, if we’re only working with one solution, it’s hard to evaluate if our experiment results are good enough or not. – Tweet This
Hope: Sometimes, there’s an element of debate. Sometimes, there’s an element of desire to get to consensus. And sometimes, you’re just working with one opportunity or solution at a time. It will actually help you get to more decisive calls if you frame your criteria for choosing among these.
And if you’re going to debate or try to achieve consensus, it’s better to do it at that level—focusing on the criteria that help you compare one of these opportunities or one of these solutions versus another. Then you’ll be better set up to revisit those criteria when you actually have data to make an informed choice. Often that step is just missing from the way product teams—and, frankly, their stakeholders—are deciding. It’s not explicit for a lot of teams.
Teresa: That’s a really critical point here. Your outcome is helping you determine what’s the best opportunity to go after. It’s the one that’s going to have the biggest impact on your outcome. So can we agree on what the outcome is, and then agree on how we might assess those opportunities so that it’s easier to come to a decision?
And then, it’s the same when you’re exploring solutions. Have you chosen a target opportunity? Your evaluation criterion is how well those solutions address that target opportunity. And of course, we can answer that question with prototyping and experimenting. Why don’t we turn to a couple of questions? Melissa, what do you have for us?
Questions from the Audience: Compare and Contrast Decisions
Melissa: One of the questions that’s come up a couple of times is about the criteria that you consider. Could you define exactly what you mean by criteria or give an example of the types of criteria that you would consider?
Teresa: I think the ultimate criteria is your desired outcome. We talked a little bit in Part 1 about each team working towards an outcome. If you’re a consumer site and your primary customer metric is engagement, you might be looking at evaluating the opportunity space based on what’s going to drive engagement the most. There are a lot of ways to measure engagement. You might have teams focused on daily active users (DAU). Some teams even get more sophisticated and divide daily active users by monthly active users.
You might be looking at engagement as month-over-month subscription retention. How you’re defining that outcome becomes your ultimate criteria, but then as you work through the discovery process, it might get even more refined.
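To make the DAU/MAU idea concrete, here is a minimal sketch in Python. All of the numbers and variable names are invented for illustration; in a real team, the counts would come from your analytics store.

```python
# Hypothetical activity counts; in practice these come from your analytics store.
daily_active_users = [1200, 1350, 1100]  # DAU for three sample days
monthly_active_users = 5000              # distinct users active in the month

def stickiness(dau: float, mau: float) -> float:
    """DAU/MAU ratio: the fraction of a month's users who show up on a given day."""
    if mau <= 0:
        raise ValueError("MAU must be positive")
    return dau / mau

avg_dau = sum(daily_active_users) / len(daily_active_users)
print(f"Average DAU/MAU stickiness: {stickiness(avg_dau, monthly_active_users):.1%}")
# → Average DAU/MAU stickiness: 24.3%
```

The point isn’t the arithmetic; it’s that writing the metric down this explicitly forces the team to agree on exactly which definition of engagement is their evaluation criterion.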
You might be looking at how well the solution addresses a given opportunity. And that’s often a stepping stone toward your outcome. So we’ve identified a need. We think if we solve the need, it will drive the outcome.
So now, our criterion is: How well does the solution address the need? That criterion really depends on what the need is. If the team is discovering the need together and has a shared understanding of it, that gives you a jumping-off point for shared criteria. Hope, do you have any examples?
Hope: Yeah, I’ll give a couple of examples. These are when you’re at the solution level. A lot of people have worked in some sort of transactional, e-commerce website. In this case, you might be looking for ways that you can drive more e-commerce revenue, changing the purchase path, changing your product mix. There might be many paths that you’re considering, many opportunities to increase revenue from your customers.
Assuming you’ve picked the opportunity—let’s say it is streamlining the purchase path. You’ve learned in your discovery that it’s too cumbersome. It’s too complicated. It doesn’t have the payment methods I need. The things that you heard from your customers led you to believe that you need to now be investigating solutions for how to improve the checkout process.
If your outcome is revenue, you’re looking for something that shows you’ve actually increased purchases per visit, or you could be looking at checkouts per add-to-cart. It really depends on how you define success for your team. But what I have seen some teams do is create maybe three or four different designs and just run them in an A/B test looking for a winner.
Now on the one hand, they’re making a compare and contrast decision, because they’re waiting for a winner. But they haven’t explicitly said, “This is the line in the sand we’re drawing: it must increase, say, checkout rate by at least 5% or 10%, and it must not lower average order value.” Without explicit criteria like that, you can’t figure out which of your potential designs has the best chance of meeting your definition of success.
And so, once you’ve defined that explicitly, it actually will also help you think critically about your solutions. It will help you assess whether they are even on the right track to deliver so that you can get qualitative feedback or quantitative feedback to figure out if they’ve met your success criteria for that solution.
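One way to see what “explicit criteria” buys you is to write the rule down as code. This is a hypothetical sketch: the variant names, numbers, and the 5% lift threshold are all invented to mirror Hope’s example, not taken from a real team.

```python
from dataclasses import dataclass

@dataclass
class VariantResult:
    name: str
    checkout_rate: float    # checkouts / visits
    avg_order_value: float  # in dollars

def meets_criteria(control: VariantResult, variant: VariantResult,
                   min_lift: float = 0.05) -> bool:
    """A variant 'wins' only if it lifts checkout rate by at least min_lift
    (relative) AND does not lower average order value."""
    lift = (variant.checkout_rate - control.checkout_rate) / control.checkout_rate
    return lift >= min_lift and variant.avg_order_value >= control.avg_order_value

control = VariantResult("control", checkout_rate=0.040, avg_order_value=62.00)
candidates = [
    VariantResult("design_a", checkout_rate=0.043, avg_order_value=63.50),
    VariantResult("design_b", checkout_rate=0.046, avg_order_value=58.00),
]
winners = [v.name for v in candidates if meets_criteria(control, v)]
print(winners)  # design_b lifts checkout rate more, but fails the AOV guardrail
```

Notice that “just pick the biggest lift” and “pick the biggest lift that doesn’t hurt average order value” select different winners here, which is exactly the kind of debate that’s better settled before the test runs.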
Teresa: Let’s stick with this e-commerce example for a minute. A lot of people in the poll responded with, “We work with one problem or solution at a time.” So whether you’re explicitly outcome-focused or implicitly outcome-focused, you know generally you’re trying to increase shopping cart or order size. You know that’s really what the company cares about right now. A lot of teams start generating ideas and they just fill their backlog. They say, “Here’s what we’re going to do.”
One of the easiest ways to get to compare and contrast decisions—if you’re just working with one solution at a time—is to get in the habit of asking, “What else could we do?”
One of the easiest ways to get to compare and contrast decisions—if you’re just working with one solution at a time—is to get in the habit of asking, ‘What else could we do?’ – Tweet This
We have this great idea. It’s at the top of our backlog, so we’re about to start working on it. How do we bring in this idea of opportunity cost? Every time we do something, we have the opportunity cost of all the things that we can’t do.
How do we start to surface those so we can compare and contrast? We have feature A that we love and we think is a great idea. What else might we do? So you get feature B and feature C. And then you can do that at the opportunity level, too. If you talk to a customer and they tell you about a new problem or a pain point, and you’re really motivated to solve it, take a minute and just say, “What else are we hearing from customers? How do we compare and contrast? Should this really be the thing we work on next?”
Melissa: The next question is about how long to spend on each step. How long do you typically spend debating opportunities? How long on solutions? When do you move on from prioritizing the opportunity to the solution?
Teresa: That’s a hard question. Like many things in the product world, the answer is: It depends. One of the things I encourage teams to do is to really think about their decision points as two-way-door decisions, and then to quickly follow them up with experiments to see if they got it right.
I encourage teams to think about their decision points as two-way-door decisions, and then to quickly follow them up with experiments to see if they got it right. – Tweet This
Let’s break that down a little bit. The idea of a two-way-door decision came from a Jeff Bezos Amazon shareholder letter. He used the terms “Type 1” and “Type 2” decisions, which are a little less clear.
The concept is the same—one-way doors and two-way doors—but the idea is that if it’s a one-way-door decision, you walk through the door. You see the consequences of having made the decision. And if it turns out you made the wrong decision, the door is closed behind you. You can’t go back. Whereas, in a two-way-door decision, you walk through the door. You get the benefit of having made the decision. You see what’s on the other side. If it turns out you don’t like it, you can easily turn around and go back.
A lot of the decisions that we’re making in discovery are things like: What problems should we solve? What solutions might work? If we’re doing fast, iterative discovery cycles, they’re two-way-door decisions. So we don’t want to spend a lot of time debating and deliberating. We want to be data-informed, and we want to make the best decision we can given what we know today, but we want to make a fast decision, and then follow it up with verification through prototyping and experimenting.
Melissa: Okay, great. Do you want to take one more question on this topic before we move on? This one is about the opportunity solution tree. They’re wondering if you track all your decisions in one mega tree. When talking with customers, you might get dozens of opportunities per interview, so keeping them organized in a single tree seems like it would be difficult.
Teresa: You want to tackle that one, Hope?
Hope: Yeah. I’ll tell you what I recommend, because you’re right. If you’re doing customer interviews continuously, you could end up hearing many diverse points of view from customers. You might say, “What is the cost of me trying to capture all of this in the tree after each interview?” I recommend teams capture the opportunities in snapshots, which shouldn’t take too long.
If you’ve got your snapshots easily accessible through a tool like Miro, you can decide as a team how often you want to commit to revisiting the opportunities on your tree. And especially if you’re thinking about pivoting from the opportunity you’re working on to another opportunity, that’s a good time to consider refreshing your tree. But then you can just look at, “Are there things that we’re hearing a number of times that we think are absent from our tree?”
You can decide a rule of thumb. If you hear something five times and it’s not on the tree, you want to try to make sure that it’s captured in the tree. If you feel like you’re hearing diverse points of view that are actually representing different segments of customer needs, you might want to branch by customer segment and make sure that’s reflected in your tree.
I don’t think teams should be trying to have a perfectly maintained up-to-the-minute tree. It is really meant for you to facilitate good decisions and alignment with your teams and your stakeholders to achieve your desired outcomes. So it’s really about what’s the right level of investment for you to keep it up to date so that you’re making good decisions.
Teresa: I want to clarify a couple of things Hope said, because she used some terminology you might not be familiar with. One of the things that we teach is to create interview snapshots. A snapshot is just a one-pager that summarizes what you heard in a single interview. What’s nice about that format is that you don’t have to worry about patterns. You don’t have to worry about whether you’re hearing this from other customers.
You’re just capturing as much as you heard in an interview. So the question-asker said, “What if I’m hearing dozens of opportunities in a single interview?” All of those would go on the snapshot. We then encourage teams to map out the opportunity space. And when you’re doing that, you’re looking for patterns across your interviews. So you wouldn’t include everything. You would look for the common traits.
Hope and I teach teams using interview snapshots, and we use the opportunity solution tree, but I really want to emphasize there’s not one way to do this. So if you don’t use snapshots and you don’t use the opportunity solution tree, and you use something different, that’s fine too.
The key here is these principles. One of our principles was to externalize your thinking. We do it via opportunity solution trees, but you might do it through impact maps. Maybe you do it through a kanban board—whatever works for you. We just want you to make sure you’re externalizing your thinking in a way that allows you to compare and contrast. Are you considering more than one opportunity? Are you considering more than one solution?
Make sure you’re externalizing your thinking in a way that allows you to compare and contrast. Are you considering more than one opportunity? Are you considering more than one solution? – Tweet This
All right. Why don’t we do this—if you have questions in the queue that we have not gotten to, we will continue to answer compare and contrast decision questions at the end, but we are going to introduce our second principle for the day.
Principle 7: Surfacing and Testing Assumptions
Teresa: The second principle is surfacing and testing assumptions. This is common in the industry’s vernacular, but less common in practice. We want to dig into why that’s the case. But before we do, I’m going to launch our poll so we can get a sense of where you’re at with surfacing assumptions in your product practice.
All right. So it’s looking like a lot of you discuss assumptions, which is great. A few of you don’t do this yet. We’ll help you get started with that today. And then, a few of you are explicitly externalizing them or testing them. Hope, do you want to start with folks who aren’t doing this today? Do you want to talk a little bit about why this is so important?
Hope: Yeah. I find that not enough teams do this. It will transform the way you think about your responsibility and your ability to learn quickly if you make this a critical part of your discovery practice. Often when people are working in the opportunity space or the solution space, they have many, many beliefs and assumptions in their minds. Some of these assumptions could be right and are totally reasonable. Others may be completely biased, based on a very small, unrepresentative sample or on their own predictions of future human behavior.
What we need to do is take these assumptions out of our brains and put them into a place where we can examine them. By doing this, we can see how well we understand one another, and what each of us thinks needs to be true for our solutions to deliver on our expectations and on the opportunity.
What we need to do is to take these assumptions out of our brains and put them into a place where we can examine them. – Tweet This
It’s not a common practice. It’s not something that many people do. It might feel awkward and strange. But once you do it, you can actually break each assumption down: if this is not true, it will really decrease the probability of our success in meeting our customers’ needs and delivering on our desired outcome. And that’s why it’s so important for this to be part of the team practice.
Teresa: It also really unlocks fast iterations. I see a lot of teams that work at the idea level, the feature level, the solution level. “We want to build this thing.” And they get stuck with, “How do we test it?” And we see an over-reliance on A/B testing to test whether it’s the right thing to build. And the problem with that is that you just did all of the work to build it before you learned if it was right or wrong.
The problem with A/B testing is that you just did all of the work to build it before you learned if it was right or wrong. – Tweet This
So A/B testing is great as a measurement tool, but it’s not the best discovery tool unless we’re looking at smokescreen-type A/B testing. It’s great for marketing and landing pages, obviously, but really we want to get out of this trap of testing the whole idea—whether that’s A/B testing or taking a couple of weeks to prototype a whole idea and running a participant through a whole prototype. If we can take the time to surface the underlying assumptions, what needs to be true for this idea to work?
And really, we tend to see assumptions fall into different categories. Desirability—why are we assuming people want this? There are usability assumptions. What are we assuming people are able to do? Will they understand it? Can they find it? There are viability assumptions. Is it good for our business to build this? Is the effort worth the reward? Desirability, usability, viability… What am I forgetting? Feasibility—a feasibility assumption is, “Can we build it? Is it technically feasible?” But it’s also, “Is it feasible from a compliance standpoint, from a security standpoint, from a company mission and vision standpoint? Is this something we, our specific company, can build?”
And then, finally we really encourage teams to look at ethical assumptions. What are we assuming about potential harm? How are we using data? Do our customers understand how we’re using data? Would they be okay if they knew how we were doing it?
Using those five categories—desirability, feasibility, usability, viability, and ethical assumptions—really helps teams to deconstruct an idea into its underlying assumptions. And then, it is almost always way faster to test a specific assumption than to test the whole idea. Testing assumptions rather than ideas allows us to test multiple assumptions in the same week.
Testing assumptions rather than ideas allows us to test multiple assumptions in the same week. – Tweet This
Marty Cagan says, “The best product teams run dozens of experiments a week.” If our experiment size is an A/B test of a full-blown feature, it’s impossible to do dozens of experiments a week. But if we’re testing teeny-tiny assumptions that each take a couple of hours, running a few a day makes it easy to get to dozens of experiments. Hope, I’m going to put you on the spot a little bit. Feel free to throw it back to me. Do you have any examples of quick experiments?
Hope: Yeah. In fact, I’m doing this right now with a client that is working on a new product. They believe they understand what the alternatives to that product are in the market today. That belief is fraught with assumptions: assumptions about how customers solve the problem, how their friend who also works in the same space solves that problem. And those may or may not be true. Who the decision-maker is, who has the budget to solve that problem—there are many, many assumptions that need to be true for this solution to exist.
So we’re doing a very quick experiment—and again people think an experiment must be an A/B test, but that’s not true. An experiment is a way to get evidence and data that tells us whether the assumption is true or false. And in this case, we’re talking to who we believe the buyer is. So we’re going to run an experiment to see if we actually know who the buyer/decision-maker is by seeing if they care to have this conversation with us. If they don’t care to have the conversation with us, either we are not talking about a problem they care about, or we picked the wrong buyer/decision-maker.
And two, we’re testing how they solve the problem today. We have some beliefs and assumptions about how they solve it. If those don’t come up, then we know that we do not understand how they solve it today. So those are two experiments that we’re running with a single test: trying to schedule an interview with these people and seeing if they’re willing to talk about the problem and how they solve it today.
Teresa: Yeah, those are great. One of the things that I really encourage teams to think about when you’re running an experiment is how you define an experiment or a prototype. The key is that you need to simulate the experience the person would have if your product existed, but not the whole product experience. It’s just the experience they would have to test that particular assumption.
The key to running successful experiments is you need to simulate the experience the person would have if your product existed, but not the whole product experience. – Tweet This
I’ll give another example. I’m working with a team right now whose idea is to build a community forum where people can share pricing. I can’t give too many details, but I can say it’s the price at which they sold a commodity good. Their customers are all constantly asking, as they watch the commodity price fluctuate, “Should I sell today, or should I sell tomorrow?” The team wants to collect original data about who’s selling when and what price they got, to help customers make this decision about, “Should I sell now or later?”
One of their assumptions is, “Our customers will be willing to share the price at which they sold.” So they don’t have to build this whole community site to test that. They want to simulate the experience. They want a customer to say, “Yes, I will share this data publicly.” Here’s what they’re going to do. They’re going to tell their customers they’re writing a blog post about the experience of five customers who recently sold their commodity and ask if the customers would be willing to share the price for that blog post.
They’re creating an easier instance in which to share publicly. They already have a blog. They actually don’t even need to write the blog post. They’re just going to ask the customers if they’re willing to participate in the blog and if they are, to tell them the price. So it’s not just, “Yes, I’m willing to share my price.” It’s, “Yes, I’m willing to share my price and the price was this.”
That’s something that they can do in a day or two. They can reach out to customers. They can ask for that information. They’re either going to get it or they’re not. They don’t have to build a single thing, but they’re able to test the assumption. Now, it’s not 100% perfect, but it’s pretty darn close. The blog readers are the same people that would be part of the community. It’s not anonymous. They’re going to learn pretty quickly whether it needs to be anonymous or not. It’s a really teeny-tiny activity that allows them to get a lot of data quickly.
Hope: Sometimes we’re testing desirability assumptions. The example I gave was really around desirability. Do we know who wants to solve this problem, and how much they like or dislike the way they solve the problem today? And what Teresa described was a customer feasibility problem. Are they willing to do it? Are they willing to contribute their price-paid information? It’s a feasibility assumption for the solution to exist.
Teresa: Yeah. And this is where the categories get a little bit noisy, because I would actually define my example as a desirability assumption. Are they willing to do it? Do they want to? Whereas, some people do categorize it as a feasibility assumption. Is it even possible? I think what category an assumption falls into doesn’t really matter. I look at the categories as lenses.
If we were to look at this solution from a desirability standpoint, a feasibility standpoint, a viability standpoint, etc., does that help us generate more assumptions so that you’re better able to deconstruct your idea? As you work through this, don’t split hairs about it. It doesn’t matter if it’s feasibility or desirability. It just matters that you identified it, and you can test it.
Hope: The other thing that helps you identify assumptions that you should really test the soonest is if that assumption needs to be true for any of your solution ideas to work. In Teresa’s example and in my example, if none of those are true, it doesn’t matter how great our solution is. There will be no market for it. That is a very important thing that we want to test early so that if it turns out to be a two-way door decision—we decided we didn’t like what we saw on the other side—we pivot right back up the tree and look for another opportunity.
Teresa: I’m noticing in the chat that Ben is highlighting that he likes that experiment design, because it’s easy for people to say they will do something, but it doesn’t mean they will do it. And that’s why we really emphasize when you’re prototyping and experimenting, we really want you to simulate the experience so that you’re measuring action. Not just, “Yes, I would do that.”
Here’s the thing. I’m going to eat vegetables every meal tomorrow, and I’m going to go to the gym every day of the week, and I’m going to get all of my work done because that’s what I believe in this very moment. But tomorrow, reality is going to happen and I’m going to run out of spinach. And I’m not going to eat it with my breakfast, and my day is going to be jam-packed. I’m not going to get my expected workout in.
We’re really optimistic about what we will do. So we want to make sure that when we’re experimenting, we’re simulating the experience so that participants are required to take the action you need them to take.
Melissa, what kind of questions are we getting?
Questions from the Audience: Surfacing and Testing Assumptions
Melissa: One of the questions was about the difference between a good assumption and a good hypothesis. Could you clarify between those two points?
Teresa: I think an assumption is a belief. This is where we really run into limitations because product teams are not trained scientists. I’m going to distinguish between these, and then maybe Hope and I can talk a little bit about what’s really required. I would say an assumption is a belief—it’s something that we assume to be true and it can be formed in any way you want.
When we talk about a hypothesis, now this is a scientific term that has a very specific meaning and it can be tested through experiment design. And so, in order for an assumption to be tested as a hypothesis, there are a lot of things that we need to do to translate it so that it’s falsifiable. That’s really the key that distinguishes an assumption from a hypothesis. I’ve written about the criteria I look for, for something to be considered a hypothesis. I look for things like, “Is it specific enough that it’s falsifiable? Are you drawing a line in the sand?” So are you saying that “X people will do Y” rather than saying “people will do Y”? Because “people will do Y” is not falsifiable.
I can search infinitely. And as long as I find two people, it’s true. That’s terrible, because I can’t search infinitely. So it’s not really a falsifiable hypothesis. When we talk about assumption versus hypothesis, there’s this gradient between a belief, which can be pretty vague, and something that is falsifiable through experiment, which is a hypothesis. And this is where maybe Hope and I can discuss a little bit about where you need to be.
A lot of teams can turn an assumption into a pretty good testable hypothesis really easily. They are natural experiment thinkers. Some teams really struggle. So I have an experiment design template. Someone in the chat highlighted the “we believe” format. The “we believe” format works really well for some people. For other people, it’s not specific enough, because they don’t use it to get specific enough. Hope, do you want to talk a little bit about the gray area here and what you see from teams?
Hope: I think that once product teams are introduced to creating a hypothesis template, it gives them pass-fail criteria that they can use. Then they can start thinking creatively about how they might scope down that experiment to quickly learn whether it’s true or false. I think sometimes the assumption language, especially working with other stakeholder partners, can trip people up. They don’t want to be constrained to that criteria. They think if you’re only going to test it with five people or ten people in a prototype, and then you decide that it’s a false assumption, that might limit your opportunity or you might be taking an idea out of commission that they still believe in.
It’s super important to be thinking about not just the product team, but who else will need to believe and have confidence in the decision and the action that you’re going to take based on how you draw the line in the sand, how you verified whether it was true or it was false so that you can make sure that you can, in fact, take the action that you believe. Are we going to iterate on our approach, or are we rejecting this as an opportunity and never coming back to it? It’s important for everybody to be clear about what decisions you’re going to make, whether that assumption turns out to be true or false based on your experiment design.
Teresa: This is awesome. Let’s get into a specific example. So if we run with the idea that people will share pricing data, that’s an assumption. It’s not testable: we can’t talk to all people, and finding just one or two people who agree would supposedly prove it true. So to turn it into a hypothesis, we might say, “Three out of five customers that we ask will give us pricing data.” That’s a pretty good, testable hypothesis, but it may not be convincing for that assumption, depending on your organizational context.
For some product teams, three out of five customers participating might be plenty convincing. For other companies, if you have a strong quantitative culture, you’re going to get pushback on three out of five. You might have to say, “100 out of 1,000 customers asked give us pricing data.” Now we’re running a much larger quantitative experiment. It’s going to take a lot longer, but if that’s what our organization requires, we need to be moving toward a more rigorous experiment.
One of the things I like to encourage teams to do is to start really small. Start with that three out of five, because if nobody’s willing to share data—as long as you choose variation in those five—odds are that 100 out of 1,000 are not going to share their data either, right? We run the risk with small numbers of some false negatives or some false positives, but there are enough product ideas out in the world that I think we can take that risk. Really what we’re looking for is, is anybody willing to do this? If they are, now we’re willing to take some time to run that bigger, harder, longer experiment.
Hope: Should we take some questions?
Melissa: We’ve got a couple of logistical questions about assumptions. One is where to put assumptions on the opportunity solution tree, and then the other one is who should be involved in defining assumptions.
Teresa: I’ll talk about the first part. I don’t think you should put assumptions on the opportunity solution tree, but there’s not a right answer here. In fact, I published a blog post on Product Talk from a product team that shared they got a ton of value from putting assumptions directly on the tree. So if that works for you, by all means, do it.
I really am reluctant to turn one document into everything. Your solutions are going to have lots of assumptions. And I think it’s a little too much information for the tree, but experiment. Do what works for you.
I see teams collect assumptions in a Miro board or a MURAL board, some kind of digital whiteboard so that they can move them around as they learn more and assess risk. And maybe that’s in the same document as your opportunity solution tree, if you want everything in the same place, but I think they serve different purposes.
Hope: I think the second part of the question was who needs to be involved in generating the assumptions. Is that right, Melissa?
Melissa: It’s who should be involved in defining assumptions.
Hope: It will depend on your organizational context. Typically, I find that if the product team is operating completely autonomously, probably just the product team. If you’ve got stakeholder partners that will be impacted by the decisions that you’re making, you are probably going to want to make sure that they’re participating in your assumption-generation activities. There are many options for how to involve them in this.
My personal opinion is that if this is a new product that’s being introduced to the market, then you probably are going to want to have some customer-facing teams like sales. If this is, “How are we going to make this product easier to use?” you may just want to have maybe partners from support, or maybe it’s just within your product team, and there’s design and usability emphasis.
It really is going to depend on the decision you’re trying to make. Once you know what decision you’ll make when you have proof of the customer’s behavior in your simulation or experiment, figure out who is going to take action on that proof, whether the assumption turns out to be true or false, and get their buy-in and participation in how you set the decision criteria and what action you’re going to take. If you have to convince them after the fact that you designed the experiment in the right way, it will probably be more cumbersome and slower than if you had involved them in the first place.
Teresa: One of the things I’ll highlight was, in Part 1, we talked about collaborative team decisions across that product trio. So when we talk about all six of these principles, we’re really expecting, at a minimum, that trio is participating.
And then, I think Hope’s spot on. Depending on where you are in the product cycle, depending on where you are in discovery, depending on the stakeholder management that’s required, you’re inviting other folks to help facilitate that process.
I want to give a quick recap of all seven things that we talked about—assuming I can remember them—and share a quick way you can learn more, and then we’re going to get right back to your questions.
Over this three-part series, we talked about trio-based decision-making, collaborative decision-making. We talked about externalizing your thinking through visuals, like opportunity solution trees, customer journey mapping, and experience mapping. We talked about being outcome-focused, so making sure that your teams are tasked with an outcome and not just delivering outputs.
In Part 2, we talked about discovering opportunities. We talked a lot about doing that through interviews, but you can also do it through customer observations. You can do it through analytics. You can do it through a wide variety of discovery activities. We talked about prioritizing the opportunity space and making sure that you’re not just ranking features against each other, but making strategic product decisions in the opportunity space.
And then, this time, we dove into compare and contrast decisions, making sure you’re choosing among a set of options, and finally surfacing and testing assumptions.
I like to capture this whole process in visuals so that people can keep track of where they are and what they’re doing and what to do next. That’s actually how the opportunity solution tree was born.
If you want to learn more about any of these methods or how Hope and I teach them, or how we work with teams to help them find and mix and match discovery methods that work best for them, you can learn more about what we offer at producttalk.org/learn.
General Questions from the Audience
Melissa: What tools do you recommend using to map the opportunity solution tree?
Teresa: I like Miro and MURAL. They’re both digital whiteboard tools. People ask me if one is better than the other. I work with teams that use both. They seem pretty interchangeable in my mind. Before Miro and MURAL really existed, I worked a lot with teams in Lucidchart and Draw.io. Some even used OmniGraffle.
Here are the criteria I would look for: it’s visual, you can drag things around so you can make fast updates, multiple people can edit at the same time, and everybody on the team has access to it. Beyond that, I think for most tools, it’s really about finding the right fit for your team.
Melissa: When you’re running your experiments, do you feel that it hurts the experiment if you offer to pay your customer to interview or do the test with them?
Teresa: Good question. Do you want to tackle that, Hope?
Hope: Every situation is a little bit different. You’re going to find there are going to be segments of prospective customers or existing customers that will gladly give you their time because they are passionate about the problem space that you’re solving. Maybe they’re passionate about you. If they’re an existing customer, they want to have their opinions heard, and you may not need to compensate them at all.
There are going to be other people where more of an incentive is required. They may say, “I’d be happy to talk to you about it, but my time is valuable and I want to make sure that this is a good use of time.” You have to do whatever works. It may not necessarily even need to be some sort of cash incentive. There may be some other value exchange.
A lot of companies use customer advisory boards, or people have opted in to participate in customer research. They may not exactly fit who you’re looking for. You may have some false starts in terms of who you speak with, but you want to make sure that you’re talking to the right people that make sense for the problem or the opportunity you’re trying to understand.
You should use lots of levers to make sure you have those conversations with the right people—friends and family incentives, value exchange, early access to see something. That can be enough of an incentive for people to participate in research. I don’t think there’s really any downside risk as long as you’re structuring your conversations to learn what you need to de-risk what you’re trying to achieve.
Teresa: Some teams run into problems with this idea of professional testers on these bigger testing platforms, and there are two ways around that. One is you have to write better screeners so that they’re not just checking a box saying, “I’m an accountant,” but you’re asking a question that exposes whether they truly are an accountant or not. Especially on these large recruiting platforms for discovery tools, think about how to screen out the professional testers in your screening process.
The second tactic you can try is most of those tools allow you to upload your own list so you can actually test with your own customers. That’s another way to avoid that person who is sitting at home trying to make a living by just doing a bunch of usability tests.
Melissa: How do you deal with a situation where executives or customers come to you with strongly held solutions in mind already?
Teresa: Some of the principles that we talked about can be really helpful. When somebody comes to you with, “Hey, we have this great idea. Let’s do it,” I recommend you go directly to the whiteboard and start story mapping what that idea might look like, because most ideas are not good in practice. And sometimes, we just need to take some time to think through how they might work to realize they fall apart. If you do that with your stakeholder and they come to that realization on their own, it gets rid of a lot of those problems.
The opposite can happen, too. You might realize, “Hey, this is actually a really compelling idea.” So the first thing is to take some time to make sure that you’re helping that person externalize their thinking. What do I really mean by this idea? You’re asking, “How might it work?” so that it’s becoming really concrete.
From there, it becomes a prioritization decision. I would use the tree or impact mapping or whatever tool for externalizing your thinking to work with the stakeholder and say, “Okay, this idea looks great. Can I ask a few questions? I’m supposed to be working on this outcome. Is that still the most important thing? It is? Okay, great. We talked about me working on this target opportunity. Is that still the most important thing? Okay, great. This idea doesn’t really address that. Can we table it until we’re working on an area where that’s more relevant?” You’re helping your stakeholders by saying, “Remember, we already made these strategic decisions. Do we really want to disrupt those?” Hope, do you want to add anything there?
Hope: I plus one everything you just said. Once you’ve got an idea, you get that rush of, “Oh, I can only see how it could work.” And then you don’t want to be hit with a big old no in response, right? You’re trying to be a good collaborative partner. And so, again, going back to this concept of surfacing assumptions that must be true for that to succeed.
Let’s just assume for a minute that this great idea that has just been presented to you actually relates to your target outcome and opportunity, then maybe it is one of the solutions that you can compare and contrast against. And there are techniques that you can use to get the assumptions out on the table.
One of my favorite techniques for de-biasing that confidence is the pre-mortem, which essentially says, “Okay, the idea exists in the world, however you conceived it to be, but it failed. What went wrong?” And getting people to think through what went wrong almost reverse-engineers what needs to go right for that solution and, frankly, for any of your solutions to succeed. That gets people back to what assumptions need to be true. How can we de-risk these to know that they are, in fact, true so that we can find the best solution to get us to our outcome sooner?
Teresa: We are right on the hour. So I want to thank both Melissa and Hope for their time. Thank you, everybody. For those of you on the West Coast, thanks for spending your lunch with us. For the rest of you, thanks for finding time in your day.
If you missed Part 1 and Part 2, be sure to check them out. And if you want to keep investing in your continuous discovery mindset, you should join our mailing list.