Product management is evolving quickly.
The days of gathering requirements from business stakeholders and documenting them in long product requirements documents are vanishing. We no longer take months or years to release value to our customers.
Instead, product teams are experimenting their way to viable solutions. We are putting our customers first, taking the time to discover unmet needs, and developing solutions that address those needs.
Last fall, I spoke at Productized, a product conference in Lisbon, Portugal, on this very topic. I looked back over the past 20 years to get a better understanding of how we got here. My goal was to put our varied methodologies and techniques (Lean, Agile, Jobs-to-be-Done, design thinking, etc.) into context and help product teams know what to use when. I also looked ahead to what might be next as product discovery continues to evolve.
The full video, show notes, and transcript are below. I hope you enjoy it.
Show Notes:
- Introduction [:26]
- Modern Product Discovery [:54]
- The Evolution of Modern Product Discovery [4:15]
- The Agile Manifesto [7:06]
- The Rise of User Experience Design [8:47]
- The Lean Startup: Eric Ries [9:49]
- The Jobs-To-Be-Done Framework: Clayton Christensen and Anthony Ulwick [10:42]
- OKRs and Design Sprints [12:12]
- The Goal of Modern Product Discovery [14:27]
- Putting Discovery Practices Into Context: The Opportunity Solution Tree [21:32]
- The Future of Product Discovery [29:42]
Resources Mentioned
Books:
- Change by Design: How Design Thinking Transforms Organizations and Inspires Innovation
- The Design of Everyday Things
- The Four Steps to the Epiphany
- Jobs to Be Done: A Roadmap for Customer-Centered Innovation
- The Lean Startup
- Peak: Secrets from the New Science of Expertise
- Radical Focus: Achieving Your Most Important Goals with Objectives and Key Results
- Sprint: How to Solve Big Problems and Test New Ideas in Just Five Days
- What Customers Want: Using Outcome-Driven Innovation to Create Breakthrough Products and Services
People:
- Steve Blank
- Tim Brown
- Clayton Christensen
- Anders Ericsson
- David Farber
- Jake Knapp
- Braden Kowitz
- Donald Norman
- Eric Ries
- Anthony Ulwick
- Jessica Wattman
- Christina Wodtke
- Stephen Wunker
- John Zeratsky
Other Links:
- The Agile Manifesto
- Build-Measure-Learn Cycle
- Design Sprints
- Fix Delivery to Make Time for Discovery
- How You Build Matters. Finding Your MVP.
- Introduction to Product Discovery
- Jobs-to-be-Done Framework
- Objectives and Key Results
- The Rise of Modern Product Discovery
- Stop Brainstorming and Generate Better Ideas
- User Stories Are Better Than PRDs
- Why This Opportunity Solution Tree is Changing the Way Product Teams Work
- Why You Are Asking the Wrong Customer Interview Questions
- Why You Aren’t Learning As Much As You Could From Your Experiments
- Windows ME
Full Transcript
[Edited for brevity and clarity]
Hi everybody, I want to clear up a misunderstanding. Don’t worry, it’s not about brainstorming. [Contextual Note: Earlier in the day, I created a controversy by arguing that brainstorming doesn’t work.] A number of people have asked me why I have a Portuguese name. [Contextual Note: This event was in Lisbon, Portugal, and Teresa Torres is a common Portuguese name.] So let’s just talk about that for a minute.
As you can tell, probably from how I speak English, I am an American. In the United States, a lot of us are mutts. I don’t know if you are familiar with that term, but it means we come from multiple ethnicities. My father is of both Spanish and Mexican descent, hence I say my name as /’ter-REE-suh tor-ress/, which is maybe the most American way to pronounce this name. So if you come up to me and speak Portuguese, I might try to understand what you are saying, but mostly I’ll say, “I’m sorry.”
So now that we’ve got that cleared up, let’s talk about modern product discovery. I work as a product discovery coach, so naturally that means the most common question I get is, “What’s that?”
How many of you guys are familiar with the term product discovery? Some of you. Okay, good, so don’t worry. No knowledge is required; we’re going to start at the very beginning.
We are going to take a walk through history and then look forward to what’s coming next, so that as these ideas start to come into your world, you have a way of making sense of all of the rapid changes that are happening in the world of developing products.
The best way to think about product discovery is to think about it in relation to product delivery. It’s not possible to build a product without doing both discovery and delivery. Discovery encompasses all the activities and decisions that go into deciding what to build next, whereas delivery is all the activities we do to write code, package releases, and ship products. It’s how we deliver value to our customers.
Now the benefit of distinguishing discovery from delivery is that most companies overemphasize delivery and underemphasize discovery. And when we separate these two categories, it allows us to ask, “How are we doing in each? Are we any good at discovery? Are we any good at delivery?”
Most companies overemphasize delivery and underemphasize discovery. – Tweet This
If we look over the last 15 or so years, especially on the delivery side, we have seen a lot of progress, and we have a really clear yardstick. Probably every single one of us in this room would agree that on the delivery side, our goal is to ship value as quickly as possible. So once we have decided what to build, our goal as a product team is to get it out the door as fast as possible.
And we’ve seen this in our delivery practices. They have evolved rapidly over the last decade and a half, and we have seen teams go from releasing once a year, to releasing every quarter, to releasing every month, to releasing every week, and many teams are now releasing software as soon as their software developers are done writing it. They are releasing many, many, many times a day, and they are getting value to their customers as fast as humanly possible.
Some of you might be looking at this and saying, “Wow, there’s no way we’ll ever release every day,” and that’s fine. What’s good about this clear yardstick of shipping faster is that no matter where we are as a company, we know what good looks like.
We can look to the future and say, eventually, we want to ship faster, we want to get faster and faster, and as we are evaluating our delivery practices, we can evaluate them based on how fast our release cycles are. And we all know what good looks like.
So when I started to think about this, I naturally asked, “What’s the equivalent for product discovery? What’s our goal, and how do we know if we are any good at this? And given that so many of us are doing product discovery differently, how do we know if we’re doing a good job?”
When I started to ask this question, I looked back over time and I was pleasantly surprised to see that product discovery has had a very similar evolution to product delivery, and around the same timeframe.
I want to walk through the last 15 years, briefly, and talk about what we have seen in the world of product discovery. And then I want to use that to help you put all of these trends into context so you know what to use when, so that we’re not just guessing about what tool in our toolbox to use at any moment in time.
I’ll go all the way back to the year 2000, when Microsoft launched Windows ME. I have no idea if this product even made it to Europe, because it was a complete flop. This is one of Microsoft’s worst products. It was their follow-up to Windows 98. It was launched in June of 2000. By 2004, Microsoft was already announcing that they didn’t want to support it anymore, and by 2006 they ended support for it.
So what led to this failure? Let’s think about what product management looked like during this time period. Now I’ve never worked with Microsoft, they’ve never been a client of mine, and I have no inside knowledge of how this product was built.
So instead I’m going to describe how many companies worked during this time period, and I am going to assume that this is what things looked like at Microsoft.
A product manager, or a number of product managers, probably spent months gathering requirements from internal business stakeholders. They documented them in a really long product requirements document and sent it off to their engineering team, who spent months, if not years, building against that document.
When they were done, they stamped their software onto a CD, packaged it in a shrink-wrap box that looked a lot like this, and sent it to stores. Only later, when it sat unsold on store shelves, did they realize nobody wanted their product.
So if we look at this model of how we decide what to build, it’s really clear now, 16 years later, to say, “Wow, this isn’t a very effective model.” Maybe we should learn that nobody wants our product before we ship it to stores. How can we avoid flops long before we finish building the whole thing?
And it’s this type of question that is really driving the evolution of product discovery. So fortunately, even at the time this product was launched, there were a lot of people in the software community who were already frustrated with this way of building products.
Nobody wants to spend years working on a product, only to learn at the end that nobody wants it. And so a bunch of people came together, they started sharing what they were doing, they started asking what can we do differently, and a lot of this frustration culminated in 2001 with the release of The Agile Manifesto.
Now, the Agile mindset gives us a lot of value. There are a whole lot of things that Agile helps us with. I want to highlight one thing in particular on this road to the evolution of discovery that I think is a really important part of this story.
In the Waterfall method we often spent months, if not years, building a product, and we would periodically tell our business stakeholders how things were going. We would often learn that we were on the wrong track. We weren’t building what they wanted because product requirements documents are a terrible way to communicate. Language is very vague, we misunderstood the requirements, and we got further and further away from what our business stakeholders wanted.
For those of us that have switched to Agile or are even early in our Agile transformation, what we start to see is that Agile encourages us to do two things. One, develop in much smaller batch sizes, so don’t write code for months and months and months, write code for a couple of weeks, and then two, show it to people and see if you are on the right track.
Now for many of us when we first started with Agile, “show it to people” meant our internal business stakeholders. So we started to ask the question earlier in the process. Are we building what our business wants?
This was a good step forward, but at the same time period we saw another trend develop. A trend that was pushing on another dimension, asking a different question, and that question was not, are we building what our stakeholders want, but, are we building products that our customers know how to use?
So right around the same time that people started pushing on Agile, we started seeing the rise of user experience design and design thinking. If we think back to the early 2000s, teams were starting to hire interaction designers, the design world was debating about information architecture versus interaction design versus user experience design. Good times.
And we also saw the rise in popularity of design thinking. We started talking about building empathy for our customers. It was no longer just about building for our stakeholders. We have gotten a lot more focused on building for our customers. This was phenomenal. These two trends together, I think, helped us take a giant step forward, in informing how we decide what to build.
About five years ago, we saw another giant step forward, propelled by the release of a specific book. Many of you may guess what it was: Eric Ries launched The Lean Startup. The Lean Startup was really informed by Steve Blank’s work, so I am going to make sure we give him credit as well. This is where we didn’t just ask, “Are we building something that customers can use?”, but started to ask, “Are we building solutions that our customers want?”
Now user experience designers of course also started asking this question. I’m going to give a simple narrative to walk through our history here. But this is where we started to ask an even deeper question: Do customers want our solutions?
And what is great is that both Steve Blank and Eric Ries started to give us a lot of tools for how to answer those questions. We started to learn about MVPs and to run through the Build-Measure-Learn cycle really quickly. We are learning earlier in the process.
And then, more recently, we started to see the rising popularity of yet another tool: the Jobs-to-be-Done framework. Again, the Jobs-to-be-Done framework was popularized recently by Clayton Christensen, but it is based on a lot of work done by Anthony Ulwick. And basically, what we’re seeing now is that we are starting to ask a different question.
We’re not just asking, “Do customers want our solutions?”, but “Are we solving the right problems for our customers?” I can create a solution to a problem that you have, but it’s not as compelling as a solution to a much bigger problem that you have. So we have moved from just focusing on solutions to starting to ask, “Are we solving important problems?”
Now this evolution is not stopping. We are still going to see new tools, new frameworks.
In fact, even in the last 12 months, we’ve seen a giant push in popularity of two new methodologies. One is the Design Sprint, so this book came out earlier this year, it’s a five-day process where on Monday, a team starts with a big meaty challenge that they have no idea how to solve, and by Friday they have prototyped a solution and tested it with real customers. It’s a fantastic way of introducing human-centered, experiment-driven product management to a team. And I suspect many of us are just starting to think about how we use this tool.
The second trend that has really risen in popularity this year, thanks to Google building on work by Intel, is the use of objectives and key results, or OKRs. If you aren’t familiar with them, that’s fine. What matters right now is not the details of all of these methodologies; it’s just that our practices continue to get better.
And OKRs are a way for us to think about how we focus on outcomes, and how we align everybody in our organization around the same outcomes. And in the world of product management, this is really important because we get obsessed with features, and we shouldn’t be obsessed with features. We should be obsessed with what outcomes we are creating for our customers and for our businesses.
We shouldn’t be obsessed with features. We should be obsessed with outcomes. – Tweet This
And this is where something like OKRs is really helping product teams focus on the outcome they are driving. So I started to look at all these trends over the last 15 or 16 years, and I thought, “Wow, it’s been a good couple of decades for product discovery, this is awesome.”
But here’s the thing. Most teams that I talk to have no idea what product discovery is. They know these tools, but they don’t know what to use when. And people are dogmatic, they say, the only way to build products is with The Lean Startup. They say, the only way to uncover users’ needs is to use the Jobs-to-be-Done framework.
This is doing us a disservice.
A lot of these methods come from the exact same underlying principles, and it’s not that one is the best and the other one is no good, it’s not about design thinking versus lean. It’s not about Agile versus… well it is kind of about Agile versus Waterfall.
But really, here’s the idea. We have all these tools and frameworks because all of us are pushing to get better and we are pushing together really fast. And so we are seeing a lot of different methodologies that have a lot of overlap. And so when I look across all these methodologies, the question I ask is, how do we know if these tools are helping us, and how do we know what to use when? And as a product team being constantly inundated with all these methodologies, how do I decide which ones are right for my team?
So I want to go back to the original question, which is: for product discovery, what is our goal? Our goal is to learn fast. Instead of learning after we ship our product that nobody wants it, we want to learn as quickly as possible whether what we think we should build is the right thing to build.
The goal of product discovery is to learn fast—to learn if our proposed solution will work. – Tweet This
And every single time we learn about a new tool or a new technique, we want to ask ourselves, what does this help me learn quicker than what I was doing before? If we look at the same history, we can see that with each new methodology, we are starting to answer different questions earlier in the process.
So one of the big questions we can answer much more quickly when going from Waterfall to Agile is, “Are we meeting our stakeholders’ needs?” We don’t wait six months before we say, “Here is what I built.” We say every two weeks, “I built a little bit more, what do you think?”
So this is great; it really helps us move that learning earlier in the process. What we get from user experience design is a different question. If we build usability testing into the development cycle earlier, we are able to answer, “Can customers use it?” And we can answer this long before we ship any products, which gives us time to fix it if the answer is no.
In 2016, a lot of this now sounds obvious, but for those of us that were building products in the late ‘90s and early 2000s, we used to have to fight to run usability studies. So being able to answer this question really early is phenomenal.
And thanks to people like Steve Blank and Eric Ries, we are now asking an even more important question much earlier in the process, which is, “Do customers want our solution?” We’re not assuming our solution is the best thing in the world. We are leaving some room for humility and doubt, and we are saying we might be wrong. How can we learn that earlier in the process?
And then again, with Jobs-to-be-Done and with other opportunity finding methods, we are now asking much earlier in the process, are we solving a problem that customers care about?
So in Jobs-to-be-Done, we are asking what job is your customer hiring your product to do? What problem or challenge or opportunity is so important to them, they are willing to hire your product? And how do you make sure your product really meets that need?
So this is a great, deep question that we are now asking much earlier in the process. And if I look across all of these questions, and if I mix and match these methods, and I learn all of this early on—long before I spend months building something—I’m able to answer the most important question: Are we driving towards a desired outcome?
As a product team, my goal is to increase engagement, to reduce churn, to increase revenue, to increase customer satisfaction, to make my NPS score go up. It’s not just to build features A, B, and C.
And what’s great is if I look over this exact same progression over the last 15 years, what I see in these questions is we are shifting from a focus on output (am I meeting my stakeholders’ needs?), to a focus on outcomes (am I having an impact on my customers’ lives, am I improving the value of my business?).
Our product discovery practices are shifting from a focus on output to a focus on outcomes. – Tweet This
And what’s great about making this shift is that we now have a yardstick for what good looks like. No matter where you are in this process, whether you are just new to usability testing or just new to experimenting, you know that your goal is to learn faster than you did last time.
No matter where you are in this list of questions, whether you are only answering some of them or you are answering all of them, you can take stock and look at where you are and say, okay, I may not have confidence that I’m solving the right problems, but at least I know people want my solution. And that’s good, and I now know what the next step is. So no matter where you are in your product discovery process, you now have a yardstick for what good looks like.
And this is really powerful; I think we’ve been missing this for a really long time. But now, as an industry, we’re starting to realize that learning—and learning quickly—is the most important thing.
Now, I work as a product coach, so what that means is I work with a product team, usually for three to five months, week over week, in the context of their own work.
When I first started doing this, I was teaching teams how to conduct good customer interviews, how to run sound experiments, and how to draw good conclusions from those research activities, and I would get really good feedback.
But I would always get the same question over and over again. What my teams would say to me is, “Teresa, we love that you’re helping us do interviews, and we love that you’re helping us run experiments, but we still don’t know what to use when. You always have to tell us what to do next. How are you doing that? How do you know when to talk to a customer versus when to run an experiment versus when you’re ready to go ahead and move forward with your product?”
And I started to think about this question. It seemed really intuitive to me, and I said, well, if I’m doing this intuitively, how am I supposed to teach another team how to do it?
And fortunately, at the same time that I was asking this question, I was reading a book called Peak. Peak was written by Anders Ericsson, a researcher who, for most of his academic career, has studied expertise. He looked at what makes experts stand out from novices, and a big part of his research has found that experts rely on stronger mental representations than novices do.
A mental representation is a structure you have internally in your head that helps you to organize information. It helps you make sense of the world around you and helps you to make decisions quicker and easier. So I started to ask, am I using a mental representation to guide my teams through this messy land of product discovery?
It turns out I do. I started to play with it; I started to whiteboard it. This became my design challenge, and for the last five or six months, I have been using it with my product teams. Of all the things I have ever taught, this visual structure has had a bigger impact on improving the quality of product decisions than anything else I’ve ever seen. And it’s because it’s a critical thinking aid: it helps people see their thinking, examine the connections, and really question whether they are building the right thing.
And so I want to share it with you today. It’s called the Opportunity Solution Tree. This is not rocket science. If you are familiar with decision trees, it’s really just a decision tree that helps you make sense of this messy world of product discovery. When I started to play with this visual structure, I realized there’s a huge gap in the discovery world that we are just now starting to fill.
So the top of the tree is a blue box, and the blue box is a clear desired outcome. Now this sounds obvious, but how are we supposed to build a product that drives an outcome if we don’t know what that outcome is? But I can’t tell you how many product teams I meet that have no idea what their desired outcome is. They say things like, we are just trying to make the user experience better. How are we measuring that, how are we going to know when we’ve done that?
So this is one of the things I love about the popularity of OKRs. OKRs are a qualitative objective combined with quantitative key results. My objective might be to have the easiest diagramming tool, but my key results force me to say how I will know if I’m making progress towards that objective. And I have to come up with a quantitative measure.
Now this is really important, the root of this tree needs to be a quantitative measure, because by the time we get to the very bottom when we’re running experiments, we are going to evaluate our experiments based on that quantitative measure.
So if our goal is to increase engagement by 10%, every experiment we run while searching for what will drive that desired outcome is going to be measured by how much it impacts engagement.
Okay, so first, step one. Define a really clear desired outcome. It sounds simple, we all know we should do it. Sometimes I spend weeks with teams on this, because if you don’t have a metrics-driven culture, you are not going to agree, and it’s going to take some time, but if you don’t define a clear desired outcome you are not going to drive that outcome.
The second step is the one that I see missing on 98% of product teams, and I think Jobs-to-be-Done is going to help us fix this. The second step is that we need to discover the opportunities that are going to drive that desired outcome. Now, “opportunities” is a little bit of jargon; I use it because I am trying to appease people who have problems with the word “problem.” If I think about this from a problem-solving lens, we have to define the problem before we can solve it.
You may be familiar with the Einstein quote, “If I had an hour to solve a problem, I would spend the first 55 minutes defining the problem and the last five minutes solving it.” That’s this idea. As I mentioned earlier in the panel, how we frame the problem has a really big influence on the types and quality of solutions we can generate.
But as product teams, we often skip this step. Somebody says to us, “Your goal is to go increase engagement” and we start brainstorming solutions. I promised I wouldn’t talk about brainstorming. We start generating solutions and the challenge is we have no way of evaluating solutions if we don’t know what problem we’re trying to solve.
Now the reason why I call them opportunities and not problems is because “problems” encourages us to fix things, and sometimes things aren’t broken and we can just make them better. So think of an opportunity as a pain point, a need, or sometimes just a want or a desire.
Elon Musk really wants to go to Mars. That’s not really a pain point. I mean, he might say it is because he thinks Earth is done, but there are these aspirational things that we want and need. They aren’t necessarily problems, we just want them.
So “opportunity” is just a little bit more inclusive, and it is really about getting clarity around what’s going to drive our desired outcome. So if my desired outcome is to increase engagement, I want to know two things. One, what prevents people from engaging today? This is the problem mindset: what are the obstacles, what are the barriers, and what opportunities do I have to remove them? And the other side of it is the really positive, appreciative inquiry question, which is, for my customers who are engaging today, why are they doing it? What problems am I solving for them? Or, in the Jobs-to-be-Done language, what job does my service or product do for them?
And if I can uncover that, I can reach out to everybody else in the world and say, hey, the job you should use my product for is this, and I can use what I learn here to go find more customers. So what makes me sad about the fact that most people skip this step is that I believe the opportunity space is where product strategy happens. The opportunities we choose to go after are what differentiate us in our market.
The opportunity space is where product strategy happens. – Tweet This
Two companies in the exact same space are going to pick different opportunities. I really encourage teams to take time to explore the opportunity space, to assess which opportunities are most likely to drive their desired outcomes, and to use their company’s mission, vision, and strategy as a filter.
Because Google is going to choose one opportunity, and Apple is going to choose a different opportunity. It’s not because one opportunity is bigger than the other, it’s that those companies have completely different DNA, and so they are going to filter opportunities differently. This is where I think the heart of the product strategy lives, and most of us are skipping it.
And finally, once we discover opportunities, we need to make sure that we discover solutions that deliver on those opportunities. And it’s the links between the two that help us evaluate our thinking; it’s how this tool acts as a critical thinking aid. Our experiments should help us ask, one, is this solution viable, but they should also help us test the link: does this solution deliver on the opportunity? And again, that sounds so obvious, but we have built all sorts of solutions that don’t actually deliver on the opportunity we’re targeting.
And then finally, we have to ask, does the solution deliver on the opportunity in a way that drives our desired outcome? Because even if we solved the problem for the customer, if it doesn’t increase their engagement, we didn’t actually create value for our business.
So what this structure does is help teams remember what our goal is and what outcome we are driving, and as they do all these research activities, they can track it. This becomes their discovery roadmap. This is how they communicate to the rest of their company: I have no idea if I’m going to increase engagement (that’s scary, I know), but here are the opportunities that I see, these are the solutions that I’m exploring to see if they will deliver on those opportunities, and these are the sets of experiments I am running that will tell me if I am going to reach my desired outcome.
And what’s great about this is that it helps us put all of our tools into context. So if I think about this, OKRs are helping me set a desired outcome, and my whole team knows what our end goal is, what we’re trying to accomplish together. Whereas Jobs-to-be-Done, or even design thinking, is really good at helping us with opportunity finding, because we are doing empathy interviews, we are doing observations, and we are co-creating with our customers, we are learning how to live in their world.
And that is helping us see, for example, why they are engaging, or what’s keeping them from engaging. So while we get dogmatic about these tools and argue about them, they are helping us do the exact same thing. They are helping us discover the opportunities that will drive our desired outcomes.
And if we go down one level and look at usability testing, The Lean Startup, and MVPs, what are these helping us do? They are helping us test if our solutions deliver on our desired outcomes.
So here’s the thing. I know I went through this history pretty quickly. It doesn’t matter. These specific tools are awesome, and if you’re not familiar with them I recommend you learn about them, but here is the more important message.
Five years from now, there are going to be ten new books that I could have included here. Our world is changing really fast. Right now, we’re talking about design sprints and Jobs-to-be-Done. Five years from now, we might be talking about two completely new methods.
It can feel really overwhelming, but here is the thing to remember. Our goal is not to write good user stories or do good user story mapping, our goal is to drive desired outcomes for our customers that create value for our business.
Our goal, as product teams, is to drive outcomes for our customers that create value for our business. – Tweet This
And the structure of this tree, five years from now, I don’t think it’s going to look any different. We are still going to be trying to drive desired outcomes, we are still going to have to discover opportunities that deliver on that outcome, and we are still going to have to discover solutions that deliver on those opportunities.
So our tools might change, but what we’re trying to do, I don’t think, is going to change. And the reason why I don’t think that is that we have 100 years of research on good decision making, good problem solving, and good critical thinking. It is all consistent with this idea of starting with an outcome, exploring the problem space, and using the problem space as a way to expand the solution space. And as you explore solutions, it feeds back into your understanding of the problem.
This is the heart of problem solving and decision making. So this to me is the stable part of what we do. And I find that it is really helpful for putting all of our tools into context. So when you read that next article about whatever comes next, all I would ask is that you take a step back and you say okay, here’s what I need to ask myself to know if I should use this tool, and when I should use this tool.
First, does it help me learn something faster? If the answer is yes, you want to adopt it. Second, what does it help me learn faster? Does it help me set a desired outcome? Does it help me discover opportunities, or does it help me discover solutions? And that helps you understand what to use when.
Now, I want to end by looking towards the future. There is a quote that I love by William Gibson, a science-fiction author, who basically said, “The future is already here, it’s just unevenly distributed” and I see that with the teams that I work with.
So I want to talk about a concept that may sound really foreign to you, but teams are already doing this. It’s happening today. And I believe this is the future of product management, and it is this idea of continuous product discovery.
So a lot of product discovery, especially these recent trends, grew up in this big project mindset. Let’s go off and interview a dozen customers, we are going to synthesize what we learn, we are going to put it in a research report, we are going to give it to our product team, and hope they use it to inform their product decisions.
And what we find is that this doesn’t always happen, because the people doing the work are too far away from the research. So here’s what I teach teams to do: I work with the product manager, a user experience designer, and a tech lead, and I coach them on how to do continuous discovery.
So what that means is, instead of doing big research, they are talking to a customer every single week. They are doing a prototype test every single week. They are running a big experiment every single week. And by big experiment, that could be a landing page test from the Lean Startup, it could be a simulation of an experience in person with the customer.
But here’s the goal. The goal is for the team building the product to do smaller research activities every single week, so that at any point in time, when they need to make a product decision, they can stop, take stock, and draw on multiple data points from multiple research activities that very week, and say, based on what I know right now, what’s the best decision I can make?
This to me is really important, because in the product world we are running experiments like 18th century scientists. We run one experiment, we decide it’s truth, and we make a decision based on it.
The example that I use with my product team is that I ask them how many of you have read an article in the newspaper that tells you that coffee is good for you, and everybody has read one of those articles. Then I say, how many of you have read an article that says coffee is bad for you, and almost everybody has read one of those articles. And the reason why, is because a journalist is finding one study that says coffee is good for you, and then a different journalist is finding another study that says coffee is bad for you, and they are each writing their own articles.
But that is not how we learn; that is not how research works. To know if coffee is good for you, we have to look at the whole set of studies that have ever been done on coffee, do a meta-analysis, and say, based on everything that we know today, what’s the best decision we can make about drinking coffee?
And here’s the thing, a year from now that answer might change, because during that year more studies are going to be done and the data might start to look a little bit different.
The same is true with product decisions, so we don’t want to make a decision based on one A/B test, we don’t want to make a decision based on one customer interview. We want to make our product decisions based on sets of data, sets of research activities.
Make product decisions based on sets of data & sets of research activities. – Tweet This
And so in order to do that continuously, we need to make sure that we’re continuously adding to our bank of interviews and to our bank of experiments.
And so part of this is that we have to develop good knowledge management techniques. We have to be able to document all the experiments that we’re running. We have to be able to capture interview snapshots and archive them, so that when we remember we interviewed this person who had this exact problem, we have a way to go back and find all the interviews where it came up.
And this may sound like it’s impossible to do, but I can tell you that I work with some really big companies in the United States, in old industries like banking and insurance, and they are doing this. And they are doing it really well.
So, I just want to leave all of you with this: We have made a ton of progress in the last 16 years, and I really believe that five years from now most of us are going to be talking about smaller size research done week over week by the team building the products, and our products are going to get significantly better.
So this is what I want to leave you with. I am online and easy to find. Please if you have questions and you want to keep the conversation going, I think about this literally all day, every day, and I would love to talk to you about it.
Thank you very much.
[00:36:44] [END OF AUDIO]
Jonathan Rogers says
Wow amazing post – my team and I are busy learning and applying many of the techniques you reference but it can be overwhelming knowing which to apply when.
Your opportunity solution tree really helps to clarify this and provide a long term framework against which we can evaluate our progress and any new methodologies.
Teresa Torres says
Jonathan,
I’m glad to hear it. Let me know how it goes as you start to play with it.
Teresa
Thomas Blum says
Sweet.
Danny Henley-Martin says
Teresa, I have been struggling to find something like this for a long time. I’ve always been getting stuck with helping people understand the difference between Product Discovery and Inception. And finally you’ve explained it in such a clear way. Thank you so much!
Will be following you for more insights.
Teresa Torres says
Thanks, Danny! Glad to hear it.
Danny Henley-Martin says
No worries.
Actually can I ask, do you normally recommend product teams to use a Product Canvas of sorts to summarise the product post discovery?
Cheers,
Danny
Teresa Torres says
Your question implies that discovery ends. Discovery should be continuous. There is no point where the product is post discovery. I don’t see any issue with using a product canvas to summarize any given point in time, but it should never be considered done.
Danny Henley-Martin says
Ahh yes…I’ll take this forward as a continuously evolving Canvas. I guess it would be the same as the customer experiment results you record, customer Personas etc.
Just one final question, Product Inception, what do you see this as?
Often I hear people use Inception and Discovery interchangeably.
Teresa Torres says
I don’t view inception as a discovery activity, but rather as the kickoff to a chunk of delivery work. It’s a great way to make sure that the team is aligned around what was learned in discovery and around what they are committing to in delivery. But it’s definitely not synonymous with discovery.