The Lean Startup has a flaw.
It’s a simple one.
It advocates the feedback loop: Build -> Measure -> Learn.
I agree wholeheartedly with this loop. The flaw is in where you should start.
I prefer: Learn -> Build -> Measure
It’s a subtle difference, but it’s an important one.
The Hidden Cost of Writing Code First
If you build first and then learn, you end up writing code that doesn’t matter.
Some of your experiments are going to fail. This means you are wasting engineering effort building features that don’t work.
That might work if you are an engineer or if you have an abundance of engineering resources.
But at most companies engineering is a scarce resource.
Even when engineering isn’t a scarce resource, building first introduces another problem.
What do you do when a feature doesn’t harm your metrics but it also doesn’t add value?
A good product manager would remove it. Less is more.
But this can be hard to do in practice.
You become attached to new features. Even when they offer little value. Even when nobody is using them.
It’s easy to convince yourself someone might use it someday.
Now the cost isn’t just engineering time, it’s also product complexity.
You have another feature that your customers need to learn, that needs to be integrated into the user experience, that marketing needs to promote, that engineers need to maintain.
We know that less is more. We’ve learned it over and over again.
But as we invest in our ideas, we become more committed to them. Even when they don’t work. This makes it hard to remove underperforming features.
The Counter-Intuitive Benefits of Acting as If You Have No Engineers
A better approach is to identify which features will deliver value before you build them.
Run your experiments before you write a line of code.
Act as if you have no engineers. If all you have is an idea, how might you know if that idea is worth pursuing?
If your goal is to test the idea itself, it might be hard to design an experiment without writing code.
But your idea is dependent upon a series of assumptions. You can test those assumptions without building the feature.
The more assumptions you test before building the feature, the more likely the feature will work when you do build it.
To get started, ask yourself, “What assumptions have to be true in order for this idea to work?”
Suppose you are responsible for video integration in the Facebook newsfeed and you have an idea about auto-playing the video once the visitor scrolls to it. You could build this functionality and then user test it, but you would be better off examining your assumptions.
What has to be true for this idea to work?
- People want to watch the videos in their newsfeed.
- People want to watch the videos in their newsfeed right away.
- For people who don’t want to watch the videos in their newsfeed, it won’t be too much of a bother to stop an auto-playing video.
You might notice something about this list.
Assumptions are actually hypotheses in disguise.
You can test the first two assumptions by looking at your usage data. How many people play videos in their newsfeed? What percentage of their videos do they play? How long after scrolling to the video do they push play?
Be sure to draw lines in the sand before you rush off looking for data. It will help to focus your data analysis and it will help to prevent bias.
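A line in the sand is easiest to honor if it is written down before you run the analysis. As a minimal sketch (the event log, metric names, and thresholds here are all hypothetical, not Facebook data), pre-registering your thresholds in code might look like this:

```python
# Hypothetical sketch: pre-register success thresholds ("lines in the sand")
# before looking at the data, then check the usage numbers against them.
# Every record and threshold below is invented for illustration.

# Decide the thresholds up front, before any analysis, to limit bias.
THRESHOLDS = {
    "play_rate": 0.50,           # share of newsfeed videos that get played
    "immediate_play_rate": 0.80  # share of plays started within 3s of scroll
}

# Toy newsfeed events: (video_id, played, seconds_from_scroll_to_play)
events = [
    ("v1", True, 1.2),
    ("v2", True, 2.0),
    ("v3", False, None),
    ("v4", True, 9.5),
    ("v5", True, 0.8),
]

plays = [e for e in events if e[1]]
play_rate = len(plays) / len(events)
immediate = [e for e in plays if e[2] is not None and e[2] <= 3.0]
immediate_play_rate = len(immediate) / len(plays)

print(f"play rate: {play_rate:.0%}")                      # play rate: 80%
print(f"immediate play rate: {immediate_play_rate:.0%}")  # immediate play rate: 75%
print("assumption 1 holds:", play_rate >= THRESHOLDS["play_rate"])
print("assumption 2 holds:", immediate_play_rate >= THRESHOLDS["immediate_play_rate"])
```

With these made-up numbers, assumption 1 passes and assumption 2 narrowly fails, which is exactly the kind of result that is easy to rationalize away if the 80% bar wasn’t committed to in advance.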
If these assumptions hold true, you can move on to the third assumption.
If you find that most people (say 80%) watch most videos right away, you might be tempted to conclude that auto-play is a good idea. But you would do well to first understand the use cases where people aren’t watching videos right away.
For example, if people aren’t watching videos right away because they are sneaking in a quick Facebook break during a boring meeting, auto-play might be disastrous.
If this is the case, the pain of auto-play for the 20% who don’t watch videos might outweigh the benefit for the 80% who do.
How might you find out why people aren’t watching videos right away? You could:
- Interview users and ask them when and where they use Facebook.
- Observe people using Facebook around you. How many are in a shared space without headphones?
- Survey people to understand their preferences on auto-playback.
- Conduct usability studies on a similar product that already includes auto-playback.
Notice how none of these options involve writing code. And yet they all help you collect data about whether or not your idea is worth pursuing.
This is what happens when you test assumptions instead of ideas. Ideas can be hard to test without writing code. But often you already have the data or can quickly design an experiment to test the underlying assumptions.
Surface your assumptions and do the work to test them before you write code.
Slow Down to Go Faster
This process is going to feel slow. Too slow.
You are going to get antsy. You are going to want to start writing code.
For any single idea, it’s going to feel faster to just build it. If it only takes a week of development, why would you spend a week or two experimenting before you build?
It doesn’t just cost a week or two of development.
It costs a lifetime of maintaining it.
It costs the learning curve for your customers to adopt the feature (or to ignore it).
It takes up pixels in your user interface.
And even when it doesn’t work, it’s going to be impossibly hard to remove.
But there’s a more important reason.
Expand your scope beyond one idea. Consider ten ideas. Should you build all of them?
Odds are only two or three are going to work.
Now you are building seven or eight features that won’t have an impact, that you’ll have to maintain, that will fill up your user interface, and that will burden your customers.
It’s much better to run ten experiments before you write any code and only build the two or three ideas that actually worked.
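To make the arithmetic concrete, here is a back-of-the-envelope sketch. Every number in it (weeks per feature, experiment length, hit rate, maintenance cost, horizon) is an invented assumption for illustration, not measured data:

```python
# Hypothetical comparison: build all ten ideas vs. experiment first and
# build only the winners. All numbers are illustrative assumptions.

ideas = 10
winners = 3                       # ideas that actually move the metrics
dev_weeks_per_feature = 1         # cost to build one feature
experiment_weeks_per_idea = 1.5   # "a week or two" of experimentation
maint_weeks_per_feature_year = 1  # ongoing maintenance per shipped feature
years = 3                         # horizon over which features are maintained

# Build-first: ship everything, then maintain everything.
build_first_total_weeks = (
    ideas * dev_weeks_per_feature
    + ideas * maint_weeks_per_feature_year * years
)  # 10 + 30 = 40

# Learn-first: experiment on everything, build and maintain only the winners.
learn_first_total_weeks = (
    ideas * experiment_weeks_per_idea
    + winners * dev_weeks_per_feature
    + winners * maint_weeks_per_feature_year * years
)  # 15 + 3 + 9 = 27

print(build_first_total_weeks, learn_first_total_weeks)  # prints "40 27.0"
```

Notice that even under these made-up numbers, the upfront cost favors building (10 weeks vs. 18), which is exactly why learn-first feels slow; it is the maintenance tail on the seven losing features that flips the total.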
Are you interested in improving your experimentation skills? Subscribe to the Product Talk mailing list to get new articles delivered to your inbox.
rcauvin says
This topic really goes back to the debate about what constitutes an MVP. Eric Ries defined “MVP” as follows:
Note that “product” is in quotes, implying that it may not be a full-fledged product built with code. A “product” built with code and put in the hands of customers forms the basis for one form of experiment. You can “build” experiments without code, however.
Indeed, some lean startup practitioners have suggested that a landing page for an experiment could be an MVP. Other practitioners have insisted that an MVP should be more ambitious and “fully baked”.
We can debate who’s right, but if you look at Eric Ries’ quote, I don’t think it’s entirely fair to portray lean startup methods as jumping prematurely into coding an MVP.
I see a different but related flaw in customer development and lean startup methods. A hint of this flaw is at the end of my recent blog entry on design thinking, and I plan to elaborate on it in a future blog entry.
Teresa Torres says
Hi Roger,
Yes, I really should have written that there is a flaw in the way many people interpret The Lean Startup, as I think the intent of the loop is spot on. If you focus on many rapid cycles through the full loop, it doesn’t really matter where you start.
This post was in response to the many questions I get from companies who think they can’t experiment because their engineering teams won’t support it. So I wanted to emphasize that you don’t need engineers to experiment.
I look forward to your future posts. For my Master’s I’ve done research on design thinking and what it is that designers do that is different. I’ll blog about it eventually.
Teresa
Keith Gillette says
Hi Teresa. Debbie Madden just made a similar argument to start with “Learn” instead of build in the last section of her article “When and How to Build an MVP”.
I had a similar reaction to the Lean Startup cycle when I first read Ries. I came at it from the Deming/Shewhart Plan-Do-Study-Act quality management cycle and thought it odd to start with the “Do” (Build) without any planning or learning to inform it.
Roger makes a very good point regarding exactly what constitutes an MVP (a term that I think may be more confusing than helpful), or in terms of the Lean Startup cycle (and perhaps more usefully), exactly what the “Build” phase is building. If we’re talking about constructing an experiment instead of coding a product, then starting with Build is probably less problematic.
In the end, I think the Build-Measure-Learn/Learn-Build-Measure cycle is less useful than PDSA, which in itself leaves out a number of implied steps but at least starts at a more logical place. In PDSA, you Plan an experiment to measurably test a hypothesis, you Do the experiment (building whatever is required), you Study the results, and you Act on the learning. If we were to really articulate the relevant elements of the cycle fully, I think it would be something much clunkier like:
Ideate (What’s my initial impetus?)
Surface Assumptions (What has to be true for my idea to be right?)
Create Hypotheses (How can I state those assumptions such that they could be disproven?)
Design Experiment (How do I disprove the hypothesis?)
Build Experiment (Write a script, make a landing page, build an MVP)
Execute Experiment (Talk to people, buy AdWords, put the MVP in front of people)
Study Results (Has my hypothesis been disproven? What have I learned?)
I’ll work on making those steps into some catchy acronym. 😉
Teresa Torres says
Hi Keith,
I think Eric Ries is responding to the over-planning that often happens before you really know anything and to the vast amount of poorly conducted market research. I don’t disagree with his criticisms and his focus on action. But we are starting to swing too far in the other direction. We should build based off an insight and sometimes we need to learn to uncover an insight.
What he got 100% right is that iterations through the loop should be fast. And given that, quibbling about where you start is almost a moot point. I just don’t like that some people interpret it to mean you don’t have to understand your customer. So I’m just trying to provide the counterpoint.
I also don’t like that people are using a lack of engineering resources or the inability to make product changes as a reason for not learning. That’s just downright silly.
Tristan Kromer says
Kent Beck made this point at the very first Lean Startup Conference. I think it is spot on!
Unfortunately the talk was hosted on Justin.tv which has gone down. I have my summary here: http://grasshopperherder.com/build-measure-learn-vs-learn-measure-build/
Teresa Torres says
Thanks, Tristan. Great write up. I particularly like hypothesis -> metric -> experiment.
Tristan Kromer says
That’s funny, I like Beck’s formulation, but I don’t like saying hypothesis anymore. We tend to use this term interchangeably with assumption. They are different.
A hypothesis is well defined and falsifiable; an assumption is vague and tends to be subject to heavy confirmation bias. We can test a hypothesis with an experiment. We need to clarify assumptions with generative research, not experiments.
Teresa Torres says
Isn’t that an argument for using the term hypothesis instead of assumption when our goal is to run good experiments?
Teresa Torres says
I missed the bit about generative research on the first read. Are you suggesting that you research an assumption and run an experiment to test a hypothesis? In that case, I see your point. However, I’d argue that in most cases a product manager benefits from defining their assumptions in the form of hypotheses when possible, as it helps to get clarity around what you think.
Tristan Kromer says
Yes, research assumptions to clarify into hypotheses and then experiment on hypotheses. Forcing experiments around vague assumptions like “If we put up a landing page, some people will click on the signup button” leads to lots of false positives on incredibly badly defined experiments with no fail condition.
Basically, we all suck at writing hypotheses and forcing ourselves to a hypothesis without doing the basic research to generate enough clarity seems to result in a lot of silly experiments. Might work for some people, but as a general rule of thumb, I have not seen a whole lot of success around forcing hypotheses.
Teresa Torres says
Now that I agree with entirely. Writing a good hypothesis is a skill (see The 5 Components of a Good Hypothesis) and most people need to invest in learning it. It’s a skill product managers will have to keep developing as they do more research and experimentation.
henebb says
Hi!
Great post! I agree! I think it’s even possible to start *anywhere* in that loop?
Perhaps Eric Ries saw it from the startup point of view? If you have nothing to start with and you’re thinking about a new (software) product, perhaps you have to start at the Build step? Well, you could perhaps do market research etc, but you won’t learn much until you really have something (product/MVP) to learn from?
Thanks!
Teresa Torres says
Hi henebb, welcome!
It’s this statement: “but you won’t learn much until you really have something (product/MVP) to learn from?”
that I think is the problem.
You can and should learn a lot before you write code. Coding is an expensive way to learn. You can learn a lot by talking to a couple of customers and you will likely learn that what you intended to build isn’t exactly right.
I’m less worried about what Eric Ries intended and more concerned with how The Lean Startup is interpreted. Eric is a bright guy who has spent a lot of time thinking about this. Most practitioners are busy professionals who don’t spend a lot of time thinking about this. If, at first glance, they take Eric’s advice at face value, starting with build is a costly first step. In most cases, they would be better off learning a little bit first.
However, as mentioned in an earlier comment, the key really is to move through the loop as fast as possible so that you maximize iterations and learning. You can spend too much time talking to customers without ever building anything, which is just as bad as building too much before you learn anything.
henebb says
Thanks for the reply!
I think I get it now 🙂 You’re right, it’s not about how Eric defines it. I think I just misinterpreted you when you said “it has a flaw” 🙂 I read it as a bit “mean”, sorry 🙂
I also think you’re right that you shouldn’t start with writing code; there are cheaper ways to see if you’re on to something (as you say). But just as someone said in a previous comment, “Build” doesn’t have to be code, it could be anything. In my opinion, the “Build” part is the notion that it’s in the *doing* (whatever “doing” is) that we learn. We can’t really learn until we’ve actually done something. And to do something means “building” something. “Building” can be anything, e.g. setting up the manual (or whatever) of what you want to “talk to a couple of customers” about.
Thinking gets you far, but not as far as when you really get out there and do something. But you’re right, “building” doesn’t equal “coding”. And that may be misinterpreted..? (Even though it says “Build” and not “Code”). That’s my two cents 🙂
Kind regards, Henrik
KP Karthik says
Teresa / Tristan – I think the PDSA theory mentioned doesn’t differ much from the “hypothesis -> build experiment -> learn” method, but you seem to believe it’s different and better.
It would help if you can explain more about your views on PDSA.
Warm Regards,
Karthik
Product Manager
frilp.com
Teresa Torres says
Hi Karthik, in your comment what does PDSA refer to?
KP Karthik says
Thanks for the response, Teresa. I have been following your writing for quite some time; every article of yours opens up my thought process further.
With regards to my comment, PDSA is the plan-do-study-act theory: https://deming.org/theman/theories/pdsacycle
Sorry, my bad – PDSA was mentioned by “Keith Gillette”, not by “Tristan”, at the top of this thread.
Teresa Torres says
I’m not sure that I would argue that the underlying structure of PDSA and hypothesis -> experiment -> learn are fundamentally different. But the different language certainly leads to different interpretations and thus different applications.
I think the common structure is an iterative process of induction and deduction where you start with a series of facts or data points, through induction you generate a theory that you then verify through deduction.
In the Plan -> Do -> Study -> Act model, I suspect your plan is based on some facts / data points. You take action based on an inductive theory of the situation. You study the outcome, deducing whether or not you got an expected outcome and then you act again.
With hypothesis -> experiment -> learn, your hypothesis is induced from a set of data points, you design an experiment to deductively test your hypothesis, and you learn from the results.
I think what gets most misinterpreted in both models is that people skip the deductive test, and thus they act based on their first (or sometimes second) instinct instead of acting on a theory they have verified through deduction.
Thomas says
Hi, I’ve just read https://hackernoon.com/continuous-design-dba3e3b9eff1 where your article is linked to. I’m interested to know if you still have the same view on learn-build-measure?
Teresa Torres says
I do. I also agree with continuous design. By learning first, I’m not suggesting that we spend weeks researching before we build anything. But if you start with building, you are going to build the wrong thing more often than not. We can learn fast, design fast, and build fast continuously. But if you just build your first idea in order to learn, you are going to waste a lot of time.