When you’re doing product discovery, there are a variety of methods you can use to de-risk your product bets.
Some approachable methods include customer interviews, research/analysis, and surveys. (To dive deep into each of these, check out my Customer Driven Guide.)
These methods are effective ways to gather evidence around what customers want and what their journey to your solution looks like. Plus, web-based research, surveys, and interviews all help you identify valuable opportunities, create product one-pagers, and form strong hypotheses on a routine basis.
For smaller bets, these methods may even give you enough confidence to move on to delivery. But for big, risky bets, you often need additional evidence and discovery work to answer, “is this worth pursuing?”
John Cutler, Product Evangelist at Amplitude, frames new features and products as “bets.” Some bets are small and low-risk. Others are large and high-risk. Via https://medium.com/@johnpcutler/great-one-pagers-592ebbaf80ec.
The trouble is, it’s not always clear which method you should use to de-risk your bet.
Prototypes and MVPs: the intensive end of the discovery spectrum
What are prototypes?
Some of the more time-consuming discovery options out there are prototypes and MVPs. But these two concepts are widely abused, so it’s worth defining them before we go any further.
Teresa Torres has a fantastic answer to this question on her Product Talk blog. She defines a prototype as something that “simulates an experience, with the intent to answer a specific question, so that the creator can iterate and improve the experience.”
Prototype: a discovery method that tests one aspect of an experience in order to answer one specific question
Many founders assume prototypes only address technical feasibility. But this isn’t true. In fact, most early prototypes won’t test technical feasibility because that’s rarely your riskiest assumption. Torres points out, “we can also prototype to test desirability. The car industry uses concept cars to assess the desirability of new models. Lean startups use landing page tests to assess the desirability of their services before they commit to building them.”
Jeff Hawkins, the founder of Palm Computing and Handspring, carried around a wooden block in his pocket for weeks. This helped him test the ideal size for the initial PalmPilot.
Via Computer History
The founders of McDonald’s drew multiple kitchen layouts for their first restaurant in chalk on a tennis court to test how efficiently employees could move around.
Prototypes don’t have to be code, and they don’t have to test technical risks. They can be a block of wood, a drawing in chalk, a landing page, or a clickable design that tests one part of an experience to answer one specific question.
What are MVPs?
On the Y Combinator blog, Yevgeniy (Jim) Brikman defines an MVP as “a process that you repeat over and over again: Identify your riskiest assumption, find the smallest possible experiment to test that assumption, and use the results of the experiment to course correct.”
Torres takes a similar stance: “An MVP is not the smallest or the easiest product you can get out the door. It’s the smallest or easiest product you can release to learn what you need to learn.”
If this is sounding very similar to a prototype, that’s because it is. An MVP is an expensive prototype, but it’s a prototype nonetheless. It exists to test one specific, risky assumption so you can de-risk a future product build.
An MVP is not:
- Version 1.0
- A release you plan marketing activities around (you don’t know if it’s viable)
- A product that takes 3+ months to construct
- A product that requires a technical co-founder to build
It’s not even a “product” in the traditional sense at all; it’s a form of prototype.
The purpose of an MVP is the same as that of any other discovery method: to learn fast, de-risk key assumptions, avoid massive opportunity costs, and get to market quickly. You don’t need forgot-password features, login screens, or other peripheral elements (items most so-called “MVPs” include) to achieve that purpose.
MVP: a robust prototype that tests one aspect of an experience in order to answer one specific question; an expensive discovery method that de-risks a key assumption
But how do I know which discovery method to use?
While this isn’t often clear-cut, there is a process you can use to make a smart choice.
First, start with the question you’re asking. What kind of data do you need to answer that question? Quantitative or qualitative? And then: which prototypes could supply that data? Make a list.
Next, take a hard look at your constraints. Let’s start with two big ones: time and money. If you plot most of the available discovery methods on a time/expense chart, you’ll wind up with something like this:
Already, you may be able to throw out half the options.
To narrow this down even further, you’ll want to consider other key factors: your level of uncertainty, the risk involved, and the size of the potential opportunity.
Generally speaking, the higher these three are, the more robust your discovery method should be, and vice versa. If the opportunity is worth millions of dollars, invest in steps beyond a survey; if creating an MVP would cost more than the opportunity is worth, don’t build an MVP (and maybe pick a more meaningful opportunity!).
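To make that tradeoff concrete, here’s a minimal sketch of the filtering logic in Python. The method names, costs, durations, and “evidence strength” scores are placeholder assumptions for illustration, not figures from this article; swap in your own estimates.

```python
# Illustrative sketch only: the method names, costs, durations, and scores
# below are placeholder assumptions, not figures from this article.

CANDIDATE_METHODS = [
    # (name, rough cost in dollars, rough time in weeks, evidence strength 1-5)
    ("survey", 500, 1, 1),
    ("customer interviews", 2_000, 2, 2),
    ("landing page test", 3_000, 2, 3),
    ("clickable prototype", 8_000, 4, 4),
    ("MVP", 50_000, 12, 5),
]

def shortlist_methods(opportunity_value, risk_level, budget, weeks_available):
    """Return discovery methods that fit your constraints and evidence needs.

    risk_level runs from 1 (small, low-risk bet) to 5 (big, high-risk bet):
    the riskier the bet, the stronger the evidence you want before delivery.
    """
    viable = []
    for name, cost, weeks, strength in CANDIDATE_METHODS:
        if cost > budget or weeks > weeks_available:
            continue  # constraint check: time and money
        if cost >= opportunity_value:
            continue  # discovery shouldn't cost more than the bet is worth
        if strength < risk_level:
            continue  # big, risky bets need more robust evidence
        viable.append((name, cost, weeks))
    # Prefer the cheapest, fastest option that still clears the evidence bar.
    return sorted(viable, key=lambda m: (m[1], m[2]))

# A $1M opportunity, high risk, with $20k and six weeks to spend on discovery.
print(shortlist_methods(opportunity_value=1_000_000, risk_level=4,
                        budget=20_000, weeks_available=6))
# -> [('clickable prototype', 8000, 4)]
```

The point isn’t the specific numbers; it’s that your constraints (time, money) and your evidence bar (uncertainty, risk, opportunity) should both prune the list before you pick the cheapest remaining option.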
How to run a great test, whichever discovery method you use
Once you’ve selected a prototype, you’ll need to do something with it. And when you go to test it, your approach matters at least as much as your method.
Overall, make sure you’re not treating discovery as idea validation—that’s a dangerous trap. Idea validation starts with an idea and only looks for confirmation. One of the biggest dangers of this approach is the aptly named confirmation bias: interpreting new evidence as proof of your existing theory.
Instead, you want to treat discovery more like an experiment, which is exactly why we start with a hypothesis and not an idea. Like a scientist in an experiment, you’re looking to gather any and all related evidence—not just the kind that proves your theory right.
Here are some other principles you can apply for best learning results (a rough test-plan sketch follows this list):
- Start with a worthwhile hypothesis: Make sure you’re spending time on something that matters to your business. If you’re right, what’s the big impact?
- Test ONE assumption: Test one—and only one—specific assumption at a time. Be very clear about what you’re trying to learn.
- Test with the right people: Your hypothesis identifies who will see a benefit if your assumption is correct. Make sure the participants in your discovery method line up with the people in your hypothesis.
- Define pass/fail: How will you know if your discovery test succeeds or fails? And what will you do if the prototype supports, refutes, or gives no clear indication for your hypothesis?
- Know what data you need: Some questions are better answered with quantitative data. Others with qualitative data. Do you need to answer why or which one? Specify what type of data will best answer your question and how you’ll collect that data.
- Set a strict time boundary: How long will you test the prototype before assessing results? Remember, one of the goals of a prototype is to help you get to market quickly.
- Stick to the plan: Don’t introduce new variables during the test time period. If you do, you’ll have a very difficult time determining why your test played out the way it did.
- Be humble: Pay attention to all the data (don’t cherry pick the results you like) and have the integrity to admit when you’re wrong. Remember, you’re not validating a brilliant idea; you’re gathering evidence around a product hypothesis.
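One way to hold yourself to these principles is to write the whole test plan down before you run anything. Below is a minimal sketch of what that record might look like in Python; the field names and example values are illustrative assumptions, not a template from this article.

```python
# Hypothetical test-plan template: field names and example values are
# illustrative assumptions, not prescribed by the article.
from dataclasses import dataclass, field

@dataclass
class DiscoveryTestPlan:
    hypothesis: str             # a worthwhile hypothesis: what's the big impact if you're right?
    assumption_under_test: str  # one (and only one) specific assumption
    target_participants: str    # the people who must benefit for the hypothesis to hold
    pass_criteria: str          # how you'll know the test succeeded
    fail_criteria: str          # how you'll know it failed
    data_needed: str            # quantitative or qualitative, and how you'll collect it
    time_box_weeks: int         # strict boundary before you assess results
    frozen_variables: list = field(default_factory=list)  # things you commit not to change mid-test

# Hypothetical example of a filled-in plan for a landing-page prototype.
plan = DiscoveryTestPlan(
    hypothesis="If busy parents can reorder groceries in one tap, weekly retention rises",
    assumption_under_test="Parents want one-tap reordering badly enough to sign up",
    target_participants="Parents who grocery shop online at least weekly",
    pass_criteria="At least 10% of landing-page visitors leave an email address",
    fail_criteria="Under 3% leave an email, or interviews surface no reorder pain",
    data_needed="Quantitative sign-up rate, plus qualitative follow-up interviews",
    time_box_weeks=2,
    frozen_variables=["landing page copy", "target audience", "price point"],
)
```

If a field is hard to fill in before the test starts, that’s usually a sign the hypothesis or the assumption isn’t specific enough yet.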
And one last note: avoid running haphazard tests and experiments. If the prototype is worth testing, the test is worth carefully constructing!