Table of Contents
- What is Customer Discovery?
- Why do Customer Discovery?
- Who does Customer Discovery?
- What are my Persona & Job-to-be-Done Hypotheses and how do I test them?
- What is my Demand Hypothesis and how do I test it?
- What is my Usability Hypothesis and how do I test it?
- What is the Growth Hypothesis and how do I test it?
- Appendix: Summary Table
- Appendix: Creating Effective Screeners
What is Customer Discovery?
Customer Discovery is a catchall term for doing just enough research and testing to make sure you’re not running off and building something no one actually wants. Essentially, it covers the areas you see here and answers the questions you see asked under each area:
After reviewing this guide, you will be able to:
- Identify the right discovery questions to focus on as your project evolves
- Answer those focal questions with effective, actionable discovery work
- Present the above to your team and stakeholders for buy-in
Why do Customer Discovery?
If you’re at all familiar with innovation practices like design thinking and Lean Startup, you’re probably familiar with the idea that you sometimes have to be on a learning mission and other times you have to be on a more traditional scaling mission. This guide is about doing that learning.
Riddle me this: What’s the difference between revenue and research? The answer is that you can’t have too much revenue. But, yes, you can have too much research. Worse still, you can do the right kind of research at the wrong time and have it be worthless, impeding your ability to do research and testing in the future.
Most of my work as a product person and as a professor revolves around a practice called Hypothesis-Driven Development (HDD). Basically, the objective of HDD is to help teams think about everything they’re doing as a set of intentional experiments- everything from design to development to release/operation to closing the loop with customer analytics.
Crucially, part of the practice is also to consider them in small batches on agile cadences of around one week. So, instead of spending a lot of time making big decisions, the team unpacks and sequences their ideas in small batches to maximize velocity and minimize waste. Customer Discovery is a big part of this.
Customer Discovery falls under the area of ‘Continuous Design’, which I like to unpack as having the four major hypothesis areas you see here:
These areas are sequenced to help teams learn just the right thing at just the right time. The sections below step through these areas of practice and add to them practice around marketing or growth with a section on your ‘Growth Hypothesis’.
Who does Customer Discovery?
The short answer is: anyone who wants to take responsibility for minimizing wasted development and (thereby) maximizing the chances for big wins. You might be a product manager, you might be an area lead, you might be a consultant. It doesn’t really matter. While there’s a skill set you’ll need to develop, I’ve created this guide so it’s (I hope) accessible to anyone with this intention.
What are my Persona & Job-to-be-Done Hypotheses and how do I test them?
What is it?
A persona is a humanized view of your customer which allows you to create testable ideas about how they’ll behave (that’s the hypothesis part). There isn’t a fixed format, though you can see what I consider a good example here: Example Personas. Here’s the key thing, though: Personas are more of an approach to answering questions than they are a discrete item. You could improvise one in a hallway conversation about a customer-related topic, but there’s no single/permanent ‘done’ point or perfect persona. All that said, there is a pretty reliable process for focusing your questions and getting effective answers. That’s what we’re going to cover.
Closely tied to the concept and methodology of testing personas is the idea of jobs-to-be-done (JTBD). These are statements of need or desire that are not solution-specific. For example, you could have a JTBD of ‘hanging a picture’ or ‘an afternoon pick-me-up’. From there, you would look at alternative ways a persona might solve that problem- a fixture or a Diet Coke, for example.
A JTBD hypothesis generally has to do with learning what items are top of mind for a given persona in a given area of interest and what alternatives they’re using to solve that problem. For example, you might have this JTBD hypothesis: ‘Finding documentation is an important job for HVAC technicians’. You might also hypothesize about the alternatives they’re using: ‘Currently they Google for the documentation they need. This requires substantial time because it takes several tries to find the right document.’ Both of these statements are testable with the methods below in the How section.
Together, personas and JTBD help teams formulate innovation-friendly ideas and testing about who their customer is, how they’ll behave, and what jobs/desires are worth addressing.
When should I use it?
I would say that you should always be operating on a solid foundation of personas and JTBD. Without them, even something like concept testing Lean Startup style will lack focus and maximum testability.
Let’s say you’ve skipped over personas and you just want to test demand for dog washes in Centerville- that’s the next hypothesis area we’ll cover (Demand Hypothesis). You create an ad on Google AdWords, something like ‘The fastest dog wash ever! Right here in Centerville’ and you show it to users who search for things like ‘dog wash Centerville’. You get a super low click-through rate. What then? You could try some new variations, but personally I’d want to make sure I’d had some interviews with my target persona to make sure the proposition solved a problem/desire they have and that I’m using the same kind of language they use to talk about dog washes. I’d also want to make sure they search for things like dog washes on Google.
Without a working set of personas and JTBD, the issues you’ll run into actually building and promoting product are likely to be even more serious. Additionally, I find that personas and JTBD are invaluable for collaborating across functional areas. For example, they’re a great tool for product managers to discuss and test what they’ve learned about product/market fit with marketers who are working to amplify that product/market fit.
How do I execute?
Basically, you draft, discover (via subject interviews), and apply your personas in a continuous cycle of learning about your customer. The process below starts with a ‘persona question’, which is basically a question or questions about how your customer will behave in a certain situation. These might pertain to adding functionality to a product or initiating a new marketing campaign. You execute and close with a tested answer.
Work you’ve done previously will need periodic refreshing, but you’ll find over time that your understanding of the customer and ability to execute on the basis of that understanding becomes more functional, improving your innovation capabilities. In terms of organizing to do this work, design sprints are a great tool- those are one-week iterations.
There’s a list of resources below with more on that and a few other items. Good luck and I think you’ll find these really help your work.
Where do I go next?
Below is a list of items I thought you might find helpful. I sorted them from most immediate to most explanatory:
- Template for Persona Development and Subject Interviews
This is a Google Doc template: Interview Guide. The prior section is a template for the personas themselves.
- Tutorial on Personas & JTBD
For more depth on the use of personas (including examples) and the process above, check out: Personas Tutorial.
- Plan for a Design Sprint on Personas & JTBD
If you’re not familiar with the idea of a design sprint, it’s basically to take a design (or research task) and execute it in a one-week format. Here’s a guide to doing that for persona and JTBD discovery: The Problem Discovery Sprint.
- Online Course on Using Personas with Agile
For comprehensive learning on this in the context of an agile program, I can’t help but recommend my online course (on Coursera): Agile Meets Design Thinking.
What is my Demand Hypothesis and how do I test it?
What is it?
There’s a BIG (yes, that big) difference between this area and the last. You can ask questions in the right way and test your persona and problem hypotheses. However, you can’t ask a customer whether they’d like a product you’re thinking of building. They’ll always say ‘yes’. For a famous design story/legend about this, see Story of the Yellow Walkman.
Given that you can’t ask whether someone’s going to buy your product (or use your feature) directly, what do you do? That’s what the Demand Hypothesis is about: testing whether a customer is going to prefer your proposition over the alternatives they have for a given JTBD (without actually building and marketing a full offering).
In the last section, we talked about offering dog washes in Centerville. A simplified starter version of a related Demand Hypothesis might be: ‘If we offer Pedro the Professional a 10 minute dog wash, then he will buy it’. To actually test this, you’d probably want to decompose and detail it, but that’s the basic idea. For most Demand Hypotheses, I recommend this format: If we [do something] for [certain persona or segment], then they will [respond in a certain way].
Once we have a Demand Hypothesis (or hypotheses), we design an experiment to test it. Since we can’t just ask and we don’t want to invest the time and money to build a full product before we validate our hypothesis, we use a ‘minimum viable product’ or MVP.
You may be familiar with this concept from Lean Startup. The basic idea is that innovation is inherently risky- a well-run program has something like a 1 in 10 success rate with new products. Given this, how might you test whether a product’s going to be successful without building it all the way out? This is what successful innovators do- and by dramatically cutting the cost of testing a new concept they improve the economics of their innovation program.
When should I use it?
We’ve gotten this far together- it’s time to have a more serious talk about which one of these methods applies to which questions/hypotheses. There’s a lot of bad practice out there and I don’t want that for you, dear reader.
This is going to go over 140 characters and there will be graphs. If you’ve come here from one of my classes, you know I’m on a university faculty. However, please believe me when I say what follows is not an academic treatment. I’m also a two-time college dropout and all this is based on hard lessons learned from starting companies.
Below is a summary view of our first four hypothesis areas along with some example questions. I’ve overlaid them on Donald Norman’s diagram about finding the right problem vs. finding the right solution. For the example questions, I’ve used a company that’s exploring PaaS- ‘potatoes as a service’. This is a hypothetical service that ships you potatoes at a certain frequency (I know- hilarious, right?).
The idea is that before we go build something we should first know what job we’re doing (or problem we’re solving) and for whom. For this we have the Persona and JTBD Hypotheses. These are tightly related- I’d cover them in the same discovery interview with the same kind of open-ended questions. Ask a subject about the last time they bought potatoes and they’ll generally tell you the truth as best they can remember.
However, you can’t ask them hypothetically if they’d use your service- they’ll say ‘yes’, but that won’t mean anything. Likewise, if you show them a prototype of your potato service site, they’ll just say they like it. If you really want to know whether they want your potatoes, you have to ask them for money (or something of value proportionate to the interaction).
There’s nothing wrong with prototypes. If you’ve found there’s a certain buyer that can’t hand you money fast enough for this potato service (or just some segment/persona you can reliably sell to), then you should use interactive prototypes to test your Usability Hypothesis. However, at this point you should make sure you’ve validated your Demand Hypothesis because otherwise you might be creating a highly usable interface for a product no one wants- and I see that all the time (and it ultimately makes everyone sad).
Below is the graph I warned you about. In this area of ‘Finding the Right Solution’, I really like BJ Fogg’s curve as a way of thinking about the relationship between motivation (Demand Hypothesis) and ability/usability (Usability Hypothesis). If you imagine a point in the upper left, that’s a product your customer wants so bad they’ll use it even if the usability is terrible. If you imagine a point in the lower right, that’s a product that’s so low interest that your usability has to be asymptotically great.
The reality is that most product teams neglect testing motivation (the Demand Hypothesis) Lean Startup-style because usability work feels safer and more certain- but they can’t usability-test their way around weak demand, even if they’re great at usability. So, use this excellent handbook (or whatever works for you) to make sure you’re doing the right testing at the right time.
And now back to testing your Demand Hypothesis- because that’s super important.
How do I execute?
The easiest way to think through testing your Demand Hypothesis is tried and true: it’s the scientific method! Obviously, it’s a waste of time to test a bad idea (step 01). How do you get a good idea? If you guessed customer discovery and testing your persona and problem hypotheses, you hit the jackpot.
Once you’ve got a validated persona and a validated JTBD that’s important to them, draft your Demand Hypothesis (step 02). A great place to start is to design a value proposition relative to your persona and problem hypothesis:
A strong value proposition that’s worth testing is one that understands what it’s up against: the alternatives your customer (persona) is currently using to deliver on your target JTBD. For example, with our dog wash our persona is Pedro the Professional and the JTBD is keeping his dog clean. From interviewing subjects we know his top alternative is just washing his dog at home, but it’s messy and he never gets to it as often as he’d like.
Our value proposition is that we’re going to offer him a dog wash that’s convenient and affordable. What we’re hoping is that real people who fit that persona prefer our dog wash over washing at home. From there, we get to the Demand Hypothesis you saw above: ‘If we offer Pedro the Professional a 10 minute dog wash, then he will buy it’.
Following this, we need to design an experiment (step 03) and run it (step 04). The basic idea is that there needs to be some exchange of value to test the customer’s actual interest in your proposition. That could be as simple as having them sign up for an email newsletter if they come to your website from out of the blue. However, if you’re standing over their shoulder, signing up for an email doesn’t count. There are a number of established patterns for doing this testing (MVP types) that you can learn more about through the materials in the next section.
Finally, you’re arriving at what Eric Ries (author of The Lean Startup) calls the ‘pivot or persevere’ moment (step 05). Key to this is having definite thresholds for your experiments where you can specifically conclude whether you got a negative or a positive on your experiment. For example, let’s say you run some Google AdWords to test your Demand Hypothesis- the basic hypothesis being that if a customer clicks through, then they have some amount of interest in your value proposition. When you design the experiment, you’d want to set a click-through rate that constitutes a fail- say, below 3%. Pro tip: As part of your experiment design, write up the slides or email you plan to use when you present your results- begin the whole thing with that end in mind.
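If it helps to see the pivot-or-persevere arithmetic laid out, here’s a minimal sketch. The function names, the 3% bar, and the click counts are all illustrative assumptions- your own thresholds come out of your experiment design, not this code.

```python
# Sketch: evaluating a demand experiment against a pre-committed
# fail threshold. All numbers here are illustrative assumptions.

def click_through_rate(clicks: int, impressions: int) -> float:
    """CTR as the fraction of impressions that resulted in a click."""
    if impressions == 0:
        raise ValueError("need at least one impression to compute CTR")
    return clicks / impressions

def pivot_or_persevere(clicks: int, impressions: int, fail_below: float = 0.03) -> str:
    """Decide using the threshold you committed to *before* running the test."""
    ctr = click_through_rate(clicks, impressions)
    return "persevere" if ctr >= fail_below else "pivot"

# Example: 1,000 impressions, 18 clicks -> CTR of 1.8%, below the 3% bar.
print(pivot_or_persevere(clicks=18, impressions=1000))  # pivot
print(pivot_or_persevere(clicks=42, impressions=1000))  # persevere
```

The point of committing `fail_below` up front is that it removes the temptation to rationalize a weak result after the fact.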
The resources below provide more details on how to start testing your Demand Hypothesis.
Where do I go next?
Below is a list of items I thought you might find helpful. I sorted them from most immediate to most explanatory:
- Template for Designing Experiments to Test Your Demand Hypothesis
This is a Google Doc template: Testing Your Hypothesis. The prior section is a template for laying out your hypotheses.
- Tutorial on Lean Startup
For more depth on creating testable value propositions and designing experiments: Lean Startup Tutorial.
- Plan for a Design Sprint on Demand Testing
If you’re not familiar with the idea of a design sprint, it’s basically to take a design (or research task) and execute it in a one-week format. Here’s a guide to doing that for demand testing: The Motivation Sprint.
- Online Course on Hypothesis-Driven Development
For comprehensive learning on this in the context of an agile program, I can’t help but recommend my online course (on Coursera): Hypothesis-Driven Development (Course).
What is my Usability Hypothesis and how do I test it?
What is it?
Great news- this one is relatively easy to understand. You’re testing to see how well your customer can use a given interface element (or elements) to complete a given objective. While it does require some kind of working prototype (or working software), this is actually one of the easier types of testing to master.
Based on where you are in your project, you’ll design a set of appropriate tests to determine how well your customer is able to use a given interface to accomplish a given objective. The interface is ideally some kind of a quick, low-fidelity prototype in the early phases. In fact, many teams require that multiple divergent prototypes be tested for development of a given interface element. This is commonplace for teams at Google, as an example.
How about the objective? If agile user stories have been central to your development, you already have your objective: it’s the final clause of your various stories, ‘…so that I can [realize some reward/objective]’. If you haven’t been using agile user stories, I highly recommend them. Aside from keeping your ideas testable, they’re a great way to explore and detail the experience you want to provide to the user.
There’s a simple test design to make sure you can actually sit down with a subject and test, but really all that revolves around your user stories and prototypes.
When should I use it?
ABT: ‘Always Be Testing’ is a little slogan I like to use in class. Really, the key thing with this type of testing is to focus on the right thing at the right phase of development. The diagram below breaks this down into three generally-accepted (though not universal) phases:
In the Exploratory phase, parallel prototyping and testing in small batches is important. It’s much better to batch up a few users, test, revise your interface and plan based on what you learned, and then re-test- vs. just seeing the same thing over and over again. Most teams will use interactive prototypes of some sort (vs. working software)- and this allows anyone to mock up an idea. The idea is to push yourself (and your team) to consider a few approaches before you start investing in one. You can see the team at HVAC in a Hurry doing this in Example A of the Prototyping Tutorial. Use comparables to make sure you’re reusing existing, well-understood (by users) interface models.
In the Assessment phase, you pick an approach (or two) and articulate it into more of a fully scoped user experience. And you see how it goes. Here, testing in small batches is still important and you’re likely using prototypes, though they may be somewhat higher fidelity/more detailed.
The Validation phase is probably what most people think about when they think about usability testing. Here you’re testing working software and basically making sure that what you think is usable really is. Here you might actually do stuff like time a user to see how long a task takes. Those benchmarks are useful for comparing against your analytics once you actually release.
You may go through multiple rounds of each of these before you release something to the public/your base. The idea is to test in each phase until you get a positive result that suggests you should move forward.
How do I execute?
First and foremost, just go try it out! I recommend starting with strong user stories. Even if you’ve already developed something and released it, they’re a good way to go back and be explicit about what you’re trying to achieve for the user- and that will naturally help good testing happen.
Then draft your test plan- see item #1 in the section below. Finally, draft a prototype to test with (or use your working software). Many individuals like to start with prototyping because it feels more tangible and we all like that- but really the intent you establish with the stories is what should drive the design; not the other way around.
Where do I go next?
Below is a list of items I thought you might find helpful. I sorted them from most immediate to most explanatory:
- Template for Designing Experiments to Test Usability
This is a Google Doc template: Usability Test Plan Template.
- Tutorial on Running Usability Testing
For more depth on designing and running usability testing: Your Usability Test Plan.
- Plan for a Usability Sprint
If you’re not familiar with the idea of a design sprint, it’s basically to take a design (or research task) and execute it in a one-week format. Here’s a guide to doing that for usability testing: The Usability Sprint.
- Online Course on Testing with Agile
For comprehensive learning on this in the context of an agile program, I can’t help but recommend my online course (on Coursera): Testing with Agile.
What is the Growth Hypothesis and how do I test it?
What is it?
This hypothesis is about scaling product/market fit. The concept of product/market fit is a major part of how Silicon Valley operates and, unlike many business fads, it’s a pretty durable concept, particularly now that we’re in an economy that’s innovation-driven.
With this activity, you run incremental experiments to see how you can tune and amplify product/market fit. There isn’t a one-size-fits-all version of this, but for a team with this focus a minimum of one experiment per week is a general benchmark for strong practice. Teams I meet that do this well often have a Friday meeting where they discuss results and decide on what they’re going to test next.
When should I use it?
The basic idea is that with product/market fit you’ve innovated to a place where you can reliably sell a certain solution/offering to a certain customer on a highly repeatable basis.
In the terms we’ve been using, this means you’ve validated your Demand Hypothesis and you’ve looked after your Usability Hypothesis such that users aren’t running into trouble. Now your job is to scale what you’ve achieved before someone else beats you to it.
How do I execute?
Once you’ve got that basic product/market fit, it’s time to start observing, hypothesizing, and experimenting against some kind of customer acquisition funnel. I like the AIDAOR model (attention-interest-desire-action-onboarding-retention):
Why is that funnel so important? Because if you don’t break down the question/problem, you’ll almost certainly end up mired in confusion. Your funnel is your anchor point for both qualitative and quantitative data. I recommend starting with qualitative ideas since that will help you with the ‘why?’ and drive better hypotheses. One technique I like for that is storyboarding. You might have multiple takes on this, which is fine/great. Here’s an example from Enable Quiz, a fictional company that makes online quizzes that HR managers can use to screen engineering candidates:
For more on doing this, see the corresponding section of the storyboarding tutorial here- Storyboarding for Growth.
Now it’s time to form hypotheses. An example for attention might be something like:
‘If we deliver [a certain Google AdWords ad] against [a certain set of keywords], we’ll see a click-through rate of [x%].’ You might have a similar hypothesis later in the funnel around pricing or certain offers. You might have others for content marketing.
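To see the funnel math laid out, here’s a small sketch that computes stage-to-stage conversion through an AIDAOR-style funnel. The stage counts are made-up numbers for illustration- the useful habit is attaching a hypothesis to each stage-to-stage rate.

```python
# Sketch: stage-to-stage conversion through an AIDAOR-style funnel.
# Stage names follow the model in the text; counts are illustrative.

funnel = [
    ("attention", 10000),   # e.g. ad impressions
    ("interest",    400),   # clicked through
    ("desire",      120),   # viewed pricing
    ("action",       30),   # signed up
    ("onboarding",   24),   # completed setup
    ("retention",    18),   # still active after 30 days
]

def conversion_report(stages):
    """Yield (from_stage, to_stage, conversion_rate) for adjacent stages."""
    for (name_a, count_a), (name_b, count_b) in zip(stages, stages[1:]):
        yield name_a, name_b, count_b / count_a

for src, dst, rate in conversion_report(funnel):
    print(f"{src:>10} -> {dst:<10} {rate:6.1%}")
```

Breaking the funnel down this way tells you where to aim your next experiment: a weak attention-to-interest rate suggests a messaging test, while a weak action-to-onboarding rate suggests a product test.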
Another thing that I think is really important for doing growth with an interdisciplinary team- or with, say, a product team and a growth team that have a strong, functional interface- is having a fundamental view of how growth relates to product/market fit.
My favorite tool for that is the Growth Hacking Canvas:
It’s a lot of stuff- I know! But we’re moving toward a pretty firm realization that just like silo’ed product development vs. engineering doesn’t work (and so we need agile), silo’ed marketing doesn’t work either. For more on using the Canvas, check out this tutorial: Growth Hacking Canvas.
Growth is hard and in my experience, everyone thinks they’re the only ones that don’t get it and everyone else is killing it. It’s not true. Just take a disciplined approach, keep experimenting, keep observing, and you’ll get there.
Finally, I recommend a quick think on your ‘engines of growth’ (a term coined by Eric Ries). The proposition here is that there are three principal engines of growth and that a new venture knows which one is most important:
Viral– customers/users tell each other about the offer. Crucial here is some measurement of ‘viral coefficient’, the propensity of one customer/user to share in some fashion the offer with others. In this case, your ability to drive sharing/word-of-mouth is crucial.
Paid– you have a certain cost of customer acquisition based on the use of marketing and/or sales resources. Here, ascertaining the cost of acquisition and the value of a customer are key to understanding the validity of your unit economics.
Sticky– the lifetime value of customers is very high because the relationship will deepen over time. Here, testing your ability to retain and maximize the lifetime value of a customer relationship is key.
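As a rough sketch of how you might sanity-check two of these engines numerically- all inputs below are invented for illustration, and your own metrics will differ:

```python
# Sketch: back-of-envelope checks for the viral and paid engines of growth.
# All inputs are illustrative assumptions, not benchmarks.

def viral_cohort(initial_users: int, viral_coefficient: float, cycles: int) -> int:
    """Total users after n invitation cycles; a coefficient above 1
    means each cycle adds more users than the last (compounding)."""
    users = float(initial_users)
    new = float(initial_users)
    for _ in range(cycles):
        new *= viral_coefficient
        users += new
    return round(users)

def ltv_to_cac(monthly_margin: float, avg_lifetime_months: float, cac: float) -> float:
    """Paid engine: lifetime value vs. cost of acquisition.
    A commonly cited (rule-of-thumb) bar is a ratio of 3 or better."""
    return (monthly_margin * avg_lifetime_months) / cac

print(viral_cohort(100, 1.2, 5))      # coefficient above 1: compounding growth
print(ltv_to_cac(30.0, 18, 150.0))    # 3.6 -> unit economics look workable
```

For the sticky engine, the analogous check is retention: whether `avg_lifetime_months` itself is growing cohort over cohort.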
Where do I go next?
Below is a list of items I thought you might find helpful. I sorted them from most immediate to most explanatory:
- Template for Designing Experiments
This is a Google Doc template: Experiments Template. This is the same template we used for the Demand Hypothesis, but I think it works equally well for growth experiments.
- Tutorial on the Growth Hacking Canvas
For more depth on thinking fundamentally about growth and linking it to an interdisciplinary view of your product/market fit: The Growth Hacking Canvas.
- Tutorial on Storyboarding
The tutorial is here: Storyboarding Tutorial. The section ‘Storyboarding the Customer Journey (Growth Hacking)’ is specific to what you saw above.
Appendix: Summary Table
This is a summary table for review and reference.
| Hypothesis | Key Questions | Notes |
| --- | --- | --- |
| Persona Hypothesis | Do you know who the customer (or user) really is? What makes them tick? | This is an exercise in definition and discovery. New product teams often find they haven’t been granular enough. Teams working on existing products often find they actually have various personas using the product for different reasons. |
| Problem Hypothesis | Have you identified specific jobs/needs the persona actually has? Are those important? What alternatives are they using now? | Once you have an idea of who you’re looking at, you learn what they care about- the trick is not to ask them directly, because then they’ll just tell you what you want to hear. Tips and tricks below. |
| Demand Hypothesis | For your key jobs-to-be-done, is your value proposition better enough than the alternatives? | At this point, observations and interviews start to run into limitations. You really need to present your proposition in a way that’s relevant to your target personas. That does not mean you need a full working product- proxies and prototypes are much better so that you can see if you’re on a valuable course before you over-invest in building something no one/not enough people want. |
| Usability Hypothesis | Is the product easy to use? Are users using it in the way you intended? What does that mean? | Every (successful) product remains a work in progress. Personas are a great way to anchor and focus your understanding of the relationship between user and product. |
| Growth Hypothesis | Do you have a profitable recipe to acquire and retain customers? How can you make that even better? Scale it? | Once you have the fabled ‘product/market fit’, meaning you have an adequately sized population where you can reliably sell what you have, then you’re looking to optimize and scale your recipes for creating and maximizing customer relationships. |
Appendix: Creating Effective Screeners
It’s not hard to spend 45 minutes with a subject only to realize they’re really not the subject you’re looking for. Particularly if you dive into detail early, they may be informative enough to keep you going even if you’re on a road to nowhere. This applies to all the hypothesis types and research techniques above.
For this, we create ‘screeners’. Basically, the screener is a simple, factual question (or set of questions) you can ask a potential subject to be sure they’re relevant. It shouldn’t take much.
For example, let’s say we’re interested in JTBD around some aspect of network management, with the idea of possibly building an application for network engineers to manage transport elements like routers and switches. We have a persona(s) for the end user that we want to develop and validate. A good screener would be: ‘How many times last week did you log into a switch or router?’. Let’s say we’re building software for plumbers. A good screener would be: ‘How many plumbing jobs were you out on last week?’.
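A screener is simple enough to express as a filter. Here’s a hypothetical sketch for the network-engineer example- the three-logins-per-week bar and the candidate pool are assumptions you’d set for your own study:

```python
# Sketch: screening candidate subjects before scheduling interviews.
# The screening question and the passing bar are hypothetical examples.

def passes_screener(logins_last_week: int, minimum: int = 3) -> bool:
    """True if the candidate logged into a switch/router often enough
    last week to plausibly match our network-engineer persona."""
    return logins_last_week >= minimum

# Hypothetical candidate pool with their screener answers.
candidates = {"Ana": 12, "Bo": 0, "Cy": 3, "Di": 1}
relevant = [name for name, logins in candidates.items() if passes_screener(logins)]
print(relevant)  # ['Ana', 'Cy']
```

The key property of a good screener is that it’s factual and recent (‘how many times last week…’), so a quick rule like this is all the logic you need.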
The screener is more important than you might guess at first. We have a natural bias to go with subjects that are convenient & comfortable, which can dramatically limit actionable learning. Don’t blame yourself, but do screen yourself!
You’ll find both the Enable Quiz example usability test plan as well as another that tests automation platforms for social media (Hootsuite, Buffer, etc.) in the References at the end of this page.