

3 Common Customer Interviewing Mistakes That Lead to Mediocre Interviews and What to Do Instead
10 Good Interviews > 100 Mediocre Interviews
At the early stages of a product, customer interviews are the best (and fastest) way to learn from customers. Many incubators and accelerators even make payouts or follow-on funding contingent on founders conducting a hundred customer interviews!
While I get the intent, I’m not a fan of this approach.
First, simply talking to a hundred people doesn’t automatically lead to breakthrough insights.
Second, when you focus on quality instead of quantity, you often need just 10-20 interviews to extract all the key insights in any study.
Knowing how to run good interviews is important; knowing who to interview is even more important.
The secret to running good customer discovery interviews starts with good prospecting.
In today's issue, I will lay out the most common prospecting misconceptions and show you what to do instead to improve the quality of your interviews.
Misconception #1: Targeting early adopters
Homing in on your ideal early adopter criteria is an output of customer discovery, not an input. Prematurely targeting your definition of early adopters could do more harm than good.
Here’s why.
Most early adopter criteria start as best guesses. The danger with relying on them is going too narrow, finding validation, and falling into a local maximum trap, i.e., chasing too small a market.
Example: If I define startup founders using the old stereotype of two guys in a garage in Silicon Valley, I’ll find them but miss the mountain of founders worldwide.
Ironically, falling into a local maximum is worse than finding invalidation because it’s a false positive that leads to spending needless time, money, and effort on the wrong market.
What to do instead:
Run two phases of problem discovery interviews:
Broad-match to map the overall market opportunity and identify your ideal early adopter criteria.
Narrow-match on early adopters to uncover problems worth solving.
Misconception #2: Targeting a specific job-to-be-done
If you’re familiar with the theory of jobs-to-be-done, you might be tempted to come up with specific jobs your product does and try to use them in your targeting criteria.
The most insightful jobs, however, are like magic tricks. They are obvious in hindsight but hidden in foresight. They, too, are outputs of the customer discovery process, not inputs.
Example: In the famous milkshake study, the researchers discovered the unexpected job of the milkshake through the interviewing process. They didn’t guess at the job but instead asked why anyone would buy a milkshake at 8 am.
What to do instead:
Use a starting job scope (typically a functional job tied to needs) to contextualize your product domain and identify the direct and indirect existing alternatives (these are what you target).
Use broad-match discovery to surface and rank the bigger context jobs (typically emotional jobs tied to wants), as shown in the sketch after this list.
Use narrow-match discovery to home in on a specific job.
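Not prescribed in the original process, but if it helps make "surface and rank" concrete: here's a minimal sketch in Python, assuming you've already coded each interview's notes with the jobs mentioned (the job labels, riffing on the milkshake study, are entirely hypothetical).

```python
from collections import Counter

# Hypothetical coded interview notes: each broad-match interview is tagged
# with the bigger context jobs the interviewee surfaced (labels made up).
interviews = [
    {"id": 1, "jobs": ["make the commute interesting", "stave off mid-morning hunger"]},
    {"id": 2, "jobs": ["stave off mid-morning hunger", "reward myself cheaply"]},
    {"id": 3, "jobs": ["make the commute interesting", "stave off mid-morning hunger"]},
]

# Rank jobs by how many interviews mention them; the top-ranked jobs
# become candidates for narrow-match discovery.
job_counts = Counter(job for interview in interviews for job in interview["jobs"])
for job, count in job_counts.most_common():
    print(f"{count}/{len(interviews)} interviews: {job}")
```

The point of the tally is discipline: you commit to a specific job only after it keeps showing up across interviews, not because you guessed it upfront.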
Misconception #3: Targeting active buyers
The most common pitfall, by far, is targeting potential customers of your product, aka active buyers. While building a pipeline of qualified prospects as a byproduct of customer interviews is tempting, active buyers don’t make for good interview candidates because they haven’t bought yet and maybe never will.
Happy customers are alike; each unhappy customer is unhappy in their own way.
- Adapted from the Anna Karenina principle
This last insight is the key to shrinking the number of interviews needed for problem discovery.
For any given product, there aren’t dozens of reasons people buy (or hire) it. But there can be dozens of reasons why they don’t.
The most actionable insights come from customers who have successfully hired and used a product (existing alternative), not active buyers still looking.
Golden rule of problem discovery: Measure what customers did, not what they say they’ll do.
Example: If you’re a home builder, the best people to interview are people who just bought a home, even though they are the worst people to sell your houses to.
What to do instead:
Only interview people who have recently attempted the job or, more specifically, recently used or consumed one of your existing alternatives under study.
Putting it all together
Your objective is to model successful customers who took action (hired an existing alternative), not unsuccessful prospects still looking.
Here’s how to frame your initial targeting (a minimal screening sketch follows at the end of this section):
Prospecting Criteria #1: Target prospects based on how recently they switched (or used) a direct existing alternative.
Prospecting Criteria #2: Target prospects based on how recently they switched (or used) an indirect or complementary alternative.
Through these interviews, you:
Uncover and rank the non-obvious bigger context jobs people are trying to do (broad-match problem discovery).
Home in on a specific job worth winning to uncover problems worth solving with the old way (narrow-match problem discovery).
Design a solution to cause a switch from the old way to your new way.
Sell your solution before you build it using a mafia offer.
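None of this requires tooling, but if your prospect list lives in a spreadsheet or CRM export, the recency screen is easy to automate. Here's a minimal sketch, assuming hypothetical field names, made-up prospects, and an arbitrary 90-day recency window:

```python
from datetime import date, timedelta

# Hypothetical prospect records: which kind of existing alternative each
# person switched to (or used), and when. All fields and dates are made up.
prospects = [
    {"name": "Dana", "alternative": "direct", "switched_on": date(2024, 5, 20)},
    {"name": "Sam", "alternative": "indirect", "switched_on": date(2024, 5, 1)},
    {"name": "Lee", "alternative": None, "switched_on": None},  # still looking
]

RECENCY_WINDOW = timedelta(days=90)  # assumed cutoff; tune per study
TODAY = date(2024, 6, 1)

def qualifies(prospect):
    """Keep only prospects who actually took action (hired an alternative) recently."""
    return (
        prospect["switched_on"] is not None
        and TODAY - prospect["switched_on"] <= RECENCY_WINDOW
    )

shortlist = [p for p in prospects if qualifies(p)]
# Criterion #1 (direct alternatives) ahead of criterion #2 (indirect ones).
shortlist.sort(key=lambda p: p["alternative"] != "direct")
print([p["name"] for p in shortlist])  # -> ['Dana', 'Sam']
```

The sort mirrors the ordering of the two criteria: interview recent switchers to a direct alternative first, then widen to indirect and complementary ones.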
I am currently preparing to launch my first cycle, the aim of which is to check whether the offer matches the problems. Having had bad experiences implementing new methods before, I wanted to think through, and even simulate, the course of this cycle as thoroughly as possible. I see the greatest danger in the interviews themselves, which can easily be biased. To reduce this risk, I made the following analysis (I must admit it was inspired by your post). First of all, I see two purposes for interviews:
1. Verify the problem hypothesis (Lean Canvas)
2. Identify the customer forces model
The first one is crucial. The hypothesis contained in the Lean Canvas should be treated more broadly: it is not only a question of whether there is a segment (and an early-adopter subsegment within it) for which our order winner is attractive, but also whether this group of people is as wide as we assumed (and therefore whether the project can really count on adequate revenues). Without checking this, we will identify the forces acting on the customer, but we risk hitting a local maximum.
I see two dangers in a poorly prepared interview process: it may produce a false negative or a false positive result. In the first case, the interviews lead to the conclusion that the canvas was constructed incorrectly, even though it was good. This can happen when, in the invitation to a conversation or at its start, we frame the job we want to look at incorrectly. For example, if we want to talk to CFOs about inventory reduction, we may have difficulty finding people willing to talk to us, because candidates may consider it a task performed by someone else (e.g. the purchasing manager). The task may actually be carried out, but under a different name or as part of a different task (e.g. improving cash flow). A warning sign of a false negative is a small number of people willing to talk (especially if we are talking to our target customer). To minimize this risk, it is necessary to build a broader JTBD hypothesis. For example, reducing inventories is, for the CFO, a part of improving cash flow, but it may also be a subtask of something broader that I have no idea about, yet is the everyday life of a CFO. This could be the topic of the first (more general) series of conversations: how much of the CFO's work goes into improving cash flow? What else takes up that time? In other words, I think that during conversations I should deliberately ask questions that test the job in a broader context. Is that a good idea?
A false positive result confirms the Lean Canvas hypothesis even though it is actually wrong. This is a more dangerous outcome because it leads to launching a project for which there will be little or no demand. I assume this can happen when the issues discussed are important to customers but not very urgent. This is well described by the BLAC model (https://youtu.be/q8d9uuO1Cf4?feature=shared 42:12): it's nice to do, but it's not critical. For example, when talking to a manager about employee development, it will be difficult for them to deny the importance of the issue. Almost every interviewee will be able to talk about the forces that acted on them while carrying out this task. However, the fundamental problem may not be visible: the task is important but not urgent. Worse yet, it may be treated as something that would be nice to do but is given relatively little weight. The interviewee will happily share their thoughts and experiences, especially if rewarded for them (though even the mere opportunity to be heard may tempt them to talk at length about something that isn't actually important to them).
I think I would limit this threat by following the advice given by M. Skok and E. Ries: during the interview, ask about the importance of the task in question. Taking into account time, money, and stress, does it occupy an important place on the interviewee's list of tasks? What is it competing against, and losing to?
I suppose these are quite theoretical considerations, but, having been burned by previous mistakes, I would rather embarrass myself with theorizing than build a zombie product 😊
For this reason, I wanted to ask you for a critical look: is there anything else worth considering? Have I missed something important?
Great stuff. Worth digging deeper.