10 Good Interviews > 100 Mediocre Interviews
I am currently preparing to launch the first cycle, the aim of which is to check whether the offer matches the problems. Having already had bad experiences with implementing new methods, I wanted to think through, and even simulate, the course of this cycle as thoroughly as possible. I see the greatest danger in conducting the interviews, which can easily be biased. To reduce this risk, I made the following analysis (I must admit it was inspired by your post). First of all, I see two purposes for the interviews:
1. Verify the problem hypothesis (Lean Canvas)
2. Identify the client's force model
The first one is crucial. The hypothesis contained in the Lean Canvas should be treated more broadly: the question is not only whether there is a segment (and an early subsegment within it) for which our order winner is attractive, but whether this group of people is as large as we assumed (and therefore whether the project can really count on adequate revenue). Without checking this, we will identify the forces acting on the client, but we risk optimizing toward a local maximum: a niche that responds well but is too small to sustain the project.
I see two dangers in a poorly prepared interview process: it may give a false negative or a false positive result.

In the first case, the interviews may lead to the conclusion that the canvas was constructed incorrectly even though it was good. This may happen when, in sending an invitation to a conversation or in starting one, we incorrectly defined the job we want to look at. For example, if we want to talk to the CFO about inventory reduction, we may have difficulty finding people willing to talk to us, because potential candidates may consider it a task performed by someone else (e.g. the purchasing manager). The task may actually be carried out, but under a different name or as part of a larger task (e.g. improving cash flow). A small number of people willing to talk (especially if we are talking to our target customer) may therefore be a false negative signal. To minimize this risk, it is necessary to build a broader JTBD hypothesis. For example, reducing inventories is, for the CFO, part of improving cash flow, but it may also be a subtask of something broader that I have no idea about, yet which is the everyday life of a CFO. This could be the topic of a first, more general series of conversations: how much of the CFO's time is spent on improving cash flow? What else takes up that time? In other words, I think that during the conversations I should deliberately ask questions that test the task in a broader context. Is that a good idea?
A false positive result confirms the Lean Canvas hypothesis even though it is actually wrong. This is the more dangerous phenomenon, because it leads to launching a project for which there will be little or no demand. I assume this may happen when the issues discussed are important to customers but not very urgent. This is well described by the BLAC model (https://youtu.be/q8d9uuO1Cf4?feature=shared 42:12): it's nice to do, but it's not critical. For example, when talking to a manager about employee development, it will be difficult for them to deny the importance of the issue. Almost every interlocutor will be able to talk about the forces that acted on them while carrying out this task. The fundamental problem, however, may remain invisible: the task is important but not urgent. Worse yet, it may be treated as something that would be nice to do but is given relatively little weight. The interlocutor will be happy to share their thoughts and experiences, especially if they are rewarded for them (although the mere opportunity to be heard may create a desire to talk about something that is not, in fact, important to them).
I think I could limit this threat by following the advice given by M. Skok and E. Ries: during the interview, ask about the real importance of the task in question. Taking into account time, money, and stress, does it occupy a high place on the interlocutor's list of tasks? What is it competing against, and to what is it losing?
I suppose these are quite theoretical considerations, but, having been burned by previous mistakes, I'd rather embarrass myself with theorizing than build a zombie product 😊
For this reason, I wanted to ask you for a critical look: is there anything else worth considering? Have I missed something important?
Great stuff. Worth digging deeper.
Very helpful!
If I understand correctly, this means:
1. Early adopters/segments are useful for identifying JTBDs
2. The domain is a broader version of JTBD
3. Inside the domain, we have to identify the alternatives
4. We need to ask interviewees about their experiences in this domain. The moment of choosing an alternative should define the zero point on the timeline.