
How to Prioritize Using Confidence as a Key Variable


To decide what to focus on when improving your product, ask yourself:

“Where are we not doing a great job, where could we do it better and where does it matter?”

It is paramount to solve issues that are both important and disappointing to customers. Both conditions are necessary; otherwise, you’ll end up working on items that no one cares about, or over-serving areas where you’re already doing a great job.

The rule of thumb is that prioritization always starts with metrics and goals. It is as simple as:

“What do we want to achieve, and what is the easiest way to get there?”

The fact that you have already identified the company’s key metrics, along with their supporting branches, and defined goals for them means that a lot of the prioritization work has already been done.

Now have a look at the hypothesis bank. You should realistically anticipate that up to 90% of the hypotheses in there are simply not going to work: they will yield no meaningful results or will cost more than their potential impact. Prioritization and experimentation are needed to find the few pearls that have the potential to move the needle.

Start by taking all the hypotheses from your list and thinking about the impact each would have on its specific goal. Now cut that potential impact in half.

If you think that some piece of functionality would increase a metric by 30%, it will probably increase it by 15% (or not move it at all).

Why? 

Research has repeatedly shown that the majority of people are highly optimistic most of the time. The problem is that one of the most powerful cognitive biases is the tendency of individuals to exaggerate their own talents and believe they are above average in their endowment of positive traits and abilities. This inclination to exaggerate our talents is amplified by our tendency to misperceive the causes of certain events.

In short, we are not very good at forecasting, even if we think we are.

Next, talk to your tech team and try to figure out what resources are required to validate the hypothesis. While in technology we mostly measure resources in dev time (like “two engineers and one designer for two weeks”), it is important to realize that the real cost is higher. Think about the energy spent, the mental and physical effort, the cost of deviating from an established routine, and the opportunity cost of not doing something else.

Now that you have a cost estimation, increase it by at least one third. 

Why?

A phenomenon confirmed in many different studies, known as the planning fallacy, shows that people and teams are consistently overoptimistic about the time it will take to complete a future task, underestimating the actual time required. Anyone who has worked in tech for any amount of time can easily confirm it. I mean, when was the last time a task or project was finished on time?

An experienced product manager should always add a time buffer to any hypothesis that needs to be validated (and still almost no project finishes on time).
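
To make these two corrections concrete, here is a minimal sketch in Python. The hypothesis names and numbers are hypothetical, and the multipliers simply encode the two rules of thumb above (halve the impact, pad the effort by at least a third):

# A minimal sketch of the two corrections above: halve the estimated
# impact and pad the estimated effort by a third. Hypothesis names
# and numbers are made up for illustration.

def adjust_estimate(raw_impact_pct, raw_effort_weeks):
    """Return (discounted impact, padded effort) for a hypothesis."""
    realistic_impact = raw_impact_pct * 0.5        # cut the optimistic impact in half
    realistic_effort = raw_effort_weeks * (4 / 3)  # add at least a one-third buffer
    return realistic_impact, realistic_effort

hypothesis_bank = {
    "one-click checkout": (30.0, 6.0),  # (claimed % lift, claimed dev-weeks)
    "referral program": (20.0, 4.0),
}

for name, (impact, effort) in hypothesis_bank.items():
    adj_impact, adj_effort = adjust_estimate(impact, effort)
    print(f"{name}: ~{adj_impact:.0f}% lift, ~{adj_effort:.1f} dev-weeks")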

Impact vs Ease

While doing the two steps above, you will end up with what is called Impact vs Ease analysis. This is one of the most widely used frameworks in the industry (and you will find a great amount of information about it), but also one of the most misused.

The pitfall of Impact vs Ease is that, in a lot of cases, decision-makers fail to ask a basic question when mandating that some work be done:

“How confident are we that this will work?”

And even if they do ask it, the answer relies on optimistic predictions, superficial market and data research and, yes, bloody opinions.

So how do we make it work? If we are to enhance the Impact vs Effort recipe, we need to add a third special ingredient to it: confidence.

Confidence

Confidence is a measure of the conviction required to build a feature without regretting it later. Grounded in experiments and customer evidence, confidence, when assessed correctly, eliminates the anecdotal and subjective biases in feature priorities.

In short, the confidence variable should help you chase the answer to: 

“How sure am I that this test will prove my hypothesis?”

Think about grading the confidence level of a hypothesis on a scale from 0 to 10. Now let’s see where some of the most popular types of decision-making evidence would fit:

Personal opinion(s): 1

Opinions are meaningless until proven right by the market. Period.

The competitor has it: 2

In some cases, when competitors know their game well and are successful, it might be tempting to think that copying them would also mirror their success. Wrong. Organizations and products are unique, and your confidence that “something is going to work because the competitor has it” should be pretty low.

Analyzing competitors will not reveal whether a hypothesis is good or not; it can merely show whether competitors think the problem is worth addressing, how they position and price their solution, and what the market feedback is.

In addition, most of the time, competitors base their actions on opinions. And opinions are meaningless until proven right by the market. Period.

Market research, data, field research, surveys: 3

The above are essential, and somewhat sufficient for a quick assessment, especially when the stakes are not that high (few resources needed). But when the seriousness of the hypothesis increases (high resources, high risk), they are obviously not sufficient.

Be careful with surveys in particular. Their results should be used with caution as they are highly influenced by sampling bias, misinterpretation of questions, non-genuine answers, and other drawbacks.

Feature requests, smoke tests: 4

Feature requests can be a good indicator of confidence, but only if some conditions are met:

First, the percentage of users requesting the feature must be high. Second, they must be highly vocal (impossible to ignore).

It is also important to follow up on user requests with interviews to determine the underlying problem, whether it is a real one (and how severe it is), and whether the requested solution is the best way to actually address it.

Smoke tests are meant to gauge the level of interest that users have in a potential solution to one of their problems. A smoke test works by providing potential customers with a convincing opportunity to act on a certain call to action (learn more, subscribe, sign up, pay, etc.). At each step of the test, conversion rates are analyzed and give clues about how desirable the product or feature is.

When users complete the desired action, we simply inform them that the product or feature isn’t ready yet and put them on a waiting list.

Smoke tests can be a useful tool for boosting confidence but, in my experience, they work best in areas such as testing product messaging and marketing strategies.
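
To make the mechanics concrete, here is a minimal sketch of how the conversion rate at each step of a smoke-test funnel could be computed. The step names and visitor counts are hypothetical:

# A minimal sketch of analyzing a smoke-test funnel.
# Step names and visitor counts are hypothetical.

funnel = [
    ("landing page view", 5000),
    ("clicked 'learn more'", 900),
    ("joined the waiting list", 240),
]

# Conversion rate of each step relative to the previous one.
for (prev_step, prev_n), (step, n) in zip(funnel, funnel[1:]):
    print(f"{prev_step} -> {step}: {n / prev_n:.1%}")

# Overall conversion from first touch to the final call to action.
print(f"overall: {funnel[-1][1] / funnel[0][1]:.2%}")

A sharp drop at a particular step tells you where desirability breaks down, not just how strong it is overall.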

User interviews, interactive mockups: 5

User interviews, if done correctly, can indeed be sufficient to acquire the level of confidence required to move forward with a specific project. I will address user interviews in detail at the end of this chapter.

Interactive mockups work by providing users with a reasonably realistic user interface and testing the way they interact with it. The level of UI fidelity depends on how far we want to go with the test. For some tests, basic sketches linked together in software like InVision are enough. For others, we might want to use highly detailed HTML pages.

Interactive mockups work better than most user interviews because watching someone do a task will show you where the problems really are and not where the customer thinks they are.

It is important to acknowledge, though, that early tests can suffer from limited functionality, and results might be flawed. Also, interactive prototypes only provide simulated answers: looking at something and analyzing it is not the same as fully using it and analyzing it.

Manually operated tests: 6

Manually operated tests represent one of the best methods to get a high level of confidence without spending a ton of resources on validating a hypothesis.

The basic principle is to have humans perform the work or service for the customer that the app would eventually automate.

The user sees a convincing UI that lets her get the solution she is looking for, while behind the scenes a human, not the app itself, is pulling the strings.

MVP: 7-8

In theory, the minimum viable product (MVP) is a product with just enough features to satisfy early customers and provide feedback that would maximize our confidence level in a hypothesis. 

But applying this theory also makes us realize that the confidence-boosting techniques I have described above are all, in essence, small MVPs.

The principle is always to validate with the smallest possible technique that produces the evidence you need to learn. Eventually, all these small MVPs will lead to the main one which, if it proves successful, should give us the green light to fully develop the app or the feature.
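
To pull the three ingredients together, here is a minimal sketch of what a confidence-weighted priority score could look like, in the spirit of ICE scoring, computed here as impact × confidence ÷ effort. The formula, hypothesis names, and numbers are illustrative assumptions rather than a prescription; the confidence values come from the ladder above:

# A minimal sketch of a confidence-weighted priority score, in the
# spirit of ICE scoring (impact * confidence / effort). The formula
# and example hypotheses are illustrative, not prescriptive.

from dataclasses import dataclass

# Confidence ladder from this chapter (0-10 scale); MVP uses the
# upper end of its 7-8 range.
EVIDENCE_CONFIDENCE = {
    "personal opinion": 1,
    "competitor has it": 2,
    "market research / surveys": 3,
    "feature requests / smoke test": 4,
    "user interviews / interactive mockup": 5,
    "manually operated test": 6,
    "mvp": 8,
}

@dataclass
class Hypothesis:
    name: str
    impact: float    # already-halved expected lift, in %
    effort: float    # already-padded cost, in dev-weeks
    evidence: str    # strongest evidence gathered so far

    @property
    def score(self):
        return self.impact * EVIDENCE_CONFIDENCE[self.evidence] / self.effort

bank = [
    Hypothesis("one-click checkout", impact=15, effort=8, evidence="manually operated test"),
    Hypothesis("referral program", impact=10, effort=5, evidence="competitor has it"),
]

for h in sorted(bank, key=lambda h: h.score, reverse=True):
    print(f"{h.name}: score {h.score:.1f}")

The exact weighting matters less than the discipline: a hypothesis backed only by opinion has to promise an enormous impact before it outranks a cheaper, better-evidenced one.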
