
Please just build it, discovery will only slow us down!
A common belief among SaaS executives is that thinking in hypotheses and trying to prove or falsify them ultimately costs too much time and therefore money.
This makes sense on the surface: many successful Silicon Valley founders consider time to market one of the most important metrics.
A recent example is Mark Zuckerberg's quote ‘Velocity wins’:
“If we’re gonna get it out first, have a good feedback loop, we’re gonna learn what people like better than other people.”
This is true, particularly for consumer apps: product teams can cut corners on quality, just ship and see what happens. At the end of the day, software teams only create value when they deliver features to customers, and the most reliable insights come from getting something into customers' hands.
At the same time, failure is expensive, especially in B2B SaaS companies. Every product launch requires significant effort from multiple teams: product, UX, engineering, marketing, sales. This means that launching a feature is often too significant an event to serve as a single data point in a feedback loop or as the first step of an experiment. If a feature fails at this stage, it becomes expensive: maintenance, coordination, training and development time add up, and ultimately sunsetting the feature takes significant effort too. The emotional attachment of many colleagues within the company will also be high: several internal successes (‘yay, we delivered’) have been celebrated, and giving up the feature can be demoralising for the team.
So how can you best balance the time spent on de-risking (prototype testing, customer interviews, internal discussions, etc.) with actually developing features, writing code and delivering them to customers? 🤷‍♂️
How do you do your best work as a PM, reducing the risk of failure while keeping your founders happy by maintaining high velocity?
The Build vs. De-Risk Dilemma
The tricky part here is that both of these statements are true:
* Experimentation and de-risking can save millions of dollars that would otherwise be spent developing, releasing, maintaining and discontinuing features that no one wants or needs.
* Software teams only create value when they put new features into the hands of customers.
This means there has to be a middle ground between these two avenues: you need to spend a reasonable amount of time on de-risking, but once you have reached a reasonable level of clarity, you need to start building.
The challenge with continuous discovery / delivery
A curious reader of PM literature might say: ‘continuous discovery and delivery is the solution to this problem!’.
The ideal setup described in most product management books is that the PMs on the team continuously discover, the UX designers and engineers continuously design and code, and together they continuously deliver validated solutions.
The double diamond of product management specifies that discovery and delivery do not take place one after the other, but in parallel. There is never a dependency where delivery can only start after discovery.
The challenge with continuous discovery and delivery is that it only works for existing products: instead of a single, upstream research phase, you conduct discovery in parallel with ongoing delivery.
But what about situations where your founder asks you to deliver a new feature or product? An idea he or she recently had that simply needs to be built?
How to time-box initial product discovery for your big bets
For a new feature area or a new product, Initial Product Discovery comes into play. Initial Product Discovery blocks development efforts; in this case, it is true that discovery delays development.
Skipping discovery entirely is not an option, especially for big bets, if only because it is very difficult to design a feature without ever talking to a customer...
A good compromise that often works is time-boxing your discovery. This means that you set an artificial deadline for your discovery efforts.
But when do you reach the point where you can say: ‘Now we know enough, let's build it’?
To do this, we need to answer two important questions about the idea behind the big bet.
Where does the idea come from?
A crucial piece of information in answering this question is the source of the idea. If countless interviews and customer requests ask for this feature, not much research is needed to validate the big bet. If it's just a random idea, we had better validate it with at least one person outside the organisation.
Itamar Gilad's Confidence Meter is a great framework for quantifying the validity of a source.
The entire right-hand side of the Confidence Meter (0.01 - 0.5) is based on internal opinions. It starts with very unsystematic internal opinions, such as self-conviction, and ends with more structured ones, such as ‘an investor said so’ or an engineering proof of concept (POC).
The entire left-hand side (0.5 - 10) is based on external evidence. It starts with weaker signals, such as anecdotes (‘a customer once said...’), and goes up to the strongest evidence of all: launch data. Launch data means you have already shipped the feature and can measure its effects, which is irrelevant at this stage; we haven't shipped anything yet.
What's interesting about the Confidence Meter is that the highest score you can achieve before launch is 7, and only through the most rigorous de-risking exercises, such as prototype tests, user studies and A/B tests. Getting that far already requires a considerable amount of time; most companies will only do this for a fraction of their new features.
A good score that can always be achieved through product discovery is 1-2, which means that you have spoken to a few (5-10) customers, looked at product data, spoken to sales or conducted a survey.
What is also interesting about the Confidence Meter is that even at this stage it signals that your certainty that the feature will not fail is still very low: only 1-2 points out of 10.
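To make this concrete, here's a minimal sketch in Python of how you could encode these evidence levels. The category names and exact scores are rough approximations based only on the ranges described above, not Gilad's canonical scale:

```python
# Illustrative mapping of evidence types to confidence scores, loosely
# following the ranges described above. The names and exact values are
# approximations, not Itamar Gilad's canonical scale.
CONFIDENCE_SCORES = {
    "self_conviction": 0.01,      # internal: "I just know it"
    "internal_opinion": 0.1,      # internal: a colleague or investor said so
    "engineering_poc": 0.5,       # internal: engineers built a proof of concept
    "customer_anecdote": 0.5,     # external: "a customer once said..."
    "customer_interviews": 1.5,   # external: 5-10 interviews, product data, surveys
    "prototype_or_ab_test": 7.0,  # external: the strongest pre-launch evidence
    "launch_data": 10.0,          # external: the feature is live and measured
}

def confidence(evidence: list[str]) -> float:
    """Score an idea by the strongest piece of evidence gathered so far."""
    return max(CONFIDENCE_SCORES.get(kind, 0.01) for kind in evidence)

print(confidence(["self_conviction"]))                         # 0.01
print(confidence(["self_conviction", "customer_interviews"]))  # 1.5
```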
How high is the cost of investing in that idea?
The second question you need to answer in relation to the idea is: What is the cost of investing in this idea?
Since we are still at the very beginning of a new big bet, the cost may not be clear at this point; we could invest literally any amount of time into something new, and a product team should not be done when a deadline is reached, but when a problem is solved. A better measure is therefore ‘ease of change’: are you investing in a new, standalone feature that can be deprecated without dependencies if something goes wrong, or are you touching a core element of your platform's technology? I've grouped ideas into four categories:
* Trivial to Revert (Low Cost)
Examples: Minor UI changes, copy updates, or low-impact features.
Characteristics: Minimal development time, no dependencies, and can be quickly undone without impacting the product.
* Moderate to Revert
Examples: Small functionality changes or enhancements that require a bit more effort to remove but don't affect the core system.
Characteristics: Slightly longer implementation time, mild team coordination required, reversible without significant disruption.
* Challenging to Revert
Examples: Features that touch multiple systems or involve backend changes but aren't foundational. This also covers dependencies on work from other units: features that require a lot of internal education, documentation, launch activities, etc.
Characteristics: Higher development effort, dependencies on cross-functional teams, and requires planned removal efforts.
* Costly to Revert (High Cost)
Examples: Architectural changes, significant new functionality, or features requiring long-term maintenance.
Characteristics: High investment in resources, deeply integrated, and extremely complex to roll back.
How much time should you allow for discovery?
Plotting both metrics for each idea on a quadrant gives you four categories.
If you have a big bet with a high level of confidence (more than 10 customers would pay for it), and it's a standalone feature with few dependencies that's easy to revert, then go for it. You probably won't need much additional discovery for these types of bets.
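Here's a rough sketch of how the two axes could be combined in code. The confidence threshold, the cost ranking and the recommendations are my own illustrative assumptions, not a canonical rule:

```python
# Sketch of the confidence / cost-to-revert quadrant described above.
# The thresholds and recommendations are illustrative assumptions.
REVERT_COST = {"trivial": 1, "moderate": 2, "challenging": 3, "costly": 4}

def discovery_recommendation(confidence_score: float, revert_category: str) -> str:
    """Place an idea on the quadrant and suggest how much discovery to time-box."""
    high_confidence = confidence_score >= 1.0       # e.g. backed by customer interviews
    hard_to_revert = REVERT_COST[revert_category] >= 3

    if high_confidence and not hard_to_revert:
        return "Just build it - little additional discovery needed."
    if high_confidence and hard_to_revert:
        return "Build, but time-box discovery on the riskiest assumptions first."
    if not high_confidence and not hard_to_revert:
        return "Ship a cheap version and treat the release itself as the experiment."
    return "Low confidence, costly to revert: invest in a full discovery time-box."

print(discovery_recommendation(1.5, "trivial"))  # the 'go for it' quadrant
print(discovery_recommendation(0.1, "costly"))   # the 'do your homework' quadrant
```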
The 70% rule by Jeff Bezos
Another, simpler framework for knowing when enough is enough is Jeff Bezos's 70% rule.
In a 2016 shareholder letter, Jeff Bezos suggested that “most decisions should probably be made with somewhere around 70% of the information you wish you had. If you wait for 90%, in most cases, you’re probably being slow. Plus either way, you need to be good at quickly recognizing and correcting bad decisions. If you’re good at course correcting, being wrong may be less costly than you think, whereas being slow is going to be expensive for sure.”
— as quoted in the book “Working Backwards” by Colin Bryar & Bill Carr.
This means that you can never falsify or validate all assumptions and that at a certain point you just have to build and deliver to find out.
It's a similar approach to the one above: if you start at 60% confidence, a little more discovery gets you to 70%; if you start at 5%, make sure you reach 70% by investing in the right discovery.
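In code, the rule is almost trivially simple. The hypothetical helper below just does the arithmetic, with 70% as the decision threshold:

```python
# Hypothetical helper applying Bezos's 70% rule: how much confidence
# is still missing before you should stop discovering and start building?
DECISION_THRESHOLD = 0.70  # ~70% of the information you wish you had

def discovery_gap(current_confidence: float) -> float:
    """Return the confidence gap left to close through discovery."""
    return round(max(0.0, DECISION_THRESHOLD - current_confidence), 2)

print(discovery_gap(0.60))  # 0.1  -> a little discovery, then build
print(discovery_gap(0.05))  # 0.65 -> substantial discovery needed first
```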