The False Trade-off Between Quality and Speed
One of the main risks a growing software engineering team faces is a decrease in productivity. The decrease comes from several factors, from the increased need to communicate across many more communication paths, to the additional processes required to minimise risk to the business and its customers.
One thing that I have invariably observed, even in the best teams, is that at some point the team starts to struggle with speed, because the internal quality of the software has deteriorated to the point where new features take progressively longer to build than before.
I have seen this happen even in situations where the whole team is sold on the idea of building high-quality software and is supported by its management in doing so. Yet, at some point, they start to struggle.
In this article I want to dive deep into why this happens, what tech leads and managers can do to prevent it, and what a reasonable alternative looks like.
What is quality?
When we talk about quality, it is important to clarify what we mean by it, as the term can have multiple meanings. For example, let’s consider this simple question:
“Which is the higher-quality car: a Ferrari SF90 or a Volkswagen Golf?”
It would be difficult to answer this question without a proper definition of quality.
One of the common mistakes when talking about quality is confusing quality with the availability of features. The Ferrari SF90 might have many more features than a Volkswagen Golf, or outperform it on a number of dimensions, but this should not be interpreted as better quality.
There are many available definitions of quality in software development, but for this blog I will stick to the following. I will define high-quality software as software which is:
1. Correct: the software does exactly what it is supposed to do, and this can be validated through tests;
2. Secure: the software is protected from malicious use;
3. Maintainable: the software is easy to change and operate;
4. Valuable: the software does the right thing for the user and has a positive impact on them.
Why does quality deteriorate over time?
When you ask an engineering team why the quality of the codebase has deteriorated, the most common reasons are the following:
- We have a hard deadline and we need to take shortcuts;
- This is a small feature which is going to be used by just a bunch of users;
- I will have time later to address quality issues;
- This is a new product, we still don’t know if users will like it;
- With the time we are going to save, we can add some extra features;
- We have a lot of features to build which are business critical;
- Our stakeholders are ok with lowering quality for faster delivery.
All these reasons point to the same underlying assumption: that it is possible to gain in the dimension of speed (faster delivery, or more features in the same time) by trading off quality.
This is hardly surprising: in software development we are constantly faced with trade-offs, and every decision comes with a set of positives and negatives. Often there is no option that comes with only positives, and the choice is about which negatives we are willing to tolerate in exchange for the positives.
But when we talk about quality and speed, I believe this is a false trade-off, and understanding why is the key to enabling the team to deliver high-quality software.
Many teams believe it is possible to gain speed by trading away quality. Martin Fowler calls this the Tradable Quality Hypothesis:
Quality is tradable, by enforcing less quality we gain in the other dimensions of cost, scope, or speed.
But why do people think this is true?
This hypothesis is predicated on the assumption that since the internal structure of the software is not directly observable by the user, we can reduce the internal quality of the system in order to gain engineering capacity which we can then invest in adding more features to the product, hence increasing development speed.
But this way of thinking leads to two big issues.
The first issue: when you trade quality for speed, you get less speed, not more.
In the current world of service development, products are built as long-lived services that keep evolving to adapt to their users’ needs. As a consequence, most product development work happens in the context of an existing codebase and an existing live product.
This has two important consequences.
First, since we spend more time modifying an existing codebase than writing new code, the main contributor to the speed at which software gets built is the cost of change: how easy or difficult is it to implement new requirements while ensuring that the software keeps working as expected? Anything that lowers the cost of change has a big impact on delivery speed. Low quality increases the cost of future change, and hence the time needed to develop new features.
Second, low quality reduces the bandwidth the development team has for new features. Bugs and other operational issues prevent users from doing their jobs, and most of the team’s energy is soaked up dealing with those issues instead of improving the product.
The cumulative effects of low quality mean that, in the longer term, it becomes progressively more expensive to develop low-quality software than high-quality software. Martin Fowler describes this very accurately in his Design Stamina Hypothesis. Martin refers to design, but in my opinion the hypothesis applies to quality as well:
The problem with no-design, is that by not putting effort into the design, the code base deteriorates and becomes harder to modify, which lowers the productivity, which is the gradient of the line. Good design keeps its productivity more constant so at some point (the design payoff line) it overtakes the cumulative functionality of the no-design project and will continue to do better.
There is a window in which low quality can boost a team’s speed, but it closes far earlier in the lifecycle of a product than people expect: after weeks rather than months. In most cases, and at most stages of a project’s lifecycle, building low-quality software is slower than building high-quality software.
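Fowler usually illustrates his hypothesis with a graph, but the same idea can be sketched as a toy simulation. The numbers below are illustrative assumptions of my own (a 15% weekly productivity decay for the no-design team, constant output for the good-design team), not measurements from any real project:

```python
def cumulative_features(initial_rate, weekly_decay, weeks):
    """Cumulative functionality delivered when weekly productivity
    shrinks by `weekly_decay` each week (0 = constant productivity)."""
    total, rate, history = 0.0, initial_rate, []
    for _ in range(weeks):
        total += rate
        rate *= 1 - weekly_decay
        history.append(total)
    return history

# No-design team: starts faster (10 features/week), but as the codebase
# deteriorates its productivity decays by 15% per week.
no_design = cumulative_features(10, 0.15, weeks=26)
# Good-design team: slower start (7 features/week), steady productivity.
good_design = cumulative_features(7, 0.0, weeks=26)

# The "design payoff line": the first week where good design has delivered
# more cumulative functionality than no design.
payoff = next(week for week, (nd, gd)
              in enumerate(zip(no_design, good_design), start=1) if gd > nd)
print(f"Good design overtakes no design in week {payoff}")  # week 6
```

Even with a generous head start for the no-design team, the crossover in this toy model happens within weeks, not months, and from that point on the gap only widens.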
The second issue: trading quality for speed creates a self-reinforcing loop that leads to less and less quality.
If we repeatedly apply the Tradable Quality Hypothesis, we quickly reach a situation where it’s hard to justify any investment in quality at all: if quality prevents us from adding new features and provides no benefit to users, why should we prioritise anything other than adding new features?
Once this way of thinking sets in, the entire team is carried away in a race to the bottom: nobody wants to be seen as the slowest developer on the team, so even more quality is traded off for even less speed.
The alternative: focus on Sustained Velocity
Speed is obviously important in software development. A faster team can perform more iterations on a product, improving its understanding of the customers’ needs and of how well the product fulfils them. Where the market allows it, a faster team can also enjoy a first-mover advantage.
My recommendation for avoiding the trap of making quality tradable is to adopt a definition of speed that binds speed and quality together. The one I have used through the years is Sustained Velocity.
Sustained Velocity measures the velocity of a software development team both now and in the future. When I assess a piece of work or evaluate a technical design, I don’t focus only on how quickly we can build it, but also on the effect the work will have on the team’s velocity going forward. If the work makes the team slower, it shouldn’t be allowed to ship.
In order for the team to adopt a Sustained Velocity mentality, there are some things that you can put in place as a technical leader.
Build a culture of ownership
Development teams need to feel that they are in charge of the quality of the software, and that they are empowered to make decisions about how a project is executed or a product is defined.
If stakeholders are allowed to establish both what the project is and how long it should take to build, then the team is not set up for success and quality will suffer.
This point might be a bit controversial: you might think the team will never ship anything if it is allowed to define how long something will take. But, counter-intuitively, if you have hired professional engineers who care about the quality of their work, and you provide them with the right context and information, they will be able to define a high-quality solution that works for the business.
Insist on high standards
Quality is essential in all software systems. Prototypes have a strong tendency to become permanent solutions, so we need to ensure that all code deployed to production is of high quality. These are some of the areas you should set standards for:
- Have metrics that describe the impact the software is having on users, and that tell you when you are making things worse;
- Do peer code reviews for every change shipped to production and design reviews for larger designs;
- Implement a process for speedy security reviews;
- Test rigorously with a combination of unit, integration, end-to-end, and performance tests;
- Design your failure modes so you can control how the system reacts when it fails.
One technique I have used successfully in the past is to have a clear Definition of Done for every work item the team takes on, and to include quality checks as part of it.
Build quality into your estimates and schedules
Estimates for software projects are notoriously inaccurate, and a team running behind an estimated delivery date might feel pressure to treat the estimate as a deadline and sacrifice quality to get back on target. As we have seen, this usually leads to a further decrease in the team’s speed.
Although estimates are inaccurate, we can improve their quality by allowing enough time not only for writing the code, but also for the activities that ensure a higher-quality product, such as code reviews, design reviews, and security reviews.
All these activities take time, and it’s important that they are factored into the project plan.
In software development, the usual trade-off between cost and quality that we experience in other domains of life does not work the way most people expect, and thinking in terms of trading quality for cost can lead to a less efficient development process.
Higher quality reduces the cost of change, and helps a development team focus on providing new value instead of constantly fighting bugs and issues.
Thinking in terms of Sustained Velocity lets us see that quality is speed: it is the one thing we should never trade if we want to ensure faster delivery.