Pragmatic ways to measure quality of a software project
- Igor' Arkhipov
- May 1, 2020
- 6 min read
When buying a product or service, everybody expects it to be of good quality. When selecting from a range of options, we try to find the best quality for the price we are ready to pay. But let us stop for a second and ask ourselves: “What is quality?”
Quality is something that may be hard to define, especially when you are creating a brand new product or service, so there is no baseline to compare with. How do you make sure your product is good enough? How do you ensure a good level of quality is maintained during development and delivered to the customer? Let’s explore.

How do we define quality?
According to ISO 9000 — an international standard on quality management:
“Quality is the degree to which a set of inherent characteristics of an object fulfills requirements”
I love this definition for its simplicity, but I hate it because it doesn’t give you any actual clue about how to measure this degree. The way the ISO standard sees it: when you have an object that was designed for a specific purpose, how well the object serves this purpose is a good enough measurement of its quality. It makes sense, right?
Let’s try to apply this logic to a typical software project. When a team is building a product, they don’t just create whatever they want. Typically, some kind of research precedes the development and helps scope it.
This research results in a description of the final product (be it a software specification document or a story map or any other form of documentation) that explains the expectations from the product in terms of functional requirements (what the product needs to do) and non-functional characteristics (how well the product should be doing it). Based on this, it is fair to assume the degree to which the product behaves according to this specification can be used as an indication of quality.
While all of this is true, a business rarely invests in products solely for their functionality. There are usually business requirements behind solution requirements. Those would have been used in a business case to fund the development. From this point of view, a product will fulfil requirements only when it enables the business objectives to be met. A well functioning piece of software that misses the point and does not deliver expected value will be considered of poor quality by the business just like a buggy app that just cannot do the job.
At the same time, any project uses company resources to deliver the business value and the functionality expected. The efficiency of this process also needs to be considered when talking about quality. It is not enough to produce a functionally working solution that delivers on defined KPIs — this needs to be done on time and within the budget allowances to make it viable.
This means we cannot use compliance with the specification as the only measure of quality (let’s call it technical quality); we also need to add business objectives to the equation (let’s call it business quality) and process efficiency (let’s call it process quality).

How do we measure quality?
So how can we do it?
Quite often I see software teams using the number of bugs as a measure of quality, because it seems easy. What is a bug? A bug can be defined as
“An error, flaw or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways”
This is a definition from Wikipedia. So a bug is an error. Fewer errors equals better quality — right? Unfortunately, it is not that easy.
It is important to find and document bugs, of course, so the overall quality of the solution improves. In a sense, finding bugs is an integral part of the continuous product improvement process.
Where I think it goes wrong is when teams stop focusing on how much value/quality they have delivered and start focusing on how many bugs they’ve found. It is even worse when there are KPIs based on the number of bugs involved. I remember someone talking about the number of documented bugs per thousand lines of code as a quality KPI. Facepalm :)
Based on the definition of quality, it does not really matter how many bugs have been found before the end users see the product. Some people think that if testers don’t find bugs during testing, they are not doing their job well. Some people think that if testers find bugs during testing, developers are not doing their job well. I advocate that if software is shipped and delivers value, the team is doing a good job, regardless of how many bugs have been recorded in the process. The bug count is irrelevant as a measurement of quality.
But what is important then?
It is important how effectively and efficiently your project solves business issues. According to ISO 9000, effectiveness can be defined as
“an extent to which planned activities are realized and planned results are achieved”
In a software project scenario this can be measured through the business and solution scopes of the project. A team should have a good indication of scope in terms of precise scenarios which it needs to deliver. The percentage of such scenarios fully delivered to end users will be a good measurement to use. I personally like the practice of producing scenarios in BDD fashion for user stories. Each story can be further defined with a few example scenarios that will be used for functional testing of the solution. Once the functionality for the story is shipped, a test against the scenarios will show a measurement of the technical quality of the product.
Say you have a product that implements 50 scenarios. After the release, the team realises only 45 of them are working. Thus, you can say the level of technical quality is 90%: out of 50 expected characteristics, the product showed 45. Unlike the approach of counting bugs, you count working functional scenarios (which, by the definition of BDD, are valuable to users). This lets you focus on the amount of functionality delivered as a measurement of quality.
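This scenario-counting approach can be sketched in a few lines of Python. The scenario names and pass/fail results below are invented for illustration only:

```python
# Hypothetical results of running BDD scenarios after a release.
scenario_results = {
    "guest can add an item to the cart": True,
    "registered user sees saved addresses": True,
    "checkout applies a discount code": False,
    # ... imagine the full set of 50 scenarios here
}

def technical_quality(results: dict[str, bool]) -> float:
    """Percentage of functional scenarios that currently work."""
    if not results:
        return 0.0
    return 100 * sum(results.values()) / len(results)
```

With 45 of 50 scenarios passing, `technical_quality` returns 90.0, matching the example above.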
Second, from the business quality point of view, the business objectives should be supported by some metrics: objective measurements that can be collected as evidence of achieving the goal. Sometimes the objectives do not easily map to the solution. E.g. if the business goal is to increase sales, a change to the website may or may not directly affect the sales figure. If the market is in a depression, no change to the website may increase the sales — but in that case the problem is not with the website! Our metrics need to take this into account.
So it is important that big business goals are broken down into smaller objectives that can be attributed directly to the solution. E.g. the same goal to increase sales may be broken down into “increasing the conversion rate” and “increasing the average order size on the website”, among others. Those are closer to the scope of the solution and thus can better indicate its business quality. Such metrics need to be defined early in the software development cycle, as they often impose extra requirements on the solution to enable data collection.
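As a sketch, the two solution-level metrics named above could be computed like this; the visit, order, and revenue figures are made up for illustration:

```python
def conversion_rate(orders: int, visits: int) -> float:
    """Share of website visits that result in an order, in percent."""
    return 100 * orders / visits if visits else 0.0

def average_order_size(revenue: float, orders: int) -> float:
    """Average revenue per order."""
    return revenue / orders if orders else 0.0

# A hypothetical reporting period: 20,000 visits, 400 orders, $50,000 revenue.
# conversion_rate(400, 20_000) == 2.0 (%)
# average_order_size(50_000, 400) == 125.0 ($)
```

Tracking these per release makes it possible to attribute a movement in the metric to a specific change in the solution, rather than to the market at large.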
These were the measures of effectiveness — how well the product solves the problem it was designed to solve. What about the efficiency? According to ISO 9000 efficiency can be described as:
“relationship between the result achieved and the resources used”
Basically, it is about comparing the value brought by the output with the resources consumed. If we come back to the different scopes defined above, the project scope is the one responsible for efficiency.
There are numerous metrics and approaches for measuring the efficiency and productivity of a software development process; each has its own advantages and limitations. However, in my view, they all eventually come back to a simple measure: how well the team executes the tasks in accordance with the original plan.
If the team is able to reliably estimate the work and stick to the estimates, it is easy for the business to make informed and meaningful decisions about investments: based on the expected value and the projected costs, the business may decide whether a certain project is desirable. However, when the team cannot commit to the estimates it provides, or when the project process does not give the team enough detail to produce a confident estimate, this is an indication of inefficiencies in the process. In the agile world, measures like velocity and burn-down charts give insights into this metric. In more traditional approaches, variance analysis of cost and schedule can be used.
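For the traditional route, the classic earned-value formulas behind cost and schedule variance (CV = EV − AC, SV = EV − PV) can be sketched as follows; the dollar figures are invented:

```python
def cost_variance(earned_value: float, actual_cost: float) -> float:
    """CV = EV - AC: positive means under budget, negative means over."""
    return earned_value - actual_cost

def schedule_variance(earned_value: float, planned_value: float) -> float:
    """SV = EV - PV: positive means ahead of schedule, negative means behind."""
    return earned_value - planned_value

# Hypothetical status: $100k of work planned, $80k worth delivered, $90k spent.
# cost_variance(80_000, 90_000) == -10_000      -> over budget
# schedule_variance(80_000, 100_000) == -20_000 -> behind schedule
```

A consistently negative variance is exactly the kind of process-quality signal that should trigger a conversation about estimates or scope.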
What do I need to remember?
Quality cannot solely rely on subjective judgement, but it also shouldn’t be driven by meaningless metrics. Building a software product is about doing two things:
Delivering the scope of the product
Delivering on the business objectives for the product
So when it comes to measuring quality, two things need to be taken into account: how much of the expected scope is delivered and how well the solution performs in terms of achieving business outcomes.
This has to happen in a controlled manner that monitors process efficiency by comparing the estimated and expected rate of delivering results with the actual one. This allows the team to act quickly and adjust plans once a critical variance is identified.
References
ISO 9000:2015(en) Quality management systems — Fundamentals and vocabulary. https://www.iso.org/obp/ui/#iso:std:iso:9000:ed-4:v1:en
Variance Analysis. ProjectManagement.com. https://www.projectmanagement.com/wikis/345511/Variance-Analysis