Introducing a measurement called "defect mass" helped a project to find the areas most impacted by development work and to decide how many tests should be run for each impacted area. Victor Ionascu, engineering manager at Axway, spoke at QA Challenge Accepted 2021 about how measurements can be used to create a better testing strategy.
Ionascu shared his story about taking on a project with many defects and insufficient knowledge about the architecture. They decided to pull out data and create a formula that would provide insight into the test effort needed for each functional area. The measurement they created was called "defect mass":
Defect mass calculates the number of tests that we need to do based on the number of incidents & their severity.
Using this measurement together with other KPIs helped them focus their testing. They managed to decrease the number of customer incidents, as Ionascu explained:
Any testing campaign has a limited amount of tests that can be executed, and we need to be sure that we can focus on the right area. Defect mass helped us to keep this focus.
InfoQ interviewed Victor Ionascu about using defect mass for testing critical product areas.
InfoQ: What were the main problems faced when starting the project?
Victor Ionascu: The project was ongoing when I joined. Issues were just piling up and not mitigated. When problems are ignored, they have the tendency to become more serious.
The main problems faced were:
- Insufficient knowledge transfer between the teams (Berlin and Bucharest)
- High impact bugs were escaping the manual and functional testing campaigns
- High turnover and old technologies; no clear roadmap to move from a product in sustaining mode to a cloud-ready one
- No clear architecture maps, so big parts of the product remained untested and customers were facing bugs in those areas
There was a strong rivalry between the centres. The R&D Berlin team was dismissed and moved to Bucharest, and the people who remained in Berlin (mainly the services team) were not open to sharing their knowledge.
InfoQ: What strengths were there that you could leverage to address the problems?
Ionascu: The project also had some strong points that we used to bring some order:
- Tons of documentation
- Corporate processes
- The support team
The documentation provided some use case scenarios, and there was technical documentation that described some of the interactions between the components of the product. This made it easier to build a logical chain and, in the end, have more focused test cases.
The corporate processes helped us to have some structure in the way we were delivering and how customers were used to providing feedback.
The support team interacted directly with the customers; they were on the front line. They provided customer use cases and also tips & tricks from the field on how customers used the product and how they configured it. When we had issues on our side, we talked with the support guys to see if there was anything that we had missed in our configurations.
InfoQ: What was your approach for setting up a test strategy that helped you decide what to test and how to test it?
Ionascu: The project was pretty much in the dark, so we needed to start gathering all the data under the same umbrella. Some project information was in Jira, other information in HP Quality Center, and some in Salesforce. We brought all the data into Jira to be able to identify our most impacted areas and to estimate the capacity (in man-days) needed to properly focus our effort.
The things that we did were:
- We started to use Functional Areas, where we gathered all our user stories, epics, bugs (internal & customer ones), and tests (manual & automated).
- We set the right priorities for the tests by having meetings with the product owner and establishing the importance of each test or category of tests. This helped us better understand which tests mattered most.
- We pulled out data and created a formula that showed us the impact of each functional area.
- We estimated the test effort to decide how much testing was enough for each of these impacted areas.
Defect Mass is a concept that calculates the number of tests that we need to do on a release, based on the number of incidents & their severity:
- Component = a functional area of the product
- Mass = the total weight of the Epics & Bugs
- 1 Epic = 1 Blocking defect = 10 points
- 1 Major defect = 5 points
- 1 Minor defect = 1 point
- Capacity per day = Number of tests executed per day
- Number of days = Duration of the test campaign
- Total Mass = The sum of the whole project mass
The formula that we created was:
(Component Mass * Capacity per day * Number of days) / Total Mass, rounded to 0 decimals = number of tests that need to be executed to be sure that the release is good to go.
We created a tool to measure defect mass. The tool helped me extract a number of tests from my test sets and plan them for the testing campaign.
If you want to play with it, you can download it from GitHub: DefectMass
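To show how the numbers combine, here is a minimal sketch of the calculation in Python. It is not the tool from the repository above, and the component names and counts are invented purely for illustration:

```python
# Minimal sketch of the defect mass calculation (illustrative only;
# the component names and counts below are invented, not project data).

# Weights from the article: 1 epic = 1 blocking defect = 10 points,
# 1 major defect = 5 points, 1 minor defect = 1 point.
WEIGHTS = {"epic": 10, "blocking": 10, "major": 5, "minor": 1}

def component_mass(items):
    """Total weight of the epics & bugs recorded for one functional area."""
    return sum(WEIGHTS[kind] * count for kind, count in items.items())

def tests_per_component(components, capacity_per_day, number_of_days):
    """Split the test budget across components, proportionally to their mass."""
    masses = {name: component_mass(items) for name, items in components.items()}
    total_mass = sum(masses.values())
    budget = capacity_per_day * number_of_days  # tests we can run in the campaign
    return {name: round(mass * budget / total_mass) for name, mass in masses.items()}

# Hypothetical example: three functional areas, 20 tests/day, 10-day campaign.
components = {
    "Transfers":  {"epic": 2, "blocking": 1, "major": 4, "minor": 6},
    "Monitoring": {"epic": 1, "blocking": 0, "major": 2, "minor": 3},
    "Admin UI":   {"epic": 0, "blocking": 0, "major": 1, "minor": 5},
}
print(tests_per_component(components, capacity_per_day=20, number_of_days=10))
```

In this made-up example, the campaign budget of 200 tests is split roughly in proportion to each component's mass, so the most impacted area gets the largest share of the testing effort.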
InfoQ: What have you measured and how did the measurements help to improve the quality of the product?
Ionascu: When we started using defect mass, we constantly tracked the evolution of our bugs by measuring how many customer incidents were created, and it showed an accelerating decrease, a trend that held for more than one year.
The graph below shows how the trend of open defects changed after defect mass was implemented.
InfoQ: What have you learned?
Ionascu: One thing that became really clear was that we needed to keep our focus on the most impacted areas, but before we could focus, we needed to find out what those areas were.
Also, it’s important to be consistent and patient (give your team and yourself time); it takes some time to see results and follow them.
In addition to measuring defect mass, we used these two KPIs to help us get better visibility on our product quality:
- Defect Leakage = (Total Number of Defects Found in Product after Release / Total Number of Defects Found Before Release) x 100 (a small worked example follows this list)
- Three Months Defect Inflow
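As an illustration of the defect leakage KPI, here is a minimal sketch in Python; the numbers are invented and not taken from the project:

```python
# Illustrative defect-leakage calculation (the numbers are invented, not project data).
def defect_leakage(found_after_release: int, found_before_release: int) -> float:
    """Percentage of defects that escaped the pre-release testing campaigns."""
    return found_after_release / found_before_release * 100

# e.g. 8 customer-reported defects vs. 160 defects found internally before release
print(defect_leakage(found_after_release=8, found_before_release=160))  # 5.0
```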
For our project, the first results appeared after three months. We realised that defect mass is not a silver bullet and that we needed more indicators, like the ones presented above, to help us improve our test approach. It is not the only tool you should rely on to find out about the quality of your project, so I used other KPIs as well.
In conclusion, don’t be afraid to experiment with new approaches.