Measuring Quality in an Agile Environment - Part 1
I have written before about how important it is that we don't rely on metrics exclusively to make decisions, but rather use the data in context to better understand what is going on with our software development. That doesn't mean metrics should be ignored, simply that we need to learn which metrics are right for which situations.
When it comes to software quality and the agile world, this is extremely important, as you want to balance your need for measuring the quality of your software with the need to promote efficiency and automation. Below is a short guide to the types of metrics you would want to implement in your projects to ensure you have a handle on both these areas when measuring the quality of your software.
There are two types of Agile testing metrics:
General Agile metrics that are also relevant for measuring the quality of your testing.
Specific test metrics applicable to an Agile development environment.
I’ve decided to tackle these in two separate articles and provide a write-up on where they can add value in measuring the quality and effectiveness of your software testing in your teams.
Part 1: General Agile Metrics as Applied to Testing
Sprint Burn-Down charts are a graphical representation of the rate at which a team completes its tasks and how much work remains during a defined sprint period. They can cover all areas of work in the sprint team, but tasks associated with testing can also be highlighted separately.
The typical burn-down chart plots the ideal effort for completing the sprint's tasks, with remaining hours of effort on the y-axis and sprint dates on the x-axis. The team then plots its actual remaining hours over the course of the sprint.
In the above diagram, for example, the team fails to complete the sprint on time, leaving 100 hours of work unfinished.
Relevance to testing:
Testing usually forms part of the definition-of-done exit criteria used by teams.
Because every “story” completed by an Agile team must also be tested, stories completed reflect progress in testing the key features required by the customer.
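The burn-down bookkeeping above can be sketched in a few lines of code. This is a minimal, hypothetical example (the 200-hour, 10-day sprint and all numbers are illustrative assumptions, not figures from a real team):

```python
# Hypothetical burn-down sketch: a 10-day sprint starting at 200 estimated
# hours. All figures are illustrative assumptions.

def ideal_burndown(total_hours, sprint_days):
    """Remaining hours on the ideal line for each day of the sprint."""
    burn_per_day = total_hours / sprint_days
    return [round(total_hours - burn_per_day * day, 2)
            for day in range(sprint_days + 1)]

def hours_behind(actual_remaining, ideal_remaining):
    """Positive values mean the team is behind the ideal line."""
    return [a - i for a, i in zip(actual_remaining, ideal_remaining)]

ideal = ideal_burndown(200, 10)   # 200, 180, 160, ... down to 0
actual = [200, 190, 185, 170, 160, 150, 140, 130, 120, 110, 100]
gap = hours_behind(actual, ideal)  # ends at 100: the sprint finished 100h short
```

Plotting `ideal` and `actual` against sprint days gives the classic burn-down chart; the final value of `gap` is the unfinished work described above.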
Number of Working Tested Features / Running Tested Features
The Running Tested Features (RTF) metric tells you how many software features are fully developed and passing all acceptance tests, thus becoming implemented in the integrated product.
The RTF metric for the project on the left shows more fully developed features as the sprint progresses, making for a healthy RTF growth. The project on the right appears to have issues, which may arise from factors including defects, failed tests, and changing requirements.
Relevance to testing:
Because RTF only counts features that have passed all of their acceptance tests, it is a direct measure of tested, working functionality.
More features shipped to the customer means more parts of the software have been tested.
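Counting RTF per sprint can be sketched as below. The feature records and field names (`done_sprint`, `all_tests_pass`) are assumptions made for illustration:

```python
# Hypothetical sketch of computing Running Tested Features (RTF) per sprint.
# Feature records and field names are illustrative assumptions.

def running_tested_features(features, sprint):
    """Count features integrated and passing all acceptance tests by `sprint`."""
    return sum(
        1 for f in features
        if f["done_sprint"] is not None
        and f["done_sprint"] <= sprint
        and f["all_tests_pass"]
    )

features = [
    {"name": "login",   "done_sprint": 1,    "all_tests_pass": True},
    {"name": "search",  "done_sprint": 2,    "all_tests_pass": True},
    {"name": "export",  "done_sprint": 2,    "all_tests_pass": False},  # tests failing
    {"name": "billing", "done_sprint": None, "all_tests_pass": False},  # not done
]

rtf_by_sprint = [running_tested_features(features, s) for s in (1, 2, 3)]
```

A steadily growing series here corresponds to the "healthy RTF growth" chart; a flat or falling series is the troubled project on the right.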
Velocity measures how much work a team completes on average during each sprint, comparing the actual completed tasks with the team's estimated effort.
Agile managers use the velocity metric to predict how quickly a team can work towards a certain goal by comparing the average story points or hours committed to and completed in previous sprints.
Relevance to testing:
The higher a team's velocity, the faster that team produces software features. Thus higher velocity can mean faster progress with software testing.
The caveat with velocity is that technical debt can skew the velocity metric. Teams might leave gaps in the software, including gaps in their test automation, and might choose easier, faster solutions to problems that might be partial or incorrect.
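The velocity calculation and the forecast managers derive from it can be sketched as follows; the story-point history and backlog size are made-up numbers for illustration:

```python
import math

# Hypothetical velocity sketch. Story-point figures are illustrative.

def average_velocity(points_per_sprint):
    """Mean story points completed per sprint."""
    return sum(points_per_sprint) / len(points_per_sprint)

def sprints_to_finish(backlog_points, points_per_sprint):
    """Forecast how many sprints are needed to burn down the remaining backlog."""
    return math.ceil(backlog_points / average_velocity(points_per_sprint))

history = [30, 34, 32]                       # points completed in the last 3 sprints
velocity = average_velocity(history)         # 32.0 points per sprint
forecast = sprints_to_finish(100, history)   # ceil(100 / 32) = 4 sprints
```

Note the caveat above still applies: if those historical points were "earned" by cutting corners on test automation, the forecast inherits that technical debt.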
The Cumulative Flow Diagram (CFD) shows summary information for a project, including work-in-progress, completed tasks, testing, velocity, and the current backlog.
The following diagram allows you to visualize bottlenecks in the Agile process: Coloured bands that are disproportionately fat represent stages of the workflow for which there is too much work in progress. Bands that are thin represent stages in the process that are “starved” because previous stages are taking too long.
Relevance to testing:
Testing is part of the Agile workflow, and it is included in most Cumulative Flow Diagrams.
By using a CFD, you can measure the progress of software testing.
CFDs may be used to analyze whether testing is a bottleneck or whether other factors in the CFD are bottlenecks, which might affect testing.
A band in your CFD that widens over time indicates the presence of a bottleneck.
Looking at any metric in isolation is dangerous, which is why, before we even start looking at testing-specific metrics, it's important to understand these general metrics and how they affect the operation of your team. Using them to understand the performance constraints of your overall teams, and where testing is or isn't effective, is an important precursor to the more standard quality- and testing-related metrics we will cover in my next article.