How to develop a Test Strategy
Test strategies might be something fewer people focus on in the shift to highly automated test systems and lower-level automation (which is in itself still a strategy), but they remain an important way of articulating the process of testing to everyone involved: testers, developers, business analysts, and indeed product owners and the business itself.
The reason test strategies are often looked down upon, though, is that we don’t take them seriously. A lot of effort often goes into creating a test strategy as a high-level document that fills management teams with confidence that certain measures are in place to address software quality. The truth, however, is that development teams themselves don’t follow them closely, choosing to focus on solving the problems in front of them rather than getting up to speed with the strategies they are supposed to follow. Even testers get caught up in finding the most efficient way to test something and deliver on their sprint goals rather than adhering to a test strategy that often has very little in common with their actual approach to testing.
And that last point is often the biggest issue with most test strategies: they are simply too high-level to represent the real needs of the development teams. They paint a picture that rarely caters to the critical decisions teams make on a day-to-day basis. Without sufficient technical detail that takes into consideration the evolving nature of software development and the latest tools and best practices, a strategy will simply not be relevant or practical enough to serve a purpose. Yes, it might give the auditors and managers a fuzzy feeling inside, but it doesn’t paint an accurate picture of where the testing processes should be.
The obvious counter to this problem is that test strategies are needed at a corporate level: to give the business an understanding of the processes involved in software testing and quality, and to ensure that its investment in resources and quality delivery is being effectively measured. A strategy also provides an understanding of what testing is involved and a blueprint for interpreting the various testing metrics the business will need to consume.
Place the focus on quality
I’m going to start by changing the name entirely and saying that we shouldn’t be focusing on writing strategies about how we test software, but rather about how we build quality software. I prefer the name quality strategy, because quality is something that is designed into software from the start, and if you want to achieve the goal of delivering quality software, you will need to incorporate things like the approach to software development, architectural standards, and testability.
When we call something a testing strategy, we tend to focus on the testing aspect of it, whereas our scope should be far bigger and incorporate every person in the team. Quality should be the responsibility of every person in the team, and so the main focus should be on that rather than just an overall approach to testing. This also means that you will need to work with people from all parts of the business when putting your strategy together, to incorporate approaches that meet their different needs while still prioritizing quality.
I’ve put this point in because I think there can often be a lot of confusion around what actually constitutes a test strategy. While it has traditionally been something used at an organizational level to define a corporate approach to software testing and quality, it is often applied at a team or department level, especially in large companies with diversified software portfolios. Either way, the test strategy needs to provide an authoritative approach to testing and match the context of its purpose, keeping the focus on the key objectives of the business or project and aligning itself to them. If a test strategy is simply generalized without this alignment, then while it might be correct, it is unlikely to be useful to the organization or to drive the right level of change.
And herein lies the first major point about developing a test strategy: there is really no one strategy. What needs to be documented for the business will need to be presented differently to software development teams, and because we try to create a one-strategy-fits-all approach, we end up appealing to no one. Rather, our strategies need to be tailored to their intended audience and focus on what that audience requires. This doesn’t mean that the communication should be diluted or that multiple strategies will exist; there should still be alignment and clear focus, but different versions should be presented for how the strategy looks at a business level and how it will be implemented in the teams at a technical level. And while it should still be part of a uniform approach to testing, the different non-functional aspects like performance and security should also have documented approaches.
For instance, the business is not really interested in the details of automation. Yes, it can be mentioned as part of the strategy, but what the business wants to know is how automation will allow for faster regression and improve its ability to deliver software at a more rapid pace. At the same time, a test engineer isn’t interested in high-level definitions and processes of quality and simply wants to know what needs to get done to determine success.
Emphasize your quality gates
There are many different ways to make software, and often we get too prescriptive about how teams go about it. Yes, most companies utilize some form of agile methodology and are increasingly trying to push a DevOps approach, but some projects and teams may work better with different methodologies and processes; there are even projects that may at times be better suited to a Waterfall approach. So, rather than getting caught up in trying to explain testing processes, focus on quality gates (traditional entry and exit criteria) that will provide the governance required to ensure that teams are delivering quality in the right way.
These quality gates act as audit controls of sorts, but they shouldn’t create any form of bureaucracy or administrative overhead. Rather, they are identifiers that make it clear to teams when certain phases of software are ready to progress, and because the measurement criteria are clear, they should actually empower teams to make this move themselves.
And quality gates are not just for the development team. Guidelines should also be set out for what is required from a business perspective before a specific project is invested in and started, to prevent wasted effort and scope creep due to an unclear understanding of requirements. It should likewise be clear to developers what type of unit testing and quality they need to deliver, and a guideline for software support and maintenance should be provided so that quality can be governed across the entire life-cycle of the application.
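To make the idea concrete, gates like these can be expressed as plain data that a team reviews or that tools evaluate. Below is a minimal sketch in Python; every phase name and criterion here is illustrative, not a prescribed set:

```python
# Quality gates expressed as named criteria per life-cycle phase, so a team can
# see exactly which checks block progression to the next phase.
# All phase names and criteria are illustrative examples, not a standard.

PHASE_GATES = {
    "business-readiness": [
        "business value defined",
        "requirements free of ambiguity",
    ],
    "development": [
        "unit test coverage meets agreed threshold",
        "code peer-reviewed",
    ],
    "support": [
        "monitoring and alerting in place",
        "maintenance ownership assigned",
    ],
}

def blocking_criteria(phase: str, passed: set[str]) -> list[str]:
    """Return the criteria still unmet for a phase; an empty list means the gate is open."""
    return [c for c in PHASE_GATES[phase] if c not in passed]
```

Because the gate is just data, a team can inspect `blocking_criteria("development", passed_checks)` and move forward themselves the moment the list comes back empty, which is the empowerment described above.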
Some examples of what I mean by quality gates are detailed later in this section.
Measure and show the value
I think one of the reasons so many organizations fail to understand software testing and the cost of quality is that we do a poor job of showing its value. Any good test strategy needs to detail exactly how testing and quality will be measured, and provide an outline of how the metrics will be determined, what will constitute measurable success, and the cadence of these measurements. Tracking the success of testing and quality in an organization is vital to continued investment and to ensuring testing gets the right level of adoption and support within the company.
Again, though, this is something that needs to be tailored to its audience. Your development team needs to know how to measure quality to ensure they are on track to deliver it, whereas the business will be looking at the ROI of testing efforts versus the impact of production defects on customer satisfaction, and wanting to see an improvement in that regard. Improved quality should also lead to easier maintenance, and this should be factored in as success criteria for the testing effort too.
This doesn’t mean success will show straight away; often, when a change is introduced, the numbers will regress before they get better. But as long as these things are understood and can be easily measured, it becomes easier to justify the strategy to both the business and the development teams.
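One way to make such measurement concrete is a simple, trackable metric per release. The sketch below computes a hypothetical "defect escape rate" (the share of defects found in production rather than in testing); the field names and numbers are invented for illustration, and the middle release deliberately shows the temporary regression that can follow a process change:

```python
# Illustrative metric: defect escape rate per release.
# A lower rate means more defects were caught by testing before production.

def escape_rate(found_in_test: int, found_in_prod: int) -> float:
    """Fraction of all known defects that escaped to production."""
    total = found_in_test + found_in_prod
    return found_in_prod / total if total else 0.0

# Invented release data for illustration only.
releases = [
    {"name": "1.0", "test": 40, "prod": 10},
    {"name": "1.1", "test": 35, "prod": 12},  # brief regression after a change
    {"name": "1.2", "test": 50, "prod": 5},
]

trend = [(r["name"], round(escape_rate(r["test"], r["prod"]), 2)) for r in releases]
print(trend)  # [('1.0', 0.2), ('1.1', 0.26), ('1.2', 0.09)]
```

Reported on a regular cadence, a trend like this lets the business see the dip-then-improve pattern rather than reacting to a single bad number.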
Some additional measurement criteria for quality gates that you may want to incorporate into your agile boards are detailed below as well:
Business value defined
Measurement criteria for epic success defined
Where no business value exists, the technical purpose of the story is clearly defined
All clarifications/ambiguities around the epic have been resolved
Risk analysis of the epic has been completed, with mitigations in place for each risk
All functional requirements are marked as complete and tested
Non-functional requirements have been adhered to
Technical analysis of the story has been detailed, with no issues outstanding
Implementation of the feature does not contradict any of the existing architectural principles
The feature has a clear list of priorities for all the components required for its completion
Detailed requirements clearly defined
The user story is prioritized from a business context
Impact analysis of the user story, covering which aspects of the software it will touch, has been completed
The user journeys have been detailed with all the possible flows of information
The structure of payloads, data, or message formats is detailed
All existing dependencies are detailed, along with any risks associated with them
The user story does not contradict the details of any other user story or the existing NFR conditions
All clarifications and requests to clear up understanding and ambiguity in the user story have been resolved
All error conditions around the user story are detailed
All code has been peer-reviewed and signed off by the team
Security scanning is in place for relevant modules
Code coverage of unit tests exceeds 80%
Unit tests have been signed off by a tester on the team
Security scan results have passed
Code can be deployed and tested in a pipeline
Deployment is complete (which should coincide with the execution of testing tasks)
Automated tests are in place for all requirements and execute as part of the CI pipeline
All non-minor defects are resolved (with root causes in place); where minor defects do remain, a plan should be made for the team to rectify them in the future
Where applicable, performance testing has been done and meets the required benchmarks
Monitoring systems are in place for tracking the operation and maintenance of any particular module or application
Any tech debt that has arisen in the epic should be prioritized and added to the team backlog
Measurable test coverage meets the 100% mark
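Several of the code-level criteria above lend themselves directly to automation. A hedged sketch follows, where the shape of the build-report dictionary is an assumption made for illustration:

```python
# A few "definition of done" gates from the list above, expressed as
# automatable checks over a build report. The report keys are assumed
# names, not the output format of any particular tool.

def gate_failures(report: dict) -> list[str]:
    """Return human-readable reasons the gate is blocked; empty means it is open."""
    failures = []
    if report["unit_coverage"] < 0.80:  # the 80% unit-test coverage gate
        failures.append("unit test coverage below 80%")
    if not report["security_scan_passed"]:  # the security scan gate
        failures.append("security scan has not passed")
    if report["open_non_minor_defects"] > 0:  # the non-minor defect gate
        failures.append("non-minor defects still open")
    return failures

report = {"unit_coverage": 0.83, "security_scan_passed": True, "open_non_minor_defects": 0}
print(gate_failures(report))  # [] -> the gate is open and the story can progress
```

Criteria that need human judgment (peer review sign-off, resolved ambiguities) can still be tracked as boolean fields on the same report, keeping every gate visible in one place.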
Automate and build tooling around them
A key reason test strategies are never followed is that we try to govern them through too many processes. However, we can build tooling to meet our needs, automate many of our quality gates, and ensure they meet specific measurement criteria. We can do this by utilizing our CI pipelines to monitor certain criteria, while also leveraging other tooling and pulling the results into one centralized location that allows for easy visibility and monitoring of our quality gates and quality measurements. All of this can be used to drive not just adoption but enforcement too. There may be some initial pushback over the effort of building these automated checks, but once companies start to see the cost of defects and code maintenance reducing, they will realize the benefit.
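As a sketch of what this enforcement could look like in practice, the step below fails a build when a coverage gate is missed. The coverage.json file name and its structure are assumptions for illustration, not any specific tool's output format:

```python
# A CI step that enforces a coverage quality gate: return a non-zero exit
# code so the pipeline stops when the gate is not met.
# The coverage.json name and {"line_coverage": ...} shape are invented here.

import json

COVERAGE_THRESHOLD = 0.80

def check_coverage(path: str) -> int:
    """Return 0 if the coverage gate passes, 1 otherwise (used as the CI exit code)."""
    with open(path) as f:
        covered = json.load(f)["line_coverage"]
    if covered < COVERAGE_THRESHOLD:
        print(f"Quality gate failed: coverage {covered:.0%} is below {COVERAGE_THRESHOLD:.0%}")
        return 1
    print(f"Quality gate passed: coverage {covered:.0%}")
    return 0
```

A pipeline step would then call something like `sys.exit(check_coverage("coverage.json"))`, so the gate is enforced automatically on every build rather than policed through process.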
Focus on people
One of the mistakes I’ve found in a lot of test strategies is that they emphasize the tools and processes to be used over what is actually the most vital component of software quality: the people who deliver it. Part of a test strategy should be dedicated to the growth of the profession, training, and engagement. This might not be something that companies ask for or think about when they request a test strategy, but without a clear focus on ensuring testers are properly engaged and developed, you aren’t showing the business the full picture, and it’s important that this be made visible.
As much as it is important to talk about test analysis, automation, and best practices, you also want to cover things like the role of the tester in a project; how testers will be line-managed, developed, engaged, and recruited; and strategies for addressing common issues raised by testers, along with how these improvements will be measured, so that these things can be addressed.
People might frown upon unnecessary documentation and administrative overhead, but the truth is that test/quality strategies remain critical to any organization's quality success. The difference is that they have moved away from bloated documents toward multi-faceted documented approaches that touch each part of the development organization, with an emphasis on building them into tools and governance models.
Don’t shy away from putting strategies in place to drive and improve organizational quality and add the value that proper software design and testing brings.