Craig Risi

Tips for evaluating testing tools



I started this year with a series on testing tools, beginning with an article titled Forget About Finding the Perfect Testing Tool. In it, I made it clear that I’m not the biggest fan of companies pushing tools as the silver bullet for great testing and software quality, and that companies should instead focus on things like processes, frameworks, culture, and people if they want to succeed in developing quality software. However, that doesn’t mean tools don’t serve a purpose, and many companies still need to spend time evaluating which tool is right for them.


So, in this post, I want to discuss an approach to evaluating different testing tools, to ensure that the time and money you spend goes toward the right tool and that, as a company, you can maximize its potential in your testing space.


One of the biggest mistakes many companies make is starting their tool search with a preconceived notion in mind. Whether it’s a fixed preference for an open-source or commercial suite, or choosing the latest popular tool just to appear trendy to their development teams, companies too often make detrimental tool decisions because they rush the process or fail to understand what they really need.


Hopefully, the guidelines below can prevent you from making those same mistakes:


1) Define the Testing Requirements.

Much like you need to fix the bigger issues in your testing space and take the time to fully understand your processes and frameworks, you need to take the time to understand the tool requirements you actually have. It’s only once you’ve closed the gaps in those areas that you can properly understand why and where you may need a tool to help you.


This might not always be something you can spend a lot of time on. For instance, if you need a tool to assist with your test automation or to solve a critical problem right now, you can’t always wait to build a perfect framework when you have nothing to actually work with. In cases like this, where you don’t have the time to fully understand a problem, I would recommend at least trying an open-source solution first to get something in place, and only selecting a commercial tool if that doesn’t work out. This prevents unnecessary licensing costs, though you could argue that the time lost to any wrong tool decision is itself a cost. It’s just more palatable, and easier to fix, if that wrong tool happens to be open-source rather than commercial.


In all situations, though, the problem you are trying to solve needs to be clear, and there needs to be a clear set of priorities and requirements for the things that matter to you. Do you want something that runs in the cloud, or does it need to be on-premises? How important are traceability and meeting audit requirements in your testing? Does your tool need to fit into any existing CI pipelines or other existing development and testing tools? What about programming language requirements, existing skills within the team, or migration features? Oftentimes it can also be a little more specific, like a need to automate a particular type of application or object, or to track a specific performance or security measurement. Having a clear understanding of what you are looking for is key, and you need to identify all of these things before moving on to the next step.
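To make these requirements concrete and comparable later on, it can help to capture them in a structured form. Below is a minimal sketch in Python of what that might look like; the specific requirement names, categories, and fields are illustrative assumptions, not a prescribed list:

```python
# A hypothetical requirements checklist for a tool search.
# The fields and entries are examples only; adapt them to your own context.
from dataclasses import dataclass

@dataclass
class Requirement:
    name: str        # what the tool must do or support
    category: str    # e.g. "technical", "compliance", "team"
    must_have: bool  # hard requirement vs. nice-to-have

requirements = [
    Requirement("Fits into our existing CI pipeline", "technical", True),
    Requirement("Supports our main programming language", "team", True),
    Requirement("Produces an audit trail of test runs", "compliance", True),
    Requirement("Cloud-hosted rather than on-premises", "technical", False),
]

# Hard requirements act as a filter; nice-to-haves feed the later scoring step.
must_haves = [r.name for r in requirements if r.must_have]
print("Non-negotiable requirements:", must_haves)
```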


2) Prioritize the Requirements.

You might now have a clear picture of the things you need, but the truth is that not all features are equally important. For instance, audit or security requirements are likely things you don’t want to take chances on, where you are most willing to spend money on the best solution and then save money elsewhere if needed.

A good way of doing this is simply associating a cost with each specific risk should it materialize. For instance, what is the hypothetical cost of a security incident versus a performance issue, a failed audit, or late delivery and excess maintenance cycles due to poor automation? This might not always be easy to quantify, but it should give you a clear indication of the requirements you truly need versus the things you can possibly compromise on.
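As a rough illustration of that idea, here is a minimal sketch that ranks risks by expected cost (cost if it happens multiplied by likelihood). Every figure below is an invented placeholder, not a real estimate:

```python
# Hypothetical expected-cost prioritization. All numbers are placeholders.
risks = {
    "security incident":         {"cost": 500_000, "likelihood": 0.05},
    "failed audit":              {"cost": 200_000, "likelihood": 0.10},
    "performance issue in prod": {"cost": 80_000,  "likelihood": 0.20},
    "excess automation upkeep":  {"cost": 30_000,  "likelihood": 0.50},
}

# Rank by expected cost: the higher it is, the less room to compromise.
ranked = sorted(risks.items(),
                key=lambda kv: kv[1]["cost"] * kv[1]["likelihood"],
                reverse=True)

for name, r in ranked:
    print(f"{name}: expected cost ~ ${r['cost'] * r['likelihood']:,.0f}")
```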


Also, as already mentioned, there is no such thing as a perfect tool or a tool that does everything well, so when making your evaluations it’s important to be clear about which things you are willing to compromise on and which features matter most to you. Don’t get sidetracked by looking for the tool with the most features, or the one that seemingly offers the best value for money, or one that is entirely open-source, without fully understanding how those different features might impact your organization.


3) Know Your Budget.

Unfortunately, all companies have a limit to what they can spend on tools; even for important things like security, there isn’t an unlimited budget. Having a sense of the budget and funding available will help you know which tools money can be spent on and where money should be saved as much as possible.


It’s about more than just understanding a dollar value, though, because you also need to factor in things like the overall development costs of different projects, hardware, server, or existing cloud costs, along with staffing salaries. This lets you understand not just the money that can be spent, but also how to effectively calculate ROI on any tool you want to implement and see where it can also save money.
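To show what such an ROI calculation might look like, here is a minimal sketch under entirely assumed numbers; the license cost, hours saved, and hourly rate are placeholders you would replace with your own figures:

```python
# Hypothetical first-year ROI for a testing tool. All inputs are placeholders.
license_cost = 20_000        # annual licensing
setup_and_training = 15_000  # one-off implementation and training
hours_saved_per_month = 120  # e.g. less manual regression testing
loaded_hourly_rate = 60      # fully loaded cost of an engineer hour

annual_savings = hours_saved_per_month * 12 * loaded_hourly_rate
total_cost = license_cost + setup_and_training
roi = (annual_savings - total_cost) / total_cost

print(f"Annual savings: ${annual_savings:,}")
print(f"First-year ROI: {roi:.0%}")  # above 0% means the tool pays for itself
```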


Many of the people reading this article may be technical people who care little for these financial matters, but at the end of the day they are vital to the success of your business, team, and project, so understanding the bigger picture is essential here.

4) Identify the Available Testing Tools.

This is possibly the easiest step in the process, but also possibly the easiest to waste time on. While it’s easy to browse the features and requirements of testing tools on the internet or read many different blog posts, most tools’ features are either oversold or simply not specific enough for your needs, and there is often no way of knowing whether a tool will really work without trying it out.


So, how do you then select the ones worth trialing amidst the myriad of testing tools out there, all possibly offering something similar? Well, that is the tricky part, but if you’ve done your requirements analysis in step 1 correctly, you should be able to narrow your list down to a shortlist of at least three tools. If you can’t, you may need to revisit your requirements and go into more detail on specific technical or business requirements that might narrow down your decision.


This is also where that prioritization and budget understanding come in handy, as knowing what is most important and where money can be spent will help you identify the tools that best suit those specific requirements and budget needs.


However, even if there is a clear winner in a particular department, try not to narrow the choice down to just one; ensure you have at least three tools to evaluate. It’s only once you put the tools through their paces within your existing frameworks and processes that their true abilities and feasibility will be uncovered.


5) Evaluate the Testing Tools.

And now we get to the important part: actually trying out the different tools. This is where you want to get key architects and technical stakeholders involved, not just in the initial setup and configuration of the tool, but also in taking it through a short mock project, or even an actual sprint iteration if possible.


Choosing the right people and team to get involved is also critical. Even though it’s important for companies to standardize processes and tools as much as possible, the truth is that not all teams are equal, and some might be better suited to evaluating the problem you are facing. For instance, if one team is struggling to effectively automate against their application with the existing set of tools, or struggling to grasp those tools, it would be better to get them involved, with the technical oversight of a key architect, than a team that doesn’t have these challenges. Though it’s perhaps worth remembering that process or framework issues, rather than a tool, may be the best way to resolve a team’s problems, so establish that those aren’t the real issue before doing so.


The same goes for selecting a specific use case that you want to perform with each tool. It’s important to make clear what actions need to be performed and how specific outcomes will be measured, so you can understand which tool might best meet your needs.


Once you have the right people in place and a common use case, it’s important that they evaluate the tools across the following criteria (a simple scoring sketch follows the list). Note that this is just a guideline, and your aforementioned requirements may dictate that other things are more important, so use discretion when performing these evaluations:


Functionality: The testing tool should be able to perform the required tests and deliver the desired testing outcomes against your chosen use case and evaluation criteria. This is where you look at a tool’s ability to completely test the solution, provide the right results, and measure up against certain criteria. For instance, if you are evaluating a performance testing tool, you will want to ramp up your scripts against a consistent sample size and see which tool provides the best results, or which can provide the most data for finding underlying issues.


Installation and setup: The ease of installing and setting up a tool is critical. If a tool is on-premises, how easy is it to install and integrate into the rest of your existing processes and tooling? If it is cloud-based, how easy is it to configure for effective use within your organization, and are there any security hurdles to overcome to get it integrated with other tools?


Integration: The testing tool should be able to integrate with the company's existing tools and processes. For example, if the company uses a continuous integration (CI) and continuous delivery (CD) pipeline, the testing tool should be able to integrate with the CI/CD tools. It should also work well within the context of a team development cycle, programming languages, and frameworks.


Ease of Use: No matter how good a tool is, if it is not easy to use, teams will struggle to adopt it or use it effectively. It’s important that the testing tool be user-friendly, have a clear user interface, and not require extensive training to use. Something that fits existing skill sets or experience is helpful, though you also don’t want to tie yourself to one approach, especially if some of those skill sets and experiences are built around legacy applications while the tool you are evaluating is needed for something more modern. Even then, a tool should be easy to pick up and use for those with the right level of expertise.


Cost: You will likely have already chosen tools that you know you can afford, but even then, it’s worth looking at the cost of the different tools against their feature sets and deciding which one the team feels provides the best overall package for the cost. In evaluating this, teams should try to quantify the licensing, maintenance, and training costs required to operate the tool and get other teams using it effectively. Sometimes the tool with the fewest features but the easiest adoption leads to the biggest long-term savings in support costs, so it’s important to evaluate the entire package in this regard.


Support: Any tool needs to be supported and maintained, so it’s important that these capabilities are also evaluated. The testing tool should be able to operate easily on a given server if on-premises, and should have a strong support network, including a dedicated support team, online resources, and a user community. Teams should actually log issues and questions with the support team during the evaluation process and scan the documentation carefully to evaluate just how effective these aspects of the tool and its vendor are. Obviously, for an open-source tool, there may not be a full-time team or company behind the support processes, in which case the effectiveness of the documentation and online community is even more critical.


Data, security, and privacy: How a tool deals with user data, test data, and anything confidential is critical, as are the security protocols in place to ensure there are no vulnerabilities in the chosen tool. Do a thorough evaluation wherever possible to confirm that the tool meets these important criteria, as you don’t want internal systems to be compromised in any way by a tool that is lacking in these departments. Most commercial and open-source tools meet the strictest standards, but that doesn’t mean they aren’t prone to issues, especially when using older versions of the tool.
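As promised above, here is a minimal sketch of how an evaluation team might turn these criteria into a comparable score. The tool names, weights, and scores are entirely made up for illustration; your real weights should come from the prioritization in step 2:

```python
# Hypothetical weighted scoring of shortlisted tools. Every name, weight,
# and score below is a placeholder; derive real weights from your priorities.
criteria_weights = {
    "functionality": 0.25, "installation": 0.10, "integration": 0.20,
    "ease_of_use": 0.15, "cost": 0.10, "support": 0.10, "security": 0.10,
}
# Scores from 1 (poor) to 5 (excellent), gathered during the trial period.
scores = {
    "Tool A": {"functionality": 5, "installation": 3, "integration": 4,
               "ease_of_use": 3, "cost": 2, "support": 4, "security": 5},
    "Tool B": {"functionality": 4, "installation": 4, "integration": 3,
               "ease_of_use": 5, "cost": 4, "support": 3, "security": 4},
}

for tool, s in scores.items():
    total = sum(criteria_weights[c] * s[c] for c in criteria_weights)
    print(f"{tool}: weighted score {total:.2f} / 5")
```

A weighted total should inform the decision rather than make it; a tool that scores well overall but fails a must-have requirement from step 1 should still be disqualified.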


6) Select and Implement the Testing Tool.

Once the testing tools have been evaluated across the different criteria, a tool can be chosen based on the most important criteria of the evaluation, as identified by your requirements. Once the tool is chosen, though, the work doesn’t stop: the team should begin preparing a bigger rollout and implementation plan for the tool, along with any specific training that needs to happen across different teams.


Often a phased approach works best, where the tool is rolled out to one or two teams at a time so they can be properly supported through the implementation process. During this time, any critical issues or problems should be logged and tracked on an implementation-specific board and dealt with by a specific team or the people assisting with the plan (some commercial vendors will provide these specialized resources) to ensure they are resolved as quickly as possible. If any critical issues arise during this process that cannot be resolved, the company can still choose to pull out of a full implementation of the tool.

7) Monitor and Evaluate the Testing Tool.

At this point, with all the effort that has gone into choosing a tool, you would think it will fit in well with the organization and fulfill its purpose. However, that doesn’t mean the evaluation stops, as you want to track its effectiveness over the long term to ensure it continues to meet the needs of the organization and delivers its expected ROI. This monitoring process typically involves pre-existing measures of team performance, along with a few other specific measures introduced as a result of the tool. Initially, improvement might not be immediate, and performance can even seem slightly worse during the adoption phase, but you should start to see steady improvement over a 3-12 month period before a tapering off takes place and the tool reaches maximum effectiveness.
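If you want something concrete to track, a simple sketch like the one below can help spot whether a metric is trending the right way after rollout. The metric and the monthly values are invented for illustration:

```python
# Hypothetical month-over-month tracking of a tool-related metric
# (e.g. automated regression coverage, in percent). Values are placeholders.
coverage_by_month = [42, 40, 41, 45, 50, 57, 63, 68, 71, 73, 74, 74]

# Compare the average of the first and last three months to gauge the trend,
# smoothing out the dip that often appears during the adoption phase.
first_quarter = sum(coverage_by_month[:3]) / 3
last_quarter = sum(coverage_by_month[-3:]) / 3
print(f"Adoption-phase average: {first_quarter:.1f}%")
print(f"Recent average:         {last_quarter:.1f}%")
print("Improving" if last_quarter > first_quarter else "Re-evaluate the tool")
```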


During this time, it’s also important to evaluate things like tool adoption, as teams may slip back into old habits and not use the tool effectively, or some development needs may have changed such that the tool no longer meets them. The team may then need to re-evaluate whether to continue with the tool or start a new evaluation process.


Navigating the world of testing tools is not easy, which is why, in the coming weeks, I will also be highlighting several popular testing tools and looking at their different pros and cons to help make some of that decision-making easier.
