Craig Risi

The Secrets to Independent Test Design


In this last segment about making a scalable framework for your business, I want to talk about making your automation tests independent of each other. A common thing that tends to break automated execution is a dependency on other tests or systems, which causes unnecessary failures or prevents a multitude of tests from running. To ensure your automation pack mitigates this, each test needs to be able to run independently, without putting a strain on your performance. The following tips should help in ensuring your tests are suitably independent to be fully scalable.


Each test must be able to set itself up

Each test should be able to get the system to the point where it needs to execute without relying on previous tests to perform its setup. This is pretty easy to describe, but difficult to get right in a way that doesn't mean simply repeating setup and teardown steps in every test.


This can be done by identifying a condition that a test can easily check to determine if the necessary setup already exists for the test to proceed. If it does, proceed with the test; if not, there should be a simple JSON file or config job that can be run to get the system to where it needs to be. This could also be done by using SQL to change system settings, but as SQL performance is often slow, using some form of config or JSON file would be ideal.
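As a rough illustration, here is a minimal sketch of that check-then-setup pattern in Python. The endpoint names and the config file are hypothetical; the point is that the existence check is cheap and the fallback setup is driven by a simple JSON file rather than SQL.

```python
import json
import requests

BASE_URL = "https://app-under-test.example.com"  # hypothetical endpoint


def ensure_setup(config_path="test_setup.json"):
    """Run setup only if the system is not already in the required state."""
    # Cheap check first: ask the application whether the setup already exists.
    response = requests.get(f"{BASE_URL}/api/health/test-data", timeout=5)
    if response.ok and response.json().get("seeded"):
        return  # Setup already exists; proceed straight to the test.

    # Fall back to applying the settings from a simple JSON config file.
    with open(config_path) as f:
        settings = json.load(f)
    requests.post(f"{BASE_URL}/api/config", json=settings, timeout=30)
```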


The trick to making this work is ensuring the condition that identifies whether setup is required is an easy one to process. A fast way to do this is to have a simple flag that is either set or not. Try not to set this flag in the tests themselves though, as that creates dependencies that can cause problems if a previous test sets the flag incorrectly. If possible, this should be something on the application under test itself that can be easily read through the API, without requiring a DB or UI check.
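One way to keep that flag check out of the tests themselves is a shared fixture that reads it from the application's API before each test. A sketch, assuming hypothetical flag and config endpoints:

```python
import pytest
import requests

BASE_URL = "https://app-under-test.example.com"  # hypothetical endpoint


@pytest.fixture(autouse=True)
def setup_if_needed():
    """Read the readiness flag through the application's API, not from a
    file a previous test may have set incorrectly, and only run setup
    when the flag says it is required."""
    response = requests.get(f"{BASE_URL}/api/flags/setup-complete", timeout=5)
    if not (response.ok and response.json().get("value") is True):
        # Hypothetical config job that brings the system to a known state.
        requests.post(f"{BASE_URL}/api/config/reset-test-state", timeout=30)
```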


Tests should be short, simple and specific

I’ve mentioned this before, but tests should not be long-winded; they should be quick to execute, with a clear objective. This can be difficult for end-to-end tests, which often try to step through a series of processes together to determine a final success or failure. Many of those processes should be configurable through API or config file changes, which means the test can still provide a full-solution perspective while testing only what it needs to. Those other steps should be tested in their own procedures, specifically through unit or component tests, and not form part of the end-to-end tests. Don’t repeat your tests; it’s a waste of time.
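For example, an end-to-end checkout test doesn't need to re-drive registration and browsing; those preconditions can be seeded directly through the API. A minimal sketch, with hypothetical endpoints:

```python
import requests

BASE_URL = "https://app-under-test.example.com"  # hypothetical endpoint


def test_order_checkout():
    """Verify checkout only; the account and cart it depends on are
    created directly through the API rather than by stepping through
    registration and browsing flows already covered by other tests."""
    # Seed precondition state via hypothetical API calls.
    user = requests.post(f"{BASE_URL}/api/users", json={"name": "test-user"}).json()
    requests.post(f"{BASE_URL}/api/carts/{user['id']}/items", json={"sku": "ABC-123"})

    # The single, specific objective of this test: checkout succeeds.
    response = requests.post(f"{BASE_URL}/api/carts/{user['id']}/checkout")
    assert response.status_code == 200
```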


You might ask what the point of end-to-end regression tests is if you aren’t covering the full functionality. I would argue that if your unit testing can’t cover something to expectation, that is where the gap lies, and that piece should form part of an end-to-end test. End-to-end or solution tests should rather focus only on those things unit tests will never get to. At the same time, repeating steps that already exist in the unit tests should be unnecessary, and using preset configurations should achieve this.


Tests should be traceable

The reason a test exists should be clear in the way it maps back to specific product requirements, and when a test fails, it should be evident exactly which product requirements are affected. This is not something you would typically build into your automation framework, but rather into your code deployment tool, which sends an XML file to your test management tool.
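One lightweight way to capture that mapping at the test level is to tag each test with the requirement IDs it covers and surface them in the JUnit XML that goes downstream. A sketch using a hypothetical pytest marker convention (the marker name would need registering in pytest.ini):

```python
import pytest

# Hypothetical convention: tag each test with the requirement IDs it covers.
@pytest.mark.requirements("REQ-101", "REQ-204")
def test_password_reset():
    ...  # test body omitted; the traceability marker is the point here


# In conftest.py: copy the IDs into user_properties so they appear in the
# JUnit XML report (pytest --junitxml=results.xml) that the test
# management tool consumes.
def pytest_collection_modifyitems(items):
    for item in items:
        marker = item.get_closest_marker("requirements")
        if marker is not None:
            item.user_properties.append(("requirements", ",".join(marker.args)))
```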


Along with traceability to requirements and defects, each test should report on its status, whether passed, failed or blocked, and take a snapshot and details of the error wherever possible. While the steps to do this should be called from separate functions, each test should report on its progress before moving on to the next test. This ensures that you have peace of mind and understand the behaviour of your tests better; a set of tests that doesn’t report its progress gives you no confidence in your failures. It might not seem like this last step relates to test independence, but again it’s all about your tests being able to give you the full picture regardless of when and how they execute.
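Here is a sketch of what that per-test reporting might look like in a pytest conftest.py, logging each outcome and capturing a screenshot on failure. The `browser` fixture is hypothetical; substitute whatever driver your UI tests use:

```python
import logging

import pytest

log = logging.getLogger("test-status")


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    """Log each test's outcome as it finishes, and snapshot failures."""
    outcome = yield
    report = outcome.get_result()
    if report.when == "call":
        log.info("%s finished with status: %s", item.nodeid, report.outcome)
        if report.failed:
            browser = item.funcargs.get("browser")  # hypothetical UI fixture
            if browser is not None:
                browser.save_screenshot(f"{item.name}.png")
            log.error("Failure detail: %s", report.longreprtext)
```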


Traceability also helps you to identify when tests are no longer required and can be removed from your regular regression run.


A good way of knowing whether your tests are suitably independent is to execute them both on their own and as part of a larger pack; they should behave the same either way. Your debug logs should also show whether test setup occurred and how long each test took to execute, so you can tell if the measures you are putting in place are effective.
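A small helper like the following (a sketch, with hypothetical check and setup callables) makes both of those measurements visible in the debug log:

```python
import logging
import time

log = logging.getLogger("test-timing")


def timed_setup(check, setup):
    """Run the check-then-setup pattern, logging whether setup actually
    ran and how long the whole step took, so independence measures can
    be verified from the debug logs."""
    start = time.perf_counter()
    needed = not check()
    if needed:
        setup()
    log.debug("setup ran: %s, took %.3fs", needed, time.perf_counter() - start)
```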


The beauty of having tests that run independently and efficiently is that you can scale your execution as you need, and even have multiple tests executing against different instances of your system, without needing to worry about interdependencies. We expect our applications to scale with use; why not our tests too?
