Craig Risi

Diving Deeper into UI Automation

Updated: Jul 16, 2021



You've come up with your relevant UI test scenarios and strategy, and now you want to get them automated. UI testing is time-consuming, so automating it helps you increase your agility as a development team. If you've followed the first few steps towards consistent UI object design, your application should already be more testable, which is a good place to start. If not, UI automation is still achievable, but it will require more effort.


The biggest challenge of UI automation, though, is that it is incredibly flaky in execution. UI testing is not as easily defined and constrained as the lower testing levels, which means that even though you will have fewer tests at this level, it can often be the most complex part of the system to automate effectively. Ultimately you want to prioritise your automation efforts on unit tests, with API, component and integration tests making up a large part of the required coverage, but where you still need to automate at the UI level, the guidelines below should help ensure you do it right.


Note that I am focusing on aspects of automation that are specifically important to UI automation. How to build a core automation framework is something I've written about previously in a series of articles, and I would encourage you to apply some of those basics to your overall framework.


Automate User Behaviour, not technical requirements

As mentioned in my previous article, where other levels of testing should focus on the technical details of the application, your UI automation should focus on the behaviour of the software. You aren't trying to test how the software works technically; unit and component tests can do that. UI automation should ensure that the software works the way a user would use it. This doesn't just mean automating these behaviours, but also designing your tests around them, essentially following a form of behaviour-driven development.


This doesn’t just make tests easier to understand at a business and non-technical level, but also a strict code organisation pattern to avoid code duplication. This is done by having separate components called steps or actions that will be the building blocks for your tests so that our actual test themselves remain relatively simple:


@Test
public void someTest() {
    // given
    Something something = getSomething();

    // when
    something.doSomething();

    // then
    assertSomething();
}


Try Snapshot Testing

Again, a lot of testers misunderstand the purpose of their UI tests. While part of UI testing is to ensure the successful integration of your different components through the UI, it is also there to ensure that everything renders and displays as expected visually. Depending on your architecture, though, you don't necessarily need a fully operational site to see what the UI will look like when rendered.


Some frameworks allow you to render UI components statically at requested resolutions and screen sizes, and then compare the result against a visual snapshot from a previous successful run. This has pros and cons. It certainly speeds up your automation effort, as these tests run a lot quicker, but it doesn't cater for how certain devices or browsers may still render components a little differently, and it places a lot of reliance on your screenshots being compared accurately, something that can easily be thrown off by the slightest rendering difference. I think snapshot testing is a great approach to visual regression given its speed, but it probably won't catch everything visually. You will still want to automate other integration tests from a UI perspective, but they should simply verify object behaviour and not the rendering of your screens.
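
As a rough sketch of the underlying idea, and assuming you already have a screenshot of the rendered component and a stored baseline image on disk, a naive comparison might look like the class below. The class name and file handling are purely illustrative; dedicated visual-testing tools (such as AShot or Applitools) add tolerances, ignore regions and reporting on top of this basic check:

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class SnapshotComparator {

    // Returns true when the new screenshot matches the stored baseline pixel for pixel.
    // Real visual-testing tools usually allow a tolerance rather than an exact match.
    public static boolean matchesBaseline(File screenshot, File baseline) throws IOException {
        BufferedImage actual = ImageIO.read(screenshot);
        BufferedImage expected = ImageIO.read(baseline);

        if (actual.getWidth() != expected.getWidth()
                || actual.getHeight() != expected.getHeight()) {
            return false;
        }
        for (int x = 0; x < actual.getWidth(); x++) {
            for (int y = 0; y < actual.getHeight(); y++) {
                if (actual.getRGB(x, y) != expected.getRGB(x, y)) {
                    return false;
                }
            }
        }
        return true;
    }
}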


Use Page Object design patterns and principles

UI testing is a hard and treacherous road full of potholes. However, the same design patterns that make for good code apply to test automation as well, and if you design your automation in a modular and maintainable way, you will alleviate many of the issues that cause automated tests to constantly need reworking.


The concept of Page Objects is to make UI automation tests consistent, avoid code duplication, improve readability and organise code for web-page interaction. When creating web tests you constantly need to interact with web pages and the web elements presented on them (buttons, input fields, images, etc.). The Page Object pattern takes this requirement and applies object-oriented programming principles on top of it, so that you interact with all pages and elements as objects.


This essentially means that for every object on a page that you need to interact with, a function should exist that contains the different behaviours of that object, so that when you write an actual test, you simply reference the object on the page, the action you want performed on it and, where required, assert the correct outcome.


For example, if you need to click on a button, you don't need to care about how to retrieve that button in the test, as this is already handled in the page object. You should have the object of the page you are looking for, and it should already contain the button object you need inside it. All you need is a reference to that button object and to apply the "click" action to it. You can think about all pages and web elements like this:


For each page and element you need to interact with, you should create a separate object that acts as the reference to that web element in your tests. Below is an example of how this works and helps in writing a better test. First, a test written without page objects:


WebDriver webDriver = thisDriver;
webDriver.navigate().to("www.anysite.com");

String heading = webDriver.findElement(By.cssSelector(HEADING_ELEMENT)).getText();
Assert.assertEquals(heading, "Welcome to the Site Header");
Assert.assertEquals(webDriver.getTitle(), "Site Header");

Select objectsFromSelectElement = new Select(webDriver.findElement(By.cssSelector(OPTION_TO_SELECT)));
Select objectsToSelectElement = new Select(webDriver.findElement(By.id(OPTION_TO_SELECT)));
objectsFromSelectElement.selectByValue("1");
objectsToSelectElement.selectByValue("10");

webDriver.findElement(By.cssSelector(FIND_OBJECTS)).click();
Assert.assertTrue(webDriver.findElements(By.cssSelector(TOTAL_NO_OBJECTS)).size() > 0);


If we instead apply the Page Object pattern and refactor the same test, it becomes much simpler to navigate and understand:

homePage.open();
homePage.waitUntilPageLoaded();

Assert.assertEquals(homePage.getTitle(), "Site Header");
Assert.assertEquals(homePage.getHeadingText(), "Welcome to the Site Header");

homePage.findObjects("1", "10");
Assert.assertTrue(selectPage.getObjects().size() > 0);
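
For completeness, here is a minimal sketch of what the HomePage page object behind this test might look like. The URL, locators and the Selenium 4 wait syntax are assumptions purely for illustration; your own page objects will wrap whatever elements your pages actually expose:

import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.Select;
import org.openqa.selenium.support.ui.WebDriverWait;

public class HomePage {

    // Locators and URL are placeholders for this example
    private static final String URL = "http://www.anysite.com";
    private static final By HEADING = By.cssSelector("h1.site-heading");
    private static final By OBJECTS_FROM = By.cssSelector("#objects-from");
    private static final By OBJECTS_TO = By.id("objects-to");
    private static final By FIND_OBJECTS = By.cssSelector("#find-objects");

    private final WebDriver driver;

    public HomePage(WebDriver driver) {
        this.driver = driver;
    }

    public void open() {
        driver.navigate().to(URL);
    }

    public void waitUntilPageLoaded() {
        // Polls until the heading is visible rather than sleeping for a fixed time
        new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.visibilityOfElementLocated(HEADING));
    }

    public String getTitle() {
        return driver.getTitle();
    }

    public String getHeadingText() {
        return driver.findElement(HEADING).getText();
    }

    public void findObjects(String fromValue, String toValue) {
        new Select(driver.findElement(OBJECTS_FROM)).selectByValue(fromValue);
        new Select(driver.findElement(OBJECTS_TO)).selectByValue(toValue);
        driver.findElement(FIND_OBJECTS).click();
    }
}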

To sum up, the Page Object pattern brings you these benefits:

·      Makes your tests much clearer and easier to read by providing interaction with pages and application page elements

·      Organises code structure

·      Helps to avoid duplication (you should never specify the same page locator twice)

·      Saves a lot of time on test maintenance and makes the UI automation pipeline faster, which reduces costs


This approach takes considerably longer to automate at first, but it greatly reduces your maintenance effort. While it is faster to just write an automated test case that performs a given action on the object directly within the test, it also means that if you interact with that object multiple times, you will duplicate the code. Every time the object's specification or the test needs to change, you then have to update the code in every location instead of just one place. This is particularly important when you consider the handling of error conditions within your object.


If you want to make this test even cleaner and more maintainable, you can introduce one more level of abstraction: steps or keywords. Different frameworks use different names for these modules, but the principles are the same. Steps (keywords) form modules of actions that you can reuse in any test. Once these step (keyword) modules are written, all you need is to reference the module in your test and you can use all the functionality those modules provide. The biggest issue with keywords, though, is that the abstraction is taken to such a degree that the tests become almost too easy to write from a technical perspective, and a gap develops between the people who understand the test in its totality and those simply scripting it. This creates key-person dependencies and slows down the overall scripting process, as the person writing and running the tests has to wait for someone else to modify the keyword or object logic rather than doing it themselves.
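
As a hedged illustration of such a step (keyword) module, the hypothetical SearchSteps class below bundles the page-object calls from the earlier example into business-level actions that any test can reuse (SelectPage is assumed to be the page object behind selectPage in the refactored test above):

import org.testng.Assert;

public class SearchSteps {

    private final HomePage homePage;
    private final SelectPage selectPage;

    public SearchSteps(HomePage homePage, SelectPage selectPage) {
        this.homePage = homePage;
        this.selectPage = selectPage;
    }

    // One business-level action composed of lower-level page-object calls
    public void searchForObjects(String from, String to) {
        homePage.open();
        homePage.waitUntilPageLoaded();
        homePage.findObjects(from, to);
    }

    // A shared verification step, so the assertion stays consistent across tests
    public void verifyObjectsWereFound() {
        Assert.assertTrue(selectPage.getObjects().size() > 0,
                "Expected at least one object in the results");
    }
}

A test then reduces to two readable calls: searchForObjects("1", "10") followed by verifyObjectsWereFound().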


Avoid timeouts unless there are specific test requirements

Timeouts, which essentially consist of sleep() or wait() commands, are often inserted into scripts because the automated test runs faster than the UI is able to respond. They force the test to wait a pre-set amount of time before continuing with its actions. With the behaviour of web applications depending on many factors like network speed, machine capabilities or the current load on application servers, environments can slow down and become unpredictable, which is where adding a timeout can sometimes come in handy to give the system enough time to recover and ensure more accurate test runs.


The problem with timeouts is that they slow down the execution of your test pack considerably. This does not seem like much if a test contains only a few seconds' worth of timeout commands. However, if you consider that a large number of tests may need to be executed many times a day, the wasted time escalates considerably. This doesn't just slow down your automation pipeline; each time you grow your UI coverage and reuse those same object interactions with timeouts in them, you only exacerbate the slowdown in test execution.


Timeouts also mask actual performance faults within an application, because you can simply extend the timeout to keep the script working rather than force the script to fail when things have waited far too long.
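
Where a wait is genuinely needed, Selenium's explicit waits are a better option than fixed sleeps, because they poll for a condition and continue as soon as it is met. A brief sketch, assuming a driver instance, the usual Selenium imports and the Selenium 4 syntax (the locator is only an example):

// Instead of a fixed pause such as Thread.sleep(5000), which always costs the full five seconds:
WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
WebElement results = wait.until(
        ExpectedConditions.visibilityOfElementLocated(By.cssSelector("#search-results")));
// Execution continues the moment the element is visible, or fails with a TimeoutException after 10 seconds.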


Do not run ALL tests across ALL target browsers and platforms

When it comes to manual testing, I think we generally understand that we can't test everything, and so we tend to prioritise our testing efforts where they are best optimised. When it comes to automation, though, teams try to build in everything they hadn't gotten to previously, across all tests. This is wasteful and unnecessary. Not only is it unnecessary testing, it also increases the number of tests that need to be automated and maintained, which becomes increasingly wasteful the more browsers and platforms you try to cater for.


Browser-compatibility automation can instead be applied to a limited test suite containing tests that interact with all web elements and perform all the main workflows at least once. As an example, let's assume we need to verify search functionality across three browsers that we need to support (Firefox, IE, Chrome), as well as different search-term combinations (let's say we have 100 terms).


What would you do in this case? Are you going to run all 100 combinations on each browser, 300 runs in total? That doesn't sound wise. Let's start with the browser compatibility. All we need is to ensure that the search input, search button and search-result list all work correctly in all three browsers. Should we run the search 100 times to verify that? Of course not; one run is more than enough to verify the elements' behaviour in each target browser.


The other 99 combinations are there purely to verify the relevance of the search results. They are not related to browser-compatibility testing and can therefore be run in a single browser, or better still as a unit or component test, which doesn't require rendering and is even faster to execute. Even if you can't test it at a lower level, 99 tests in one browser instead of three is clearly the better approach.
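
If you are using TestNG, one way to express this split is with groups: tag the small compatibility suite and run only that group against every browser, while everything else runs against a single browser. The group name and the page objects below are carried over from the earlier example purely for illustration:

// Small compatibility suite: touches each element and main workflow once; run this group on every supported browser.
@Test(groups = {"cross-browser"})
public void searchControlsWorkInEachBrowser() {
    homePage.open();
    homePage.findObjects("1", "10");
    Assert.assertTrue(selectPage.getObjects().size() > 0);
}

// Bulk of the coverage: the remaining combinations; excluded from the cross-browser runs in the suite
// configuration and executed against a single browser only (or pushed down to unit/component level).
@Test
public void searchReturnsRelevantResultsForTerm() {
    homePage.open();
    homePage.findObjects("2", "20");
    Assert.assertTrue(selectPage.getObjects().size() > 0);
}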


Take screenshots for failure investigation

Debugging a visual problem requires a visual aid, so you will want your UI automation to take a screenshot when it encounters a failure. This will save you a lot of time when investigating the reasons for a test failure. You can implement a mechanism that takes a browser screenshot whenever a test fails.


Most tools come with a mechanism to do this quite effectively, but if not, the following code can come in handy. It can be written as a separate listener class that takes a screenshot whenever a test fails (example provided in Java using Selenium WebDriver and TestNG):


import java.io.File;
import java.io.IOException;

import org.apache.commons.io.FileUtils;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.testng.ITestContext;
import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

public class ScreenshotOnFailureListener extends TestListenerAdapter {

    // Folder where failure screenshots are written
    private static final String filePath = "test-output/screenshots/";

    @Override
    public void onTestFailure(ITestResult result) {
        System.out.println("***** Error " + result.getName() + " test has failed *****");
        String methodName = result.getName().trim();
        ITestContext context = result.getTestContext();
        // The WebDriver instance is expected to have been stored on the test context during setup
        WebDriver driver = (WebDriver) context.getAttribute("driver");
        takeScreenShot(methodName, driver);
    }

    public void takeScreenShot(String methodName, WebDriver driver) {
        File scrFile = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        // Saves the screenshot to disk, named after the failed test method
        try {
            FileUtils.copyFile(scrFile, new File(filePath + methodName + ".jpg"));
            System.out.println("*** Placed screen shot in " + filePath + " ***");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}


What is important is to save your screenshots in a format that doesn't take up too much space (.jpg is often smaller than .bmp or .png). You also need to be aware of how much space these images take up in your repository, so it's important to clean them out regularly to avoid infrastructure problems. This also requires teams to actually investigate these failure screenshots and not let the opportunity go to waste.


Keep the pipeline green

This is not specific to UI automation, but given the flakiness often experienced with UI automation, I find issues with this concept occur most often here, which is why I mention it in this article.


On the one hand, this is one of the simplest principles to understand, but on the other, many engineers ignore it. By a "green tests policy" I mean that even though at times you can expect certain tests to fail for a variety of reasons, you should still make sure your pipeline achieves a 100% pass rate. After all, if a test fails, it should indicate a problem; you don't want multiple tests failing on a regular basis in your pipeline just because you're expecting them to.


There are situations where an application already has a list of bugs that are prioritised lower, and the team is not going to fix them in the foreseeable future. In this case, most engineers just ignore the corresponding tests. They leave them in the run and end up with many red tests at the end of the execution. Once the execution has finished, they go over the failed tests and check whether all the red tests are the ones expected to fail because of the known bugs, or whether there are new issues.


This is not a good way of doing things. Firstly, each time an execution finishes you have no idea whether you have unexpected issues or not; if the result was red and is still red, the run status tells you nothing. Secondly, to understand whether you really had unexpected errors or whether all of them were expected, you need to spend time investigating. If it happened only once, that would be fine, but test-result validation is a repeated process that you will likely do many times a day. This is not where an engineer should be spending their effort; you lose a huge amount of time by conducting the same unnecessary checks again and again.


Instead, if your run contains tests that are expected to fail, the best thing you can do is move them to a separate run and exclude them from the main test execution. This will save you a lot of time when investigating failed builds. When you separate all the expected failures from the build, you know that if a test execution ends with even one red test, it is a real, new issue. In any other case, everything should be green.
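
In TestNG, for example, this can be as simple as tagging the known failures with a group and excluding that group from the main suite; the excluded group can then be run on its own whenever you want to re-check the known bugs. The group name, description and suite snippet below are illustrative:

// Known failure, tracked against an open bug; kept out of the main pipeline run so the build stays green.
@Test(groups = {"known-issues"}, description = "Fails due to an open bug in the date picker")
public void datePickerRetainsSelection() {
    // test body unchanged; it only runs in the dedicated known-issues suite
}

// In the main suite definition (testng.xml), the group is excluded:
//   <groups><run><exclude name="known-issues"/></run></groups>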


Use data-driven tests instead of repeated tests

While repeated tests are more predictable and give reliable outcomes, they are not system-specific and often only work in isolated unit and component tests, not in fully integrated functional tests where you can't rely on mocking and instead want your test to be representative of the customer and flow through the entirety of the system.


You want to do this because your customers use data in ways that often can't be predicted and are incredibly difficult to accurately create. I'm not saying you should use actual customer data, because that would be illegal in many countries and you should never have access to it anyway. Rather, you should use data that has been masked and transformed from production data, or alternatively has been carefully created to be as representative as possible.


What this allows you to do is, rather than relying on the same inputs and expected outputs every time your tests run, draw from a wide variety of data that doesn't always look the same, and therefore test more behaviours and permutations, hopefully finding some unexpected defects along the way. Data-driven tests are not just good for UI tests but for API tests as well, though I think they add the most value here, which is why I mention them.
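
In TestNG this is typically done with a data provider. The sketch below, assumed to sit in the same test class as the earlier page-object examples with the usual TestNG imports, feeds the same test with multiple input rows; in practice the rows would be loaded from a masked production extract or a generated data set rather than hard-coded:

@DataProvider(name = "objectRanges")
public Object[][] objectRanges() {
    // Hard-coded here only to keep the example short
    return new Object[][] {
            {"1", "10"},
            {"3", "25"},
            {"5", "50"}
    };
}

// The same test runs once per row, pushing varied data through the full UI journey.
@Test(dataProvider = "objectRanges")
public void findObjectsReturnsResults(String from, String to) {
    homePage.open();
    homePage.findObjects(from, to);
    Assert.assertTrue(selectPage.getObjects().size() > 0);
}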

It's important to remember that automation is only part of your UI testing strategy, and your automation can only be as good as the overall test design allows it to be. Thankfully, given the complications of UI testing, we can reduce a lot of the inherent problems to make your automated tests more focused, better designed and a lot easier to maintain. If you can achieve this and ensure the majority of your functional testing is done at a modular level, you should be able to create an effective automated UI test suite that adds value to your company and team for many years to come.
