Craig Risi

Diving Deeper into API Automation



In a previous article, we looked at some of the important things to be aware of when it comes to API testing and how you should go about it. Ultimately though, API testing is the one aspect of your software testing process that is primed for automation, so you wouldn’t want to even think of testing an API without a clear strategy for automating it.


API automation is a lot faster and often easier than UI automation, though that doesn’t mean it is without complexity or that it deserves less attention. In fact, depending on your architecture, the majority of your tests should be focused at the API level, so you want to understand what goes on here in detail. Getting your API automation right is also a necessary foundation for any attempt at continuous delivery, as it lets you tick all the automated integration boxes with far less difficulty than at the UI level. So if you are looking to make full use of your Agile or DevOps process, you will definitely want to incorporate this effectively.


So, in this article, I will go into some more technical details around the make-up of an API to aid in the process and hopefully help you get started on your road to automating your APIs.

The beauty of automated API tests is that they can often be written without needing completed code (or at least more easily than frontend tests) and are often simpler to write because there are fewer steps involved. You’re simply inputting data (creating a request), potentially mocking responses, and then reading outputs (receiving a response), so those are the important things to focus on in the automation process.


Based on the above, one of the key things with APIs is that you can test them quite effectively at a low level. Whereas your frontend needs to be tested and automated in a way that requires a reasonable amount of end-to-end operation, an API can easily be automated at a unit level. Almost the entirety of an API can be tested in a contained manner, with only a small number of integration tests required to cover the points where it connects to other systems.


Setting up your environment

First is getting your environment ready with the API on it. This may be a shared, integrated environment where the API under test sits alongside the other APIs it needs to integrate with. Often though, if you are looking to automate before an API or its dependencies are ready, you will need to set up the environment on your local machine or a personal VM. If you are using some sort of VM or containerisation tool, this should be relatively easy to do and to replicate across machines.


Choosing the right tool

To automate an API call, you will need something that allows you to write and read API calls in your particular environment. There are a lot of tools that can do this for you, with TestNG, Jest, JUnit, Mocha, Pytest and Robot Framework all being common options. While JUnit and Pytest are geared more towards core unit testing, they can work just as easily for your integration tests. There are other unit testing frameworks that can do this too, but JUnit and Pytest are perhaps the easiest to work with.


For many tests, the likes of TestNG or Mocha are generally preferred due to the simplicity with which tests can be written and integrated into any CI environment, without requiring too much programming knowledge. I could go into detail about each of these tools, but it would probably be best for you and your company to evaluate the different options and see what works best for you, especially considering the programming languages you are already familiar with.


Making API calls in your test framework

The first place to start with automating your API messages is to write your different API request messages. Most frameworks support making API calls by including an HTTP request library for REST APIs, or you can simply send an XML body to a given endpoint if you are using SOAP.


When it comes to API automation though, I prefer simplicity, and the advantage of most frameworks is that there is already a wealth of libraries, often built in, that cater for the different API messages you will need to send. So I wouldn’t recommend reinventing the wheel; simply take advantage of the many excellent libraries already out there, like superagent for Mocha or supertest for Jest, which streamline the approach and provide all the details you need for a specific message:


const request = require('superagent');

request
    .post('/api/pet')
    .send({ name: 'Manny', species: 'cat' }) // sends a JSON post body
    .set('X-API-Key', 'foobar')
    .set('accept', 'json')
    .end((err, res) => {
        // Calling the end function will send the request
    });


This can handle all the different request types (GET, PUT, POST, DELETE), along with being able to pass headers, cache settings, and query parameters into the message in an easily parametrised fashion. This saves you from needing to build the actual API message in its entirety and makes customising it for different tests a lot easier.
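To illustrate that parametrisation, here is a minimal sketch using superagent; the endpoint, API key and parameter values are assumptions purely for the example:

const request = require('superagent');

// Hypothetical endpoint and values, purely to illustrate the parametrisation
const getPets = (species, limit) =>
    request
        .get('http://localhost:3000/api/pet') // assumed local test environment
        .query({ species: species, limit: limit }) // appends ?species=...&limit=...
        .set('X-API-Key', 'foobar')
        .set('accept', 'json');

getPets('cat', 10).then((res) => console.log(res.body));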


Handling different responses

One of the challenges with API testing is that responses can come in a variety of formats. Some APIs may send you a JSON response, while others could send XML, CSV or another encoded data format, depending on how they are built. Thankfully, we’re in luck again here, as most frameworks support reading all of these. Ideally, you will want to have some idea of how the response message should look so that you can build your expected response in much the same way as you built the request message. The difference here is that you will also need to make use of an assertion library, which allows you to check for certain values in a message, as shown in the example below using Chai, an assertion library commonly used with Mocha. (One of the advantages of Jest is that its assertion library is built in, making it just one library to maintain.)


// These assume Chai's should-style assertions have been enabled, e.g. require('chai').should();
response.status.should.equal(200);
foo.should.be.a('string');
foo.should.have.lengthOf(3);
tea.should.have.property('flavors').with.lengthOf(3);


If you’re using REST, this is made a little easier by the standard HTTP response codes, which give you some form of standardisation to check against so that you can immediately assert success based on the response you received.


Informational Response Codes (1xx)

100 — Continue, 101 — Switching Protocols, 102 — Processing

Success Response Codes (2xx)


200 — OK, 201 — Created, 202 — Accepted, 203 — Non-authoritative Info

204 — No Content, 205 — Reset Content, 206 — Partial Content, 207 — Multi-status

208 — Already Reported, 226 — IM Used, 250 — Low Storage Space


Redirection Response Codes (3xx)

300 — Multiple Choices, 301 — Moved Permanently, 302 — Found, 303 — See Other

304 — Not Modified, 305 — Use Proxy, 307 — Temporary Redirect

308 — Permanent Redirect


Client Error Response Codes (4xx)

400 — Bad Request, 401 — Unauthorized, 403 — Forbidden, 404 — Not Found

405 — Method Not Allowed, 406 — Not Acceptable, 412 — Precondition Failed

415 — Unsupported Media Type


Server Error Response Codes (5xx)

500 — Internal Server Error, 501 — Not Implemented, 502 — Bad Gateway

503 — Service Unavailable, 504 — Gateway Timeout, 505 — HTTP Version Not Supported

506 — Variant Also Negotiates, 507 — Insufficient Storage, 508 — Loop Detected

509 — Bandwidth Limited, 510 — Not Extended, 511 — Network Auth Required

550 — Permission Denied, 551 — Option Not Supported

598 — Network Read Timeout Error, 599 — Network Connect Timeout Error
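As a rough sketch of asserting directly on these codes (using supertest against a hypothetical pet endpoint on a local test environment), the checks stay short and readable:

const request = require('supertest');
const api = request('http://localhost:3000'); // assumed local test environment

describe('Pet API response codes', () => {
    it('returns 200 OK for a valid request', () =>
        api.get('/api/pet').set('X-API-Key', 'foobar').expect(200));

    it('returns 401 Unauthorized when the API key is missing', () =>
        api.get('/api/pet').expect(401));

    it('returns 404 Not Found for an unknown resource', () =>
        api.get('/api/unknown').set('X-API-Key', 'foobar').expect(404));
});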


Handling Test Data

Easily the most difficult part of testing is handling your data. There are multiple approaches, from creating your own data, reading data from an existing database or another external file (your payload), or possibly even just randomising the data you send in and out of your API between given attributes.


Which approach you will need to use ultimately depends on the purpose of the test. I prefer APIs to be tested largely at a unit test level, where it's okay to send through specific messages that return a specific response to determine if your API is handling that message correctly. In a case like this, it's easy to set up a JSON or XML file with all the relevant data permutations that can then be inserted into your request message parameters. This body of data for your message is referred to as your payload.
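As a simple sketch of that idea (the file name, fields and endpoint are assumed for the example), a JSON payload file can drive a set of permutations through the same request:

const request = require('supertest');
const api = request('http://localhost:3000'); // assumed local test environment

// pets.json is an assumed payload file, e.g. [{ "name": "Manny", "species": "cat" }, ...]
const payloads = require('./pets.json');

describe('Create pet permutations', () => {
    payloads.forEach((payload) => {
        it(`creates a pet named ${payload.name}`, () =>
            api.post('/api/pet')
                .send(payload)
                .set('accept', 'json')
                .expect(201));
    });
});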


However, while this works for a contained portion of your API, there is also a need to test APIs at a more holistic or integrated level, and then it makes sense to use data from an actual database to provide more production-specific data, or possibly even random data, though I would advise using random data only for APIs with very simple message sets and for the purpose of detecting outliers and strange defects.
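If you do opt for random data on a simple message set, a minimal sketch (the attribute names and values are assumed) might look like this:

// Picks random values from known attribute sets to probe for outliers
const speciesOptions = ['cat', 'dog', 'parrot'];
const randomPet = () => ({
    name: `pet-${Math.floor(Math.random() * 10000)}`,
    species: speciesOptions[Math.floor(Math.random() * speciesOptions.length)],
});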


Many times while testing, though, you will need to pass the response from one API as request data to another. You can do so by making use of hooks. Functions like before, beforeEach, after and afterEach, as their names suggest, get executed before or after any or all tests, so you can use them to have one API call execute and be asserted on before another runs. Given the earlier point on where to test, though, I would recommend keeping this to a minimum (a short sketch of chaining calls this way follows the example below).


Below is a code sample in JavaScript (using Mocha and Chai) that shows how some of these hooks could be used:


const { assert } = require('chai');

var calculateSavings = (income, expenditure) => {
    return income - expenditure;
}

describe('Savings suite', () => { // describe represents a suite of tests
    var income, expenditure, monthlySaving, totalSaving;

    before(() => {
        // Set all values to 0 and set income to 1000
        income = 1000;
        expenditure = 0;
        monthlySaving = 0;
        totalSaving = 0;
    });

    beforeEach(() => {
        // Randomly generate an expenditure before each test
        expenditure = Math.floor((Math.random() * 500) + 1);
    });

    after(() => {
        // Reset all values to 0 after all tests are run
        income = 0;
        expenditure = 0;
        monthlySaving = 0;
        totalSaving = 0;
    });

    afterEach(() => {
        // Add monthlySaving to totalSaving after each test
        totalSaving = totalSaving + monthlySaving;
    });

    it('should test savings of Month 1', () => { // it represents a new test
        monthlySaving = calculateSavings(income, expenditure);
        assert.equal(monthlySaving, income - expenditure);
    });

    it('should compare savings of Month 2 to totalSavings', () => {
        monthlySaving = calculateSavings(income, expenditure);
        assert.notEqual(monthlySaving, totalSaving);
    });
});
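Applying the same hooks to API chaining, here is a rough sketch (the endpoints and the id field in the response are assumptions) of capturing a response in a before hook and reusing it in a later request, using supertest and Chai:

const request = require('supertest');
const { expect } = require('chai');
const api = request('http://localhost:3000'); // assumed local test environment

describe('Pet lookup using a created pet', () => {
    let petId;

    before(() =>
        // Create a pet first and capture its id for the tests below
        api.post('/api/pet')
            .send({ name: 'Manny', species: 'cat' })
            .then((res) => {
                petId = res.body.id; // assumed response field
            }));

    it('fetches the pet created in the before hook', () =>
        api.get(`/api/pet/${petId}`)
            .expect(200)
            .then((res) => {
                expect(res.body.name).to.equal('Manny');
            }));
});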


Configuring your mocks

In the examples above, I have assumed you are testing your API against another API or a live endpoint. Often though, you will need to rely on mocks for your automated testing. You might be asking why you would want to mock an API at all when you can test against the real thing. The problem is not only that you are often developing an API without all of its dependencies available, but also that APIs change frequently and integration environments are not always as reliable as desired, so you will still want to rely on some form of mocking to ensure that your API is always operating as expected, regardless of its dependencies. Yes, there is a need for non-mocked API tests to check for breaking changes and dependency failures, but the majority of your API's functionality is internal, and you want those automated tests to remain repeatable and consistent, which is why mocks are so useful. I will explain mocks in more detail in a separate article, as this is a complex matter in itself and deserves a separate focus.
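As a small taste of what that can look like (using nock, one of several HTTP mocking libraries for Node, with a hypothetical dependency URL), a dependency can be intercepted so the test never leaves the machine:

const nock = require('nock');
const request = require('superagent');

// Intercepts calls to the (hypothetical) dependency and returns a canned response
nock('https://inventory.example.com')
    .get('/api/stock/cat-food')
    .reply(200, { sku: 'cat-food', quantity: 42 });

request
    .get('https://inventory.example.com/api/stock/cat-food')
    .then((res) => {
        console.log(res.body.quantity); // 42, served by the mock rather than the real service
    });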


Running your tests

Depending on the tool you are using, your tests will be executed either from an in-built runner, via the command line or through your CI tool. These will all look different depending on what you require, so I won't go into too much detail here and would rather encourage you to investigate which solutions work best for you.


What is important is that the environments or environment variables you configured at the start will need to be picked up by your automation scripts to prepare your test environments for execution. This is most useful if you are set up for continuous integration and need your environments to be pulled together fresh for each version under test.
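For example, a minimal sketch (the variable name and endpoint are assumed) of pointing the suite at whichever environment the pipeline has prepared:

const request = require('supertest');

// API_BASE_URL is an assumed environment variable set by the CI pipeline
const baseUrl = process.env.API_BASE_URL || 'http://localhost:3000';
const api = request(baseUrl);

describe('Smoke test against the configured environment', () => {
    it('responds to a basic request', () => api.get('/api/pet').expect(200));
});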


There is ultimately a lot more to API testing than what I have gone through here, along with a few other things to think about, like asynchronous messaging in your APIs, integration into CI, and how best to execute and report on these tests. Not to mention differences in software architectures, programming languages and frameworks, which could alter small details of the approach. Perhaps I will leave some of these for another time. Overall, you should find the above details will get you through most of your API automation just fine.


There is a lot to API automation, but despite all these details, it is actually far easier to maintain and implement than UI automation once you get your head around it. And because it is so much faster to execute and caters for lower-level testing, it allows you to run a large number of test permutations in a short space of time.
