Craig Risi

The structure of a quality-driven CI pipeline



Continuous Integration is the aim of many development teams that want to quickly add new features, make changes to their software, and deliver them to production as fast as possible. Doing so while maintaining a good view of quality, though, is far more difficult to get right. A lot of this is because, while teams might be quick to adopt a CI pipeline, they are often not willing to implement the right measures or quality gates into the process to ensure the software is of a high quality.


The reasons for this vary: a lack of sufficient skills in a team, especially at a testing level, where the team is not able to automate the appropriate level of testing where it is needed; a lack of understanding of what needs to be done; or, often and most importantly, a lack of time to truly implement the measures that will make this work, as teams focus on delivering software rather than investing time in quality best practices.


It’s unfortunately a battle that often gets lost on the quality front, as other things take priority over the needed investment in the skills and measures that produce high-quality software. If you are in a position to start a new pipeline governance system from scratch, though, and need some assistance in knowing what is needed to make an effective, high-quality pipeline, then the guidelines below should help you.


While not every aspect of these pipeline quality measures is critical, they all add significant value, so if for whatever reason a team decides to skip one, it is important that the team understands the risks and puts a mitigation in place to check for it elsewhere.

The pipeline outline below should apply regardless of whether your systems are hosted internally or in the cloud, though I do understand the complexities and costs involved in maintaining complex test environments that match production. In my opinion the costs are worth it, but if you have a system that is completely built around Infrastructure as Code, then it’s possible to get by without a permanent environment and use just a scaling one, as this is what production would likely imitate anyway.


Structure of a CI Pipeline




The purpose of having our CI pipelines structured like this is to ensure that there is a check at every level to verify the correctness of the code, build and deployment. Each step is described in more detail below:


Setup and Checkout: The developer checks out the code. A setup file should be present to ensure the environment is built consistently for each developer and that they follow the correct standards. Part of this setup includes several linting standards that will check that certain coding principles are being adhered to and will prevent code from being deployed where it does not meet these linting standards.


Quality Check: Linting standards need to be met before the code build can be successful.
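
To make this gate concrete, here is a minimal sketch of what such a check could look like, assuming a Python codebase with flake8 available in the pipeline; the actual linters and rules will obviously differ from team to team.

```python
# lint_gate.py - minimal sketch of a lint gate (assumes a Python codebase with flake8 installed)
import subprocess
import sys

def run_lint(paths=("src", "tests")):
    """Run flake8 against the given paths and return its exit code."""
    result = subprocess.run(["flake8", *paths])
    return result.returncode

if __name__ == "__main__":
    # A non-zero exit code fails the CI job, stopping the build from proceeding.
    sys.exit(run_lint())
```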


Build: Once a developer is ready to submit their code, the pipeline will take it and build it in a container or mocked environment, which will form the basis of the unit tests.
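
As a rough illustration, and assuming Docker is the container runtime with a Dockerfile at the repository root, this step could be scripted as simply as the sketch below.

```python
# build_image.py - illustrative build step (assumes Docker and a Dockerfile at the repo root)
import subprocess
import sys

def build_image(tag="myapp:ci"):
    """Build the container image that the unit/CI tests will run against."""
    result = subprocess.run(["docker", "build", "-t", tag, "."])
    return result.returncode

if __name__ == "__main__":
    sys.exit(build_image())
```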


Unit tests/CI Tests: These tests include both the unit tests written by the developer for the modules under development and some broader component tests which will represent the execution across all the modules, but at a mocked level.


Quality Check: 100% successful completion of all tests, with at least 90% code coverage achieved at a unit testing level.
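
For teams using pytest, this gate can be enforced in the test stage itself; the sketch below assumes pytest and pytest-cov are installed, and the 90% figure simply mirrors the threshold above.

```python
# unit_test_gate.py - sketch of a unit test stage with a coverage gate (assumes pytest + pytest-cov)
import subprocess
import sys

def run_unit_tests(min_coverage=90):
    """Run the unit/CI tests and fail unless all pass and coverage meets the threshold."""
    result = subprocess.run([
        "pytest", "tests/unit",
        "--cov=src",
        f"--cov-fail-under={min_coverage}",  # pytest-cov fails the run below this percentage
    ])
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_unit_tests())
```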


Static Analysis: The relevant static analysis scans and security scans are run against the code base to ensure that certain coding and security best practices have been adhered to.


Quality Check: Successful completion of scans with 100% code coverage.
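
As one concrete example for a Python codebase, a security-focused scan could be wired in with a tool such as bandit; the exact scanners used (SonarQube, Checkmarx and so on) will depend on your stack, so treat this purely as a sketch.

```python
# static_scan.py - sketch of a static security scan step (assumes bandit is installed)
import subprocess
import sys

def run_security_scan(path="src"):
    """Scan the source tree for common security issues; findings fail the pipeline."""
    result = subprocess.run(["bandit", "-r", path])
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_security_scan())
```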


Deployment to QA: It is only at this point that the code is deployed into an integrated test environment, where it sits alongside other unmocked modules and runs tests developed by the testing team to cover a wider range of unmocked integration tests and end-to-end tests. This environment can be scaled up and down based on testing needs and does not need to live all the time. It will also form a wholly usable state of the system, with only minimal mocking in place, to allow the team to do a few manual verification checks should they be needed.


Post-Deployment Checks: These are contract-level tests that run to ensure that the test environment meets the required expectations of the deployed code, along with some lightweight tests to ensure the code is working effectively within the test environment. Should it fail here, the code is rolled back and the QA environment restored.


Quality Check: Successful passing of all post-deployment and smoke tests.
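
A minimal sketch of such a smoke check is shown below, assuming the deployed service exposes a health endpoint at a hypothetical QA URL; in a real pipeline, a failure here would trigger the rollback described above.

```python
# smoke_test.py - sketch of a post-deployment smoke check (hypothetical QA URL and endpoint)
import sys
import requests  # assumes the requests library is available in the pipeline image

QA_BASE_URL = "https://qa.example.com"  # hypothetical environment URL

def smoke_check():
    """Verify the deployed service is up and responding before the wider test suites run."""
    try:
        response = requests.get(f"{QA_BASE_URL}/health", timeout=10)
    except requests.RequestException as error:
        print(f"Smoke check failed: {error}")
        return 1
    if response.status_code != 200:
        print(f"Smoke check failed: unexpected status {response.status_code}")
        return 1
    print("Smoke check passed.")
    return 0

if __name__ == "__main__":
    sys.exit(smoke_check())  # a non-zero exit triggers the rollback step in the pipeline
```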


Functional Tests: This is where the remainder of the automated tests identified by the testing team are executed, which will span a wider coverage of the codebase and include some unmocked tests as well, with more realistic data that better resembles that of production.


Quality Check: All tests need to be passed.
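
To make the idea concrete, a functional test at this stage typically drives a real, unmocked flow through the QA environment's API with production-like data; the endpoints and payload below are purely illustrative.

```python
# test_order_flow.py - illustrative functional test against the QA environment (hypothetical API)
import requests

QA_BASE_URL = "https://qa.example.com"  # hypothetical environment URL

def test_create_and_fetch_order():
    """End-to-end check: create an order with production-like data, then read it back."""
    order = {"customer_id": "CUST-1001", "items": [{"sku": "SKU-42", "quantity": 2}]}
    created = requests.post(f"{QA_BASE_URL}/orders", json=order, timeout=10)
    assert created.status_code == 201

    order_id = created.json()["id"]
    fetched = requests.get(f"{QA_BASE_URL}/orders/{order_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["customer_id"] == order["customer_id"]
```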


Dynamic Code Analysis: This is another scan that is run against executable code (unlike the static analysis scans, which are run against pre-deployed code) and provides an additional measure of quality and security checks, such as malicious SQL queries, long input strings (to exploit buffer overflow vulnerabilities), negative and large positive numbers (to detect integer overflow and underflow vulnerabilities) and unexpected input data (to exploit invalid assumptions by developers). These are all vital checks that are best run against an actual working environment.


Quality Check: Successful completion of scans.
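
Dedicated DAST tools usually drive this stage, but the idea can be pictured as a probe that fires exactly the kinds of inputs mentioned above at a running endpoint and checks that the service fails gracefully rather than crashing; the endpoint here is again hypothetical.

```python
# dynamic_probe.py - illustrative dynamic check firing hostile inputs at a hypothetical endpoint
import sys
import requests

QA_BASE_URL = "https://qa.example.com"  # hypothetical environment URL

# The input classes called out above: long strings, extreme numbers, unexpected data shapes.
HOSTILE_PAYLOADS = [
    {"name": "A" * 100_000},             # very long string (buffer overflow style)
    {"quantity": -1},                     # negative number
    {"quantity": 2**63},                  # very large positive number (overflow style)
    {"name": {"unexpected": "object"}},   # unexpected input structure
]

def probe():
    """The service should reject bad input cleanly (4xx responses), never crash with a 5xx."""
    failures = 0
    for payload in HOSTILE_PAYLOADS:
        response = requests.post(f"{QA_BASE_URL}/orders", json=payload, timeout=10)
        if response.status_code >= 500:
            print(f"Server error for payload {payload}: {response.status_code}")
            failures += 1
    return failures

if __name__ == "__main__":
    sys.exit(probe())
```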


Deploy to Staging: The code is then passed on to a staging environment, which is another integrated environment, but one that better reflects the state of production. Also, unlike test environments, which can be scaled up and down, this one should be permanently available and configured the way the code would be in production. Any final manual or automated validations from the testing team can also be conducted at this time, though these won't necessarily form part of the automated tests unless the testing team deems it necessary.


Post-Deployment Checks: As was conducted against the QA environment, a set of post-deployment tests are run to ensure that the staging environment is in the correct state and smoke tests are executed to ensure the deployed code is in a usable state.


Quality Check: Successful passing of all post-deployment and smoke tests.


Non-functional Test execution: It's at this stage that all load, performance and additional security tests are executed to ensure that the code meets all the required NFR standards before being deployed into production.


Quality Check: Successful completion and passing of all NFR tests.
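
Proper load and performance testing is best done with a dedicated tool (JMeter, Gatling, k6 and the like), but the gate itself can be pictured as a script that drives concurrent requests at staging and fails if a latency target is breached; the URL and thresholds below are assumptions for illustration only.

```python
# load_gate.py - illustrative NFR gate against a hypothetical staging URL
import sys
import time
from concurrent.futures import ThreadPoolExecutor
import requests

STAGING_URL = "https://staging.example.com/health"  # hypothetical environment URL
TOTAL_REQUESTS = 200
CONCURRENCY = 20
P95_THRESHOLD_SECONDS = 0.5  # assumed NFR target, not a universal standard

def timed_call(_):
    """Issue one request and return how long it took."""
    start = time.perf_counter()
    requests.get(STAGING_URL, timeout=10)
    return time.perf_counter() - start

def main():
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(timed_call, range(TOTAL_REQUESTS)))
    p95 = latencies[int(len(latencies) * 0.95) - 1]
    print(f"p95 latency: {p95:.3f}s")
    return 0 if p95 <= P95_THRESHOLD_SECONDS else 1

if __name__ == "__main__":
    sys.exit(main())
```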


It is only at this stage that the code is deemed of sufficient quality to be deployed to production.


There is obviously a lot more detail that goes into making a pipeline like this work, especially in ensuring that the right testing skills and tools are in place, but if a team can develop a framework that measures all of these aspects and endeavors to meet them, regardless of how strict they are, then the team should easily be capable of delivering mostly error-free software to production in a rapid manner.
