A lot of the functional testing effort in software delivery is focused on automating traditional test cases that execute against parts of an application, or the whole of it. I’ve written before about the importance of unit tests and how they form the critical base of our testing pyramid, where the majority of tests should be focused.
These tests all cover aspects of the code that can be observed during execution. But many aspects of code are never executed or visible during functional testing and can still cause serious quality concerns. This is where static analysis comes in: it evaluates the code without needing to execute it.
Some of the things that static analysis helps to uncover include:
Undefined or unused values – Most compilers and IDEs will flag values that are declared but never used, or that are used without being defined correctly, but sometimes these are missed. Or, often crucially, a value is correctly defined and used, but in a piece of code that never executes, or it may cause violations in extreme circumstances that are not covered by the unit tests.
Unused or undefined values might sound inconsequential if they don’t affect the overall functionality of the software. However, they can cause dependency or maintenance issues over time as parts of this code become obsolete or incompatible with other systems. Any compiled code also adds to the overall size of the final deployment, and this needs to be factored in.
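To make this concrete, here is a minimal sketch (a hypothetical illustration using Python’s standard `ast` module, not any particular tool’s implementation) of how a static check can find names that are assigned but never read, without ever running the code:

```python
import ast

def find_unused_assignments(source: str) -> list[str]:
    """Return names assigned in the module but never loaded (read)."""
    tree = ast.parse(source)
    assigned, loaded = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                loaded.add(node.id)
    return sorted(assigned - loaded)

code = """
total = 1 + 2
unused = 42   # assigned but never read anywhere
print(total)
"""
print(find_unused_assignments(code))  # → ['unused']
```

Note that the check inspects the parse tree only; nothing in `code` is executed, which is exactly the point of static analysis.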
Coding standard violations – We’ve spoken about the maintainability of code and the benefit that certain good practices have on a software product’s long-term viability. These issues can only be picked up through proper static analysis and linting rules. It’s important that the guidelines an organization sets out for its code are strictly adhered to and checked in this process.
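As a small illustration of what a linting rule looks like under the hood (a hypothetical sketch with two made-up rules, not a real linter), a checker can enforce a line-length limit and a snake_case naming convention:

```python
import ast
import re

MAX_LINE = 79  # illustrative limit, as in many Python style guides
SNAKE_CASE = re.compile(r"^[a-z_][a-z0-9_]*$")

def lint(source: str) -> list[str]:
    """Report violations of two example rules: line length and function naming."""
    problems = []
    for i, line in enumerate(source.splitlines(), start=1):
        if len(line) > MAX_LINE:
            problems.append(f"line {i}: exceeds {MAX_LINE} characters")
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and not SNAKE_CASE.match(node.name):
            problems.append(f"line {node.lineno}: function '{node.name}' is not snake_case")
    return problems

sample = "def DoWork():\n    return 1\n"
print(lint(sample))  # flags the camel-case function name on line 1
```

Real linters ship hundreds of such rules and let organizations toggle them to match their own guidelines.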
Syntax violations – Something the majority of compilers should pick up, but tooling can be expanded to catch cases that were missed, or even to identify obscure references that could cause a problem.
Security vulnerabilities – We’ve already spoken about security tooling. Most static analysis tools provide for this as well: they ship with predefined security rules and alert developers when their code does not adhere to those standards. Even if you are not using a comprehensive security scanning tool, configuring your static analysis tooling to search for the same things should make a significant difference.
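A typical security rule is simply a pattern match over the parse tree. Here is a minimal sketch (the banned-function list is an illustrative assumption, far smaller than what real security scanners check) that flags calls to `eval` and `exec`, which are classic injection risks when fed user input:

```python
import ast

BANNED_CALLS = {"eval", "exec"}  # illustrative subset; real rule sets are much larger

def find_insecure_calls(source: str) -> list[tuple[int, str]]:
    """Flag calls to functions commonly banned by security rules."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in BANNED_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

snippet = "user_input = input()\nresult = eval(user_input)\n"
print(find_insecure_calls(snippet))  # → [(2, 'eval')]
```

The dangerous call is found without ever executing the code, so the scan is safe to run on untrusted or incomplete sources.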
Memory issues – Code that is not well optimized for memory usage can cause performance or functional issues, and even system crashes in the case of buffer overflow errors. The way software handles its different forms of memory is difficult to test for functionally; even many performance tests cannot stress the code enough to reveal its long-term effects. Having static scanning in place will help to identify issues in your memory declarations and usage, and can prevent a lot of future problems.
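One common form of this check scans source text for calls to memory-unsafe functions. The sketch below (a hypothetical, deliberately tiny rule set in the spirit of C-focused scanners; the messages and function list are my own illustrative assumptions) flags a few classic unbounded-copy functions in C source:

```python
import re

# Classic memory-unsafe C functions that static scanners commonly flag,
# with illustrative remediation hints.
UNSAFE = {
    "gets":    "no bounds check; prefer fgets",
    "strcpy":  "no bounds check; prefer a bounded copy such as snprintf",
    "sprintf": "can overflow the destination buffer; prefer snprintf",
}
PATTERN = re.compile(r"\b(" + "|".join(UNSAFE) + r")\s*\(")

def scan_c_source(source: str) -> list[str]:
    """Flag calls to memory-unsafe functions in C source text."""
    findings = []
    for i, line in enumerate(source.splitlines(), start=1):
        for match in PATTERN.finditer(line):
            name = match.group(1)
            findings.append(f"line {i}: {name}() - {UNSAFE[name]}")
    return findings

c_code = "char buf[8];\nstrcpy(buf, argv[1]);\n"
print(scan_c_source(c_code))
```

Production tools go far beyond text matching, tracking buffer sizes and data flow, but the principle is the same: the risk is visible in the source long before any test could trigger the crash.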
The Benefit of Tooling
Much of this analysis does not need to be done manually. In the past, there were many white-box testing techniques that helped developers and testers identify these patterns during code review and get them fixed. That process was incredibly time-intensive and so fraught with human error that it often didn’t provide enough reward for companies to justify the labor.
Thankfully, scanning tools can easily be run at the code repository layer, providing companies with meaningful reports of where the issues in the code are, along with trends that point to organizational areas for improvement. Companies can use the static analysis process to incredible benefit and learn a lot about how they code in the process, since coding patterns can be analyzed too.
Scanning tools also provide the added benefit of ensuring that all code is scanned and reviewed for some measure of quality. Whereas your unit tests will only test what you have designed them to test, and can only exercise the parts of the code that are executable, these tools ensure that every part of the code is read and has some measure of conformity and quality built into it.
They also execute incredibly quickly and are not prone to human error, providing thorough and consistent results.
So, we should get rid of code reviews then?
With tools and tests that can essentially check everything in our code, should companies do away with any formal code review process? Well, not exactly. Static analysis scanning still has limitations, and there are real benefits to always getting a human eye on our code.
For a start, the following are limitations to static analysis tooling:
False positives can be detected.
A tool may flag that a defect exists without indicating what the defect actually is.
Not all coding rules can be checked automatically, such as rules that depend on external documentation.
Static analysis can't detect how a function will execute.
System and third-party libraries may not be able to be analyzed.
However, despite these concerns, static analysis still saves a lot of time by covering the entire codebase, and its findings can then be taken into the code review process, where developers can discuss the results and determine whether changes need to be made. They can also learn in the process, as they evaluate the logic of the code and see how certain parts of their coding could be done better.
And that learning process is perhaps the main reason why I would always adhere to a code review process. It’s not just about saving time: helping developers learn from one another has lasting benefits for any organization and provides a big boost to both the quality of their work and their development speed, as they get better at delivering quality code the first time.
Types of static analysis
Most analysis tools are automated and do the work for us, but it’s still important to understand the kind of work they do and how the different analyses performed by these tools help our software quality:
Control analysis - focuses on the control flow in a calling structure: for example, the flow through a process, function, method, or subroutine.
Data analysis - makes sure defined data is properly used and that data objects are properly operated on.
Fault/failure analysis - analyzes faults and failures in model components.
Interface analysis - verifies simulations and checks that the interfaces between components fit the model and simulation.
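To make the first of these categories concrete, here is a minimal control-analysis sketch (a hypothetical example using Python’s standard `ast` module, not a production analyzer): it flags statements that can never run because they follow a `return` or `raise` in the same block, something no functional test could ever reach.

```python
import ast

def find_unreachable(source: str) -> list[int]:
    """Return line numbers of statements that follow a return/raise in the same block."""
    unreachable = []
    for node in ast.walk(ast.parse(source)):
        body = getattr(node, "body", None)
        if not isinstance(body, list):
            continue
        for i, stmt in enumerate(body[:-1]):
            if isinstance(stmt, (ast.Return, ast.Raise)):
                # Everything after the terminating statement can never execute.
                unreachable.extend(s.lineno for s in body[i + 1:])
                break
    return unreachable

code = """
def double(x):
    return x * 2
    print("this line never runs")
"""
print(find_unreachable(code))  # → [4]
```

Real control analysis builds a full control-flow graph and also handles branches and loops; this sketch only covers the simplest straight-line case.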
In a broader sense, with less official categorization, static analysis can be broken into formal, cosmetic, design-property, error-checking, and predictive categories: formal, whether the code is correct; cosmetic, whether the code matches style standards; design properties, the level of complexity; error checking, which looks for code violations; and predictive, which asks how the code will behave when run.
Reviewing your code improves your code
At the end of the day, one of the biggest benefits of all forms of static analysis is that they help developers improve their coding abilities. If companies are serious about training and improving the skills of their development team, then enforcing processes around static analysis and mutation testing will allow them to do just that, while also allowing them to be productive and deliver better-quality software.