“What do you do?”
In the United States, people ask this almost immediately upon meeting one another for the first time. These days, I answer the question by saying that I do IT management consulting. That always feels kind of weird rolling off the tongue, but it accurately describes how I’ve earned a living.
If you’re wondering what this means, basically I advise leadership in IT organizations. I help managers, directors, and executives better understand how to manage and relate to the software developers in their groups. So you might (but hopefully won’t) hear me say things like, “You should stop giving out pay raises on the basis of who commits the most lines of code.”
In this line of work, I get some interesting questions. Often, these questions center on how to do more with less. “How can we keep the business happy when we’re understaffed?” “What do we do to get away from this tech debt?” “How should we prioritize our work?” That sort of thing.
Sometimes, they get specific. And weird. “If we do this dependency injection thing, do we really need to deploy as often?” Or “If we implement static analysis, do we still need to do QA?”
I’d like to focus on the latter question today — but not because it’s a particularly good or thought-provoking one. People want to do more with less, which I get. But while that particular question is a bit of a non sequitur, it does raise an interesting discussion topic: what is the role of static analysis in testing?
Static Analysis in Testing: An Improbable (But Real) Relationship
On the surface, you won’t notice much overlap between testing and static analysis. Static analysis involves analyzing code without executing it, whereas QA involves executing the code without analyzing it (among other things).
A more generous interpretation, however, starts to show a relationship. For instance, one could argue that both activities relate deeply to code quality. Static analysis speaks to properties of the code and can give you early warnings about potential problems. QA takes a black box approach to examining the code’s behavior, but it can confirm the problems about which you’ve received warnings.
But let’s dive even a bit deeper than that. The fact that they have some purview overlap doesn’t speak to strategy. I’d like to talk about how you can leverage static analysis as part of your testing strategy — directly using static analysis in testing.
Risk Detection to Prioritize Your QA Efforts
Static analyzers can give you a lot of interesting data. Many people use them to get statistics about their code, such as method length, cyclomatic complexity, and the like. And I fully encourage that. You should have this information about your code.
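To make this concrete, here is a stdlib-only Python sketch of the kind of statistics an analyzer computes: method length and a rough cyclomatic complexity. Real analyzers are far more sophisticated; the counting rule below (decision points plus one, with a simplified list of branching constructs) is just an illustration:

```python
import ast

# Node types treated as decision points (a rough proxy; real tools
# use a more careful rule for boolean operators, comprehensions, etc.).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def function_metrics(source: str) -> dict:
    """Return {function_name: (line_count, approx_complexity)} for a module."""
    tree = ast.parse(source)
    metrics = {}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            # Cyclomatic complexity ~= number of decision points + 1.
            complexity = 1 + sum(
                isinstance(child, BRANCH_NODES) for child in ast.walk(node))
            metrics[node.name] = (length, complexity)
    return metrics

sample = """
def classify(n):
    if n < 0:
        return "negative"
    for d in (2, 3, 5):
        if n % d == 0:
            return "divisible"
    return "other"
"""
print(function_metrics(sample))  # {'classify': (7, 4)}
```

Even a toy like this surfaces the raw numbers; the interesting work, as we’ll see, lies in what you infer from them.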
But if you mine this information properly, you can make inferences about the risk of change. Metrics like afferent coupling and rank tell you how much the rest of your codebase depends on a given element. You can then reason about hot spots in your code.
Going a bit further, you can cross-reference those hot spots with information like the amount of churn in a file or whether the file is covered by automated tests. (More on this shortly.) Taken together, this lets you reason about the riskiness of changes. You can, conceptually, assign a risk score to various commits and understand what use cases in your software they will affect.
This, in turn, lets you prioritize QA efforts. When a release contains a sequence of high-risk changes, prepare for more testing effort and remediation. If you’ve made low-risk changes, you might not need as much.
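Here is a toy Python sketch of that scoring step. The file names, metric values, and weighting formula are all invented for illustration; in practice, the inputs would come from your static analyzer, your version control history, and your coverage tool:

```python
# Hypothetical per-file metrics: afferent coupling (how many other files
# depend on this one), recent churn (commits touching it), and test coverage.
FILE_METRICS = {
    "billing/invoice.py": {"afferent_coupling": 14, "churn": 9, "coverage": 0.35},
    "util/strings.py":    {"afferent_coupling": 3,  "churn": 1, "coverage": 0.90},
    "orders/checkout.py": {"afferent_coupling": 8,  "churn": 6, "coverage": 0.60},
}

def risk_score(m: dict) -> float:
    """Naive weighted score: heavily depended-on, frequently changed,
    poorly covered files score highest. The formula is arbitrary."""
    return m["afferent_coupling"] * m["churn"] * (1.0 - m["coverage"])

def prioritize(changed_files: list) -> list:
    """Rank the files touched by a release, riskiest first."""
    return sorted(changed_files,
                  key=lambda f: risk_score(FILE_METRICS[f]),
                  reverse=True)

release = ["util/strings.py", "billing/invoice.py", "orders/checkout.py"]
for f in prioritize(release):
    print(f, round(risk_score(FILE_METRICS[f]), 2))
```

The point isn’t this particular formula; it’s that once the metrics exist as data, ranking a release’s changes by risk becomes a few lines of code.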
Insights About Your Automated Tests
QA isn’t the only arrow in your testing quiver (hopefully). Sure, you’ll have folks performing exploratory testing and confirming that implemented functionality lines up with requirements. And those things are important.
But you should also have a comprehensive automated test suite. This should, ideally, involve many unit tests, some integration tests, and the occasional end-to-end test. With a robust test suite in place, you’ll have a lot of test code. And static analysis can help you assess the quality of that test code. Are you keeping your tests simple and maintainable, or are they becoming a maintenance burden? You can also check your tests to make sure they exhibit the properties of a well-designed suite.
Some analyzers will incorporate data about automated test coverage. In my management consulting role, I discourage managers from becoming preoccupied with unit test coverage. But developers can keep track of this information and use it to assess holes in their automated testing.
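For instance, here is a short Python sketch that flags poorly covered files from a Cobertura-style coverage report, the XML format that coverage.py and many other tools can emit. The threshold and the inline sample document are invented for illustration:

```python
import xml.etree.ElementTree as ET

def low_coverage_files(xml_text: str, threshold: float = 0.5) -> list:
    """Return (filename, line_rate) pairs below the coverage threshold,
    worst first."""
    root = ET.fromstring(xml_text)
    results = []
    for cls in root.iter("class"):
        rate = float(cls.get("line-rate"))
        if rate < threshold:
            results.append((cls.get("filename"), rate))
    return sorted(results, key=lambda pair: pair[1])

# A tiny, hand-written stand-in for a real coverage report.
SAMPLE_REPORT = """
<coverage>
  <packages><package><classes>
    <class filename="billing/invoice.py" line-rate="0.35"/>
    <class filename="util/strings.py" line-rate="0.90"/>
    <class filename="orders/checkout.py" line-rate="0.45"/>
  </classes></package></packages>
</coverage>
"""
print(low_coverage_files(SAMPLE_REPORT))
```

Used this way, coverage data stays a developer tool for finding holes rather than a management scoreboard.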
Code Quality and Regression Likelihood
Taken comprehensively, the output of static analysis speaks in many ways to code quality. Do you have massive, complex types and methods? Do you have a messy dependency graph? Does your codebase generally cause static analyzers to light up with warnings like some kind of tinder-dry Christmas tree? All of this speaks to the composite quality of your codebase.
When you have code of questionable quality, you have code that people fear to change. And when they do change it, they tend to make mistakes, causing regressions. Implementing features becomes like a game of whack-a-mole, where each bug fix causes a new bug to appear or some old one to reappear.
So, in some respects, static analysis speaks to the likelihood of defects in general. This is subtly different from the aforementioned riskiness of a given release. Rather, this is an assessment of the riskiness of your software itself.
You can thus use static analysis to inform your broader testing strategy for a codebase. If static analyzers tell you that you have a lot of problems, you can dedicate more people or time to the testing of that software, permanently. Or you can scale back feature development to allow the development staff to pitch in with testing.
Static Analysis Augments Your Staff
I’ll close by offering a more philosophical take on static analysis from my management consulting perspective. You should really think of static analysis as a specific kind of staff augmentation. As programmers, we live to automate. And static analysis is a form of automating the process of reasoning about your codebase.
Through this lens, it only makes sense that you’d use static analysis in testing. That’s because you’d use static analysis — automated reasoning about code — in pretty much everything you do with your codebase. And I encourage that mentality.
I could go on brainstorming ways to leverage static analysis in testing, but you should find use cases that make sense for your specific situation and group. In our line of work, resources are always scarce. And on top of that, nothing gets squeezed more than testing in many shops. So whatever you can do to take automated intelligence about your codebase and apply that intelligence to your testing, you should do. It’s the key to doing more with less.