I often get inquiries from clients and prospects about setting up and operationalizing static analysis. This makes sense. After all, we live in a world short on time and with software developers in great demand. These clients always seem to have more to do than bandwidth allows. And static analysis effectively automates subtle but important considerations in software development.
Specifically, it automates peer review to a certain extent. The static analyzer acts as a non-judging, mute reviewer of sorts. It also stands in for a tiny bit of QA’s job, calling attention to possible issues before they leave the team’s environment. And, finally, it plays a bit of the architect’s role, since team members can learn from the tool’s guidance.
So, as I’ve said, receiving setup inquiries doesn’t surprise me. And I applaud these clients for pursuing this path of improvement.
What does surprise me, however, is how few organizations seem to ask another, related question. They rarely ask for feedback about the efficacy of their currently implemented process. Many organizations seem to consider static analysis implementation a checkbox kind of activity. Have you done it? Check. Good.
So today, I’ll talk about checking in on an existing static analysis implementation. How should you evaluate your static analysis process?
Has It Prompted Measurable Improvement?
First of all, anything you implement in the name of improvement should have a measurable effect. Otherwise, you have no way of knowing whether your actions help. This holds true for your static analysis process.
Presumably, you put it into place for a reason. Perhaps you wanted to make issues in the code visible. Or maybe you wanted the reassurance of compliance with a tool that differentiates good from bad practice. Whatever the case may be, you started somewhere and now you find yourself somewhere else.
Have you seen improvement? Have you achieved your objectives? Take stock of what the tooling has done for you. If you can’t quantify the benefit, or if the tool hasn’t done much, you have an obvious next step: start taking real advantage of it.
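If quantifying feels daunting, even a crude trend beats a shrug. Here’s a minimal sketch in Python, assuming, purely for illustration, that your analyzer can export its findings as a JSON report (the file names and report shape here are placeholders for whatever your tool actually produces):

# A rough way to quantify the trend: append the analyzer's finding count to a
# CSV after each build. "analysis_report.json" and its shape are illustrative
# placeholders, not any real tool's output format.
import csv
import datetime
import json

with open("analysis_report.json") as report_file:
    findings = json.load(report_file)  # assume a flat list of finding records

# One row per build: date and total finding count, ready to chart.
with open("analysis_trend.csv", "a", newline="") as trend_file:
    csv.writer(trend_file).writerow(
        [datetime.date.today().isoformat(), len(findings)]
    )

Chart that CSV once a month and you have a defensible answer to “has this helped?”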
Does Your Process Prevent Backsliding?
Measurable improvement represents table stakes for any kind of success. But you need sustainability to boot. A process that prompts a spasm of improvement followed by regression doesn’t actually benefit you. Just ask the parents of a messy child who cleans his room when ordered, only to let it become a mess again the next day.
When you implement static analysis as part of your process, you should find issues and improve on them. But your process around the tooling should ensure that those issues go away and stay away. So evaluate success both on the improvement itself and on whether the issues stay fixed.
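One simple way to bake that into the build is a ratchet: the finding count can go down, but never back up. Here’s a rough sketch, reusing the same illustrative JSON report from the previous snippet (your tool’s actual output format and your CI hooks will differ):

# A "ratchet" check for the build: fail if findings exceed the recorded
# baseline, and tighten the baseline whenever findings drop. The report file
# is the same illustrative placeholder as in the previous sketch.
import json
import sys

with open("analysis_report.json") as report_file:
    current = len(json.load(report_file))

try:
    with open("analysis_baseline.txt") as baseline_file:
        baseline = int(baseline_file.read().strip())
except FileNotFoundError:
    baseline = current  # first run establishes the baseline

if current > baseline:
    print(f"Static analysis regression: {current} findings vs. baseline {baseline}")
    sys.exit(1)

# Ratchet downward: a lower count becomes the new bar to clear.
with open("analysis_baseline.txt", "w") as baseline_file:
    baseline_file.write(str(current))
print(f"Static analysis OK: {current} findings (baseline {baseline})")

Wire something like that into the pipeline, and backsliding becomes a failed build rather than a quiet erosion.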
Has the Team Learned from the Tooling?
Without exception, I have learned from every static analyzer that I’ve ever used. They tend to come loaded with design suggestions and bug prevention suggestions, both philosophical and pragmatic. Even as an old hand with a language or tech stack, you won’t know as much as the tool authors. After all, they research this stuff for a living, full time, whereas you and I don’t.
So a successful tool adoption and process incorporation necessarily involves learning. The team should, at the very least, pick up tips and tricks. But on a deeper level, team members should start to improve their craft more generally. Go see what they’ve learned. Can they articulate it? This serves as a good litmus test for effectiveness.
Is Your Process Helping You Find New Issues?
Let’s move on from some of the easier items and get to the intermediate course, so to speak. First up, let’s say that you’ve successfully improved, learned, and prevented backsliding. All of that means you’ve realized ROI from the tool. But do the tool and your process around it continue to pull their weight?
Your static analysis usage shouldn’t be a “one and done” affair. You should continue to see improvement over the life of the process. Some of this might come in the form of updates to the tool itself. As new language versions, frameworks, etc. appear, the tool authors should issue updates. This alone can help with continuous improvement.
But beyond that, your process itself should also drive improvement. I definitely recommend learning how to customize the tool so that you can identify issues of your own, add rules to catch them, and fix them. As your static analysis approach matures, you should go beyond what comes out of the box with the analyzer.
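To make that concrete, here’s a toy custom check. It leans on Python’s standard ast module rather than any particular analyzer’s plugin API (real tools each have their own extension points), and the “no more than five parameters” limit is just an illustrative team convention. The point is that writing your own rule is less daunting than it sounds:

# A toy custom rule built on Python's standard ast module (a stand-in for
# whatever extension mechanism your analyzer actually offers). It flags
# functions whose parameter lists exceed an illustrative team limit.
import ast
import sys

MAX_PARAMETERS = 5  # a hypothetical team convention, not an industry standard

def check_parameter_counts(path):
    """Report functions that take more parameters than the team allows."""
    with open(path) as source_file:
        tree = ast.parse(source_file.read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            count = len(node.args.args)
            if count > MAX_PARAMETERS:
                print(f"{path}:{node.lineno}: {node.name} takes {count} "
                      f"parameters (team limit is {MAX_PARAMETERS})")

if __name__ == "__main__":
    for source_path in sys.argv[1:]:
        check_parameter_counts(source_path)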
Can You Operationalize Learning from Your Mistakes?
Speaking of customizing the analysis tool, you have a motivation beyond just continuous improvement. In other words, you could go in periodically and add custom rules with the general notion of improving code and design. But you can also add custom rules with more laser-like precision, specifically in response to issues you’ve actually encountered.
For example, let’s say that you find something that represents a pitfall for the team. Perhaps some of the developers make an understandable but costly mistake unless someone who has already been burned points it out. Is the team incorporating this into your static analysis process? Do they take that learning and translate it into a rule, effectively saying, “This won’t burn us again”?
A mature static analysis process allows for exactly that. In fact, it demands it.
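As a sketch of what that might look like, suppose, hypothetically, that a bare except clause once swallowed an error and cost the team a painful production incident. Translating that lesson into a check, again with the standard ast module standing in for your analyzer’s real extension mechanism, could be as simple as this:

# Turning a specific (hypothetical) lesson into a rule: a bare "except:" once
# swallowed an error and cost the team dearly, so this check makes sure that
# particular mistake never sneaks back in. Again, the standard ast module
# stands in for your analyzer's real plugin API.
import ast
import sys

def find_bare_excepts(path):
    """Flag except clauses that silently catch everything."""
    with open(path) as source_file:
        tree = ast.parse(source_file.read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            print(f"{path}:{node.lineno}: bare 'except:' swallows errors; "
                  "name the exception type you actually expect")

if __name__ == "__main__":
    for source_path in sys.argv[1:]:
        find_bare_excepts(source_path)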
Does the Team Understand the Why?
Now I’ll focus a little more on the human factor in the process. If your static analysis process drives continuous improvement, both through learning and through mistake prevention, you’re in good shape. But to really feel good, you need the team’s buy-in.
The process should represent something that helps them, rather than something to which they find themselves subjected. And a big part of this means that they understand the “why” of things beyond “the tool says you shouldn’t do it, so cut it out.”
You’ll actually have an easier time measuring this than you might think. To do so, I recommend having hallway conversations about the analysis process and some of the rules. Ask team members about those rules. Do they understand them? Do they agree with them? Can they think of situations that might actually call for a violation?
If these types of questions prompt earnest discussions and feedback, I’d say you’re in good shape. If you get blank stares or rolls of the eyes, you have work to do.
What Is Management’s Interest in the Process?
I’ll close with a consideration that can really make or break a static analysis process. Examine to what extent and in what way management involves itself in the process.
As an example of what not to do, consider something ubiquitous. Seriously, I see this all the time. Management decides it has a quality problem on its hands, so it installs a tool that monitors test coverage, makes it visible, and presents it in a management-consumable report. “Get this figure up to 70%,” they tell the team. Every time I see this kind of policy, I see two things: the team achieves the coverage goal, and it does so by writing some of the worst unit tests you’ve ever seen.
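If you’ve never had the misfortune of seeing those tests, here’s roughly what they look like, with a hypothetical order_total function standing in for real production code:

# The kind of test a coverage mandate tends to produce. It executes a
# hypothetical order_total function purely to light up lines in the coverage
# report, asserts nothing about the result, and would pass even if the
# function were completely wrong.
import unittest

from billing import order_total  # hypothetical module under "test"

class TestOrderTotal(unittest.TestCase):
    def test_order_total_runs(self):
        # No assertion: 100% "coverage," zero confidence.
        order_total(items=[{"price": 10, "quantity": 3}], tax_rate=0.07)

if __name__ == "__main__":
    unittest.main()

It runs the code, lights up the coverage report, and verifies nothing.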
I’ve heard it said that, when you give a team a metric to hit, they will hit that metric even if they have to burn the company to the ground. Management has to take a great deal of care when it comes to measuring in the technical space. A successful static analysis process is one the team uses voluntarily to do better work. A dangerous one is a process that management uses to dole out grades.
So make sure that you have a process that helps and that you continuously improve. But also make sure that the team buys in and owns it themselves.