I often get inquiries from clients and prospects about setting up and operationalizing static analysis. This makes sense. After all, we live in a world short on time, with software developers in great demand. These clients always seem to have more to do than their bandwidth allows. And static analysis effectively automates subtle but important considerations in software development.
Specifically, it automates peer review to a certain extent. The static analyzer acts as a non-judging, mute reviewer of sorts. It also stands in for a tiny bit of QA’s job, calling attention to possible issues before they leave the team’s environment. And, finally, it helps you out by acting as an architect. Team members can learn from the tool’s guidance.
So, as I’ve said, receiving setup inquiries doesn’t surprise me. And I applaud these clients for pursuing this path of improvement.
What does surprise me, however, is how few organizations ask another, related question. They rarely ask for feedback about the efficacy of their currently implemented process. Many organizations seem to consider static analysis implementation a checkbox kind of activity. Have you done it? Check. Good.
So today, I’ll talk about checking in on an existing static analysis implementation. How should you evaluate your static analysis process?