As an adult, I have learned that I have an introvert personality type. I do alright socially, don’t mind public speaking, and do not (I don’t think) present as an awkward person. So learning about this characterization surprised me somewhat, but only until I fully understood it.
I won’t delve into the finer points of human psychology here, but suffice it to say that introverts prefer to process and grok questions before responding. This describes me to a tee. However, working as a consultant and giving frequent advice clashes with this and has forced me to develop somewhat of a knack for answering extemporaneously. Still, you might ask me just the right question to cause me to cock my head, blink at you, and frown.
I received just such a question the other day. The question, more or less, was, “if we have continuous testing, do we really need static analysis?” And, just like that, I was stumped. This didn’t square, and I wanted time to think on that. Luckily, I’ve had a bit of time. (This is why I love blogging.)
Continuous Testing, Defined
Before we go into the relationship between the concepts, let’s first clarify them. That way, we’ll have no inadvertent misunderstandings via buzzword.
My first introduction to continuous testing was through a tool called NCrunch. It bills itself as an “automated concurrent testing tool,” which certainly offers more precision than “continuous testing.” NCrunch is awesome. If you practice TDD or have unit tests, give it a look. It runs your tests continuously as you write code, providing you real-time, in-IDE, visual feedback as you make changes. Accidentally delete a line and watch the side of your editor window go immediately red.
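To make that feedback loop concrete, here is a minimal sketch of the kind of test a tool like NCrunch keeps re-running as you type. The PriceCalculator class and the use of xUnit are illustrative assumptions, not anything from a particular codebase:

```csharp
using Xunit;

// Hypothetical class under test, here only to illustrate the feedback loop.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal discountPercent) =>
        price - (price * discountPercent / 100m);
}

public class PriceCalculatorTests
{
    [Fact]
    public void ApplyDiscount_TakesPercentageOffThePrice()
    {
        var calculator = new PriceCalculator();

        var discounted = calculator.ApplyDiscount(100m, 10m);

        // With continuous testing, breaking ApplyDiscount (or deleting a line
        // of it) turns this assertion red in the editor as you type.
        Assert.Equal(90m, discounted);
    }
}
```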
In the intervening years, I have seen this term broaden into a companion for concepts such as continuous integration (CI) and continuous deployment (CD). In agile environments, we integrate constantly and, ideally, deploy (somewhere) constantly. Why give testing short shrift? With continuous testing, your environments constantly pepper your build candidates with runtime tests, providing feedback early instead of near the end of the sprint.
So we have two concepts that we can generalize. The first concept involves tightening the unit test feedback loop, while the second involves the same for integration and acceptance tests. In both cases, we consistently test our code’s runtime behavior with a fast-feedback loop.
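As a rough illustration of that second, suite-level loop, a build server practicing continuous testing might run tests like the following against every build candidate. The OrderService, InMemoryOrderRepository, and the "Acceptance" trait are assumptions made up for this sketch:

```csharp
using System.Collections.Generic;
using Xunit;

// Illustrative domain types; a real codebase would define these in the application itself.
public record Order(int Id, decimal Total);

public interface IOrderRepository
{
    void Save(Order order);
    Order Find(int id);
}

public class InMemoryOrderRepository : IOrderRepository
{
    private readonly Dictionary<int, Order> _orders = new();

    public void Save(Order order) => _orders[order.Id] = order;

    public Order Find(int id) => _orders[id];
}

public class OrderService
{
    private readonly IOrderRepository _repository;

    public OrderService(IOrderRepository repository) => _repository = repository;

    public void PlaceOrder(Order order) => _repository.Save(order);
}

public class OrderAcceptanceTests
{
    [Fact]
    [Trait("Category", "Acceptance")]
    public void PlacedOrder_CanBeRetrievedWithItsTotal()
    {
        var repository = new InMemoryOrderRepository();
        var service = new OrderService(repository);

        service.PlaceOrder(new Order(42, 99.95m));

        // A continuous testing setup runs suite-level tests like this against
        // every build candidate, so a regression surfaces within minutes of
        // the commit rather than at the end of the sprint.
        Assert.Equal(99.95m, repository.Find(42).Total);
    }
}
```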
Static Analysis, Defined
You probably think of static analysis as NDepend. Or perhaps you think of it as what your linting tool or your productivity add-in does. These tools tell you about problems with your code.
But with static analysis, note the inputs and outputs. The input to static analysis is source code, and the output is feedback about the source code. Static analysis concerns itself with code properties that require no building or running of your application. The entire idea is to reason about the code structure and what that structure will mean at runtime.
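For instance, an analyzer can look at a snippet like the following and warn about a possible null dereference without ever executing it. The class is made up for illustration, and the specific warning cited is the C# compiler's nullable reference analysis, used here as a stand-in for static analysis generally:

```csharp
#nullable enable

public class CustomerGreeter
{
    // The signature admits null, and the analyzer can see that from the code alone.
    public string? FindPreferredName(string customerId) =>
        customerId == "42" ? "Alice" : null;

    public string BuildGreeting(string customerId)
    {
        var name = FindPreferredName(customerId);

        // Static analysis flags this line (for example, the C# compiler's
        // nullable analysis reports CS8602, a possible null dereference)
        // purely from the code's structure: no build candidate, no test run,
        // no runtime input required.
        return "Hello, " + name.ToUpper();
    }
}
```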
Understandable Conflation
You may note that I have deliberately framed these definitions in such a way as to make the contrast obvious. In case I wasn’t obvious enough about it, the distinction is runtime vs compile time. Testing your code (or continuously testing it, as it were) happens with an examination of runtime behavior, while static analysis happens via an examination of code properties.
But even with that distinction, I can understand why someone might think continuous testing obviates static analysis. “If you constantly test your code and thus know that it works, why bother with upstream analysis? It probably works, so ship it.”
Not so fast.
Static Analysis Value Proposition
To be clear, many static analysis tools catch potential issues that an automated test suite could also catch. Static analysis can flag issues like, “this may be a null dereference” or “this code allows SQL injection.” And, of course, automated tests can detect these situations too. Beyond that, continuous tests detect them quickly.
But static analysis detects them before you ever build or run. If NCrunch catches that null dereference, then great, static analysis wins by a matter of seconds. But what about a case where the unit test suite misses it? What if you commit it and promote it to a build candidate, and the defect only surfaces when the fuller suite runs against that candidate? What if other people have pulled that code in the meantime? It’s great that you get the feedback in 10-30 minutes, but wouldn’t immediate feedback have been preferable?
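The SQL injection case makes the timing difference vivid. A security-focused analyzer flags the concatenated query below just by reading it, while a test suite only tells the two methods apart if someone thinks to write a test that feeds in a malicious string. The UserLookup class and connection string are illustrative assumptions:

```csharp
using Microsoft.Data.SqlClient;

public class UserLookup
{
    private const string ConnectionString =
        "Server=.;Database=App;Integrated Security=true;";

    // A security analyzer flags this method without running it: user input
    // concatenated into SQL is an injection risk, visible in the source alone.
    public SqlCommand FindUserVulnerable(string userName)
    {
        var connection = new SqlConnection(ConnectionString);
        return new SqlCommand(
            "SELECT * FROM Users WHERE Name = '" + userName + "'", connection);
    }

    // The parameterized version satisfies the analyzer; at runtime, only a
    // deliberately malicious test input would reveal the difference.
    public SqlCommand FindUserSafe(string userName)
    {
        var connection = new SqlConnection(ConnectionString);
        var command = new SqlCommand(
            "SELECT * FROM Users WHERE Name = @name", connection);
        command.Parameters.AddWithValue("@name", userName);
        return command;
    }
}
```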
Let’s look beyond that, though. Early warnings about runtime behavior offer only a slice of the static analysis value proposition. The bigger pie comes in the form of insight into the ownership cost of your codebase. That may sound like a leap, but bear with me.
Static analysis can, at a glance, tell you whether you have a pristine, maintainable codebase on your hands or a snarled Death Star of dependencies and unreadable nonsense. These considerations matter: they can make new features orders of magnitude more expensive, and they can slow down your team and your business.
Runtime testing can’t tell you that the application is a maintenance nightmare, even when run continuously. You need insight into the code itself for that.
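As a sketch of what “insight into the code itself” means, consider a class like the one below. Every name here is hypothetical and the collaborators are stubbed out only so the example compiles. Its tests can all pass, yet coupling and dependency metrics from a static analysis tool would flag it as expensive to change:

```csharp
// All of these collaborator types are made-up stand-ins; the point is the
// shape of the dependencies, not the names.
public class Order { public int Id { get; init; } }
public class SqlOrderRepository { public Order Load(int id) => new Order { Id = id }; }
public class SmtpEmailSender { public void SendConfirmation(Order order) { } }
public class FileSystemAuditLog { public void Record(int orderId) { } }
public class PdfInvoiceGenerator { public void Generate(Order order) { } }

public class OrderProcessor
{
    // Every collaborator is a concrete type, instantiated directly.
    private readonly SqlOrderRepository _repository = new SqlOrderRepository();
    private readonly SmtpEmailSender _emailSender = new SmtpEmailSender();
    private readonly FileSystemAuditLog _auditLog = new FileSystemAuditLog();
    private readonly PdfInvoiceGenerator _invoiceGenerator = new PdfInvoiceGenerator();

    public void Process(int orderId)
    {
        // The runtime behavior can be correct and fully covered by tests,
        // but coupling metrics (efferent coupling, concrete-type fan-out,
        // dependency cycles) reveal that changes to storage, email, logging,
        // or invoicing all ripple through this one class.
        var order = _repository.Load(orderId);
        _invoiceGenerator.Generate(order);
        _emailSender.SendConfirmation(order);
        _auditLog.Record(orderId);
    }
}
```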
Complementary Technologies
I cannot overstate how much static analysis and runtime testing complement one another. I avidly stump for test-driven development, acceptance testing, and static analysis. All three, all the time, all good. Seriously.
Writing code that does what users want matters. They wouldn’t pay for your product otherwise. But writing code that you can adapt to what users want tomorrow also matters. And static analysis helps with that.
So set up a good test suite, run it continuously, and keep track of your code with static analysis. Doing this will ensure that you please your users both today and in the future.