CRAP Metric Is a Thing And It Tells You About Risk in Your Code

November 9, 2017

I won’t lie.  As I thought about writing this post, I took special glee in contemplating the title.  How should I talk about the CRAP metric?  *Snicker*

I guess that just goes to show you that some people, like me, never truly grow up.  Of course, I’m in good company with this, since the original authors of the metric had the same thought.  They wanted to put some quantification behind the common, subjective declaration, “This code is crap!”

To understand that quantification, let’s first consider that CRAP is actually an acronym: C.R.A.P.  It stands for Change Risk Anti-Patterns, so it addresses the risk involved in changing a bit of code.  In other words, methods with high CRAP scores are risky to change.  So the CRAP metric is all about assessing risk.

When you get a firm grasp on this metric, you get a nice way to assess risk in your codebase.

The CRAP Metric: Getting Specific

Okay, so how does one quantify risk of change?  After all, there are a lot of ways that one could do this.  Well, let’s take a look at the formula first.  The CRAP score is a function of methods, so we’ll call it CRAP(m), mathematically speaking.  (And yes, typing CRAP(m) made me snicker all over again.)

Let CC(m) = the cyclomatic complexity of a method and U(m) = the fraction of the method not covered by unit tests (0 for fully covered, 1 for completely uncovered).

CRAP(m) = CC(m)^2 * U(m)^3 + CC(m).

Alright, let’s unpack this a bit.  To arrive at a CRAP score, we need a method’s cyclomatic complexity and its code coverage (or really lack thereof).  With those figures, we multiply the square of a method’s complexity by the cube of its rate of uncovered code.  We then add its cyclomatic complexity to that.  I’ll discuss the why of that a little later, but first let’s look at some examples.
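To make the arithmetic concrete, here’s a minimal C# sketch of the formula.  The helper name and signature are my own invention for illustration, not anything from NDepend:

```csharp
// CRAP(m) = CC(m)^2 * U(m)^3 + CC(m)
// cyclomaticComplexity: the method's cyclomatic complexity, CC(m).
// uncoveredFraction: the share of the method NOT covered by tests,
// from 0.0 (fully covered) to 1.0 (no coverage at all), U(m).
static double CrapScore(int cyclomaticComplexity, double uncoveredFraction)
{
    double cc = cyclomaticComplexity;
    double u = uncoveredFraction;
    return cc * cc * u * u * u + cc;
}
```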

First, consider the simplest, least CRAP-y method imaginable: a method completely covered by tests and with no control flow logic.  That method has a cyclomatic complexity of 1 and an uncovered fraction of 0.  That means that CRAP(m) = 1^2 * 0^3 + 1 = 1.  So the minimum CRAP metric score is 1.  What about a gnarly method with no test coverage and cyclomatic complexity of 6?  CRAP(m) = 6^2 * 1^3 + 6 = 42.
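Plugging both of those cases into the sketch above confirms the numbers:

```csharp
Console.WriteLine(CrapScore(1, 0.0)); // 1:  trivial and fully covered
Console.WriteLine(CrapScore(6, 1.0)); // 42: complex and completely untested
```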

The authors of this metric define a CRAP-y method as one with a CRAP score greater than 30, so that last method would qualify.

Why Complexity and Test Coverage?

Let’s pull back out of the weeds for a minute here.  Why are we concerning ourselves with those two particular metrics, notwithstanding squares and cubes and whatnot?

Cyclomatic complexity measures the number of linearly independent paths through a method.  So as methods get more complex in terms of conditionals, loops, and the like, their cyclomatic complexity goes up.  The CRAP metric score varies directly with complexity: the more complex the method, the CRAP-ier it is.
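For instance, here’s a hypothetical method (mine, purely for illustration) with a cyclomatic complexity of 3: one for the baseline path through the method, plus one decision point each for the loop and the conditional:

```csharp
// Cyclomatic complexity of 3: the baseline path through the method,
// plus one decision point for the loop and one for the if.
static int CountPositives(int[] values)
{
    int count = 0;
    foreach (int value in values) // decision point #1
    {
        if (value > 0)            // decision point #2
        {
            count++;
        }
    }
    return count;
}
```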

Test coverage is the percentage of a method’s code exercised by the unit test suite.  The CRAP metric score varies inversely with unit test coverage: more test coverage means a lower CRAP score.

Taken together, this tells a story.  Generally speaking, the more complex a method is, the higher the chance for errors of reasoning on the part of those maintaining it.  So more complex methods with more paths through them need more testing in order not to be risky and problematic for changes.  Defects live in complex methods and in untested methods, and if methods are both complex and untested, you’re going to have serious issues.

So the CRAP metric serves to give you a quick quantification of risk for issues as a function of methods in your codebase.

History and Nuance of the CRAP Metric

Okay, so that’s the gist of it.  But now you’re probably wondering about the squares and cubes and such.  Where did the authors of the metric get those specific figures?

Well, they didn’t just make them up and call it a day.  Instead, they did some research and created a best-fit curve based on experimentation, trial, and error.  Let’s take a look at the formula in terms of what the math involved says.

There are two components to the formula: a simple, linear measure of cyclomatic complexity, and a more complicated term that multiplies the square of cyclomatic complexity by the cube of the method’s uncovered fraction.  What does that mean?  Well, it means that adding test coverage generally goes a long way toward mitigating CRAP-iness.  But if you write a huge, complex enough beast, even 100% coverage won’t entirely mitigate the risk.  And that makes sense.

If you write a method with a cyclomatic complexity of 10, you can bring it below the CRAP threshold of 30 by covering 42% or more of it with tests.  With a cyclomatic complexity of 25, you’ll need to get that coverage up to 80%.  But if your cyclomatic complexity exceeds 30, then no amount of testing in the world can make that method non-CRAP-y.

And that makes sense.  Complexity of 30 is jaw-droppingly complex.

On the flip side, if you keep the cyclomatic complexity of your methods at five or below, you actually don’t need to write any unit tests to keep them at or under the threshold (though I recommend you write unit tests for any method).
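You can work out these break-even points yourself by solving the formula for U(m).  Here’s a small sketch that does exactly that; again, the helper is my own, not part of any tool:

```csharp
// Returns the minimum test coverage percentage that keeps a method
// with the given cyclomatic complexity at or below a CRAP score of 30,
// or null if no amount of coverage can get it there.
static double? MinCoverageForThreshold(int cyclomaticComplexity, double threshold = 30)
{
    double cc = cyclomaticComplexity;
    // Solve cc^2 * u^3 + cc <= threshold for u, the uncovered fraction.
    double maxUncoveredCubed = (threshold - cc) / (cc * cc);
    if (maxUncoveredCubed < 0)
        return null; // cc alone exceeds the threshold; coverage can't help.
    double maxUncovered = Math.Min(1.0, Math.Cbrt(maxUncoveredCubed));
    return (1.0 - maxUncovered) * 100;
}

// MinCoverageForThreshold(10) ~= 41.5  -- the "42% or more" above
// MinCoverageForThreshold(25) == 80
// MinCoverageForThreshold(31) == null  -- can't be tested into shape
// MinCoverageForThreshold(5)  == 0     -- at the threshold even untested
```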

Measuring and Fixing Method CRAP with NDepend

It might not be the first thing you notice when you install NDepend, but you actually get a CRAP score for your methods right out of the box.  You will, however, need to import your code coverage data for it to work.  Without that, NDepend can’t compute one of the two core components of the CRAP score.

Once you’ve imported coverage data, you can get to the CRAP score through the queries and rules explorer.  It’s listed in the code coverage section as a warning that “Methods should have a low CRAP score.”  Out of the box, NDepend will warn you about any method that is longer than 10 lines and has a CRAP score greater than 30.
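If you’re curious what that rule looks like under the hood, it’s an editable CQLinq query.  The sketch below is paraphrased rather than copied verbatim, so treat the exact property names and conditions as approximations of what ships in the product:

```csharp
// Paraphrased sketch of NDepend's "low CRAP score" rule in CQLinq.
warnif count > 0
from m in JustMyCode.Methods
where m.NbLinesOfCode > 10
let cc = m.CyclomaticComplexity
let uncovered = (100 - m.PercentageCoverage) / 100d
let crap = cc * cc * uncovered * uncovered * uncovered + cc
where crap > 30
orderby crap descending
select new { m, crap, cc, m.PercentageCoverage }
```

Because it’s just a query, you can also clone it and lower the threshold as your codebase improves.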

This makes an excellent starting point, I’d say.  If you have methods with CRAP scores through the roof, you should first get them completely under test.  With that under your belt, start chipping away at refactoring them until you get below the threshold (assuming covering them with tests didn’t already do the trick).

But once you’ve gotten everything under 30, I wouldn’t just call it a day.  Clean, well-factored code usually keeps cyclomatic complexity per method in the 1-3 range and has completely or mostly covered methods.  So I’d gradually ratchet that threshold downward, stopping when the rule starts to flag methods that are really, truly fine.

The Importance of CRAP

I’ve presented you today with a relatively arcane metric.  People with a Java background might remember crap4j, but it’s not like this metric ever really took off globally (the way that both cyclomatic complexity and test coverage have, for instance).  So why should you care?

Well, as you grow codebases and push toward releases, it’s easy to lose sight of pockets of code that could come back to haunt you.  They slip out of your general attention window and into the background of your codebase.  And they don’t matter to you until a bug rears its head or a new feature comes up. Then they matter a lot.  Suddenly you’re confronted with a method you REALLY don’t want to change.

Keeping track of CRAP scores in your codebase helps you prevent moments like those.  As they say, an ounce of CRAP prevention is worth a pound of cure.  *Snickers*

Comments:

  1. I enjoyed reading this both for the humor and the information. Thanks for sharing.

  2. Your formula must be wrong, as any complexity value with 100% code test coverage will always equal one, as any number multiplied by zero always equals zero. So you can get away with infinite complexity as long as you’re fully tested.

  3. Todd Sussman says:

    The idea behind CRAP is to analyze risk, not ensure good design principles. If the class under test is fully covered, then changing the code has no risk; if I break something, I will know right away.
