
CRAP Metric Is a Thing And It Tells You About Risk in Your Code

I won’t lie.  As I thought about writing this post, I took special glee in contemplating the title.  How should I talk about the CRAP metric?  *Snicker*

I guess that just goes to show you that some people, like me, never truly grow up.  Of course, I’m in good company with this, since the original authors of the metric had the same thought.  They wanted to put some quantification behind the common, subjective declaration, “This code is crap!”

To understand that quantification, let’s first consider that CRAP is actually an acronym: C.R.A.P.  It stands for change risk anti-patterns, so it addresses the risk of changing a bit of code.  In other words, methods with high CRAP scores are risky to change.  So the CRAP metric is all about assessing risk.

When you get a firm grasp on this metric, you get a nice way to assess risk in your codebase.

The CRAP Metric: Getting Specific

Okay, so how does one quantify risk of change?  After all, there are a lot of ways that one could do this.  Well, let’s take a look at the formula first.  The CRAP score is a function of methods, so we’ll call it CRAP(m), mathematically speaking.  (And yes, typing CRAP(m) made me snicker all over again.)

Let CC(m) = the cyclomatic complexity of a method and U(m) = the percentage of the method not covered by unit tests, expressed as a fraction between 0 and 1.

CRAP(m) = CC(m)^2 * U(m)^3 + CC(m).

Alright, let’s unpack this a bit.  To arrive at a CRAP score, we need a method’s cyclomatic complexity and its code coverage (or really lack thereof).  With those figures, we multiply the square of a method’s complexity by the cube of its rate of uncovered code.  We then add its cyclomatic complexity to that.  I’ll discuss the why of that a little later, but first let’s look at some examples.

First, consider the simplest, least CRAP-y method imaginable: a method completely covered by tests and with no control flow logic.  That method has a cyclomatic complexity of 1 and an uncovered fraction of 0.  That means that CRAP(m) = 1^2 * 0^3 + 1 = 1.  So the minimum CRAP metric score is 1.  What about a gnarly method with no test coverage and a cyclomatic complexity of 6?  CRAP(m) = 6^2 * 1^3 + 6 = 42.

The authors of this metric define a CRAP-y method as one with a CRAP score greater than 30, so that last method would qualify.
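If you want to see the arithmetic in action, here’s a minimal C# sketch of the formula.  To be clear, this is just an illustration of the math above, not how NDepend or the metric’s authors actually compute it, and the method and parameter names are mine.

```csharp
using System;

static class CrapDemo
{
    // complexity = CC(m), the method's cyclomatic complexity.
    // uncovered  = U(m), the fraction of the method NOT covered by tests, from 0.0 to 1.0.
    static double Crap(int complexity, double uncovered) =>
        Math.Pow(complexity, 2) * Math.Pow(uncovered, 3) + complexity;

    static void Main()
    {
        Console.WriteLine(Crap(1, 0.0)); // 1  -- the fully covered, branchless method
        Console.WriteLine(Crap(6, 1.0)); // 42 -- the untested method with complexity 6, well over the 30 threshold
    }
}
```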

Continue reading CRAP Metric Is a Thing And It Tells You About Risk in Your Code

The Singleton Design Pattern: Impact Quantified

This post has been about a month in the offing.  Back in August, I wrote about what the singleton pattern costs you.  This prompted a good bit of discussion, most of which was (as it always is) anecdotal.  So a month ago, I conceived of an experiment that I called the singleton challenge.  Well, the results are in.  I’m going to quantify the impact of the singleton design pattern on codebases.

I would like to offer an up-front caveat.  I’ve been listening lately to a fascinating audiobook called “How to Measure Anything,” and it has some wisdom for this situation.  Measurement is primarily about reducing uncertainty.  And one of the driving lessons of the book is that you can measure things — reduce uncertainty — without getting published in a scientific journal.

I mention that because it’s what I’ve done here.  I’ll get into my methodology momentarily, but I’ll start by conceding that I didn’t (and couldn’t) control for all variables.  I looked for correlation as a starting point because going for causation might prove prohibitive.  But I think I took a much bigger bite out of trying to quantify this than anyone has so far.  If anyone has, I’ve never seen it.

A Quick Overview of the Methodology

As I’ve mentioned in the past on this blog, I earn a decent chunk of my consulting income doing application portfolio assessments.  I live and breathe static code analysis.  So over the years, I’ve developed an arsenal of techniques and intellectual property.

This IP includes an extensive codebase assessor that makes use of the NDepend API to analyze codebases en masse, store the results, and report on them.  So I took this thing and pointed it at GitHub.  I then stored information about a lot of codebases.

But let’s get specific.  Here’s a series of quick-hitter bullets about the experiment that I ran:

  • I found this page with links to tons of C# projects on GitHub, so I used that as a “random” selection of codebases that I could analyze.
  • I gave my mass analyzer an ordered list of the codebase URLs and turned it loose.
  • Anything that didn’t download properly, decompress properly, or compile properly (migrating to .NET Core where necessary, restoring NuGet packages, and building from the command line), I discarded.  This probably creates a bias toward better codebases.
  • Minus problematic codebases, I built all solutions in the directory structure and made use of all compiled, non-third-party DLLs for analysis.
  • I stored the results in my database and queried the same for the results in the rest of the post.
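To give you a feel for the shape of that pipeline, here’s a rough C# sketch.  The helper methods are hypothetical stand-ins of my own; the real tool drives the NDepend API, which I’m not reproducing here, and as described above, anything that fails a step simply gets thrown out.

```csharp
using System;
using System.Collections.Generic;

static class MassAnalyzerSketch
{
    static void Run(IEnumerable<string> repositoryUrls)
    {
        foreach (var url in repositoryUrls)
        {
            try
            {
                string workingDir = Download(url);                    // download and decompress the repository
                IReadOnlyList<string> assemblies = Build(workingDir); // restore packages, build, keep non-third-party DLLs
                object results = Analyze(assemblies);                 // compute metrics per assembly/type/method
                Store(url, results);                                  // persist for later querying
            }
            catch (Exception)
            {
                // Codebases that fail to download, build, or analyze are discarded.
            }
        }
    }

    // Hypothetical helpers, shown only to convey the pipeline's structure.
    static string Download(string url) => throw new NotImplementedException();
    static IReadOnlyList<string> Build(string workingDir) => throw new NotImplementedException();
    static object Analyze(IReadOnlyList<string> assemblies) => throw new NotImplementedException();
    static void Store(string url, object results) => throw new NotImplementedException();
}
```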

I should also note that, while I invited anyone to run analysis on their own code, nobody took me up on it.  (By all means, still do it, if you like.)

Singleton Design Pattern: The Results in Broad Strokes

First, let’s look at the scope of the experiment in terms of the code I crunched.  I analyzed:

  • 100 codebases
  • 986 assemblies
  • 5,086 namespaces
  • 72,615 types
  • 501,257 methods
  • 1,495,003 lines of code

From there, I filtered down raw numbers a bit.  I won’t go into all of the details because that would make this an immensely long post.  But suffice it to say that I discounted certain pieces of code, such as compiler-generated methods, default constructors, etc.  I adjusted this so we’d look exclusively at code that developers on these projects wrote.

Now, let’s look at some statistics regarding the singleton design pattern in these codebases.  NDepend has functionality for detecting singletons, which I used.  I also used more of its functionality to distinguish between stateless singleton implementations and ones containing mutable state.  Here’s how that breaks down:

Continue reading The Singleton Design Pattern: Impact Quantified

Get Smart — Go Beyond Cyclomatic Complexity in C#

Recently, I wrote a post explaining the basics of cyclomatic complexity.  You can read that for a deep dive, but for our purposes here, let’s be brief about defining it.  Cyclomatic complexity refers to the number of “linearly independent” paths through a chunk of code, such as a method.  Understand this by thinking in terms of debugging.  If you could trace only one path through a method, it has a cyclomatic complexity of one.  But throw a conditional in there, introducing a second path you could trace, and the complexity grows to two.
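To make that concrete, here’s a pair of trivial C# methods (my own examples, not taken from that post): the first has exactly one path through it, and the conditional in the second introduces a second linearly independent path.

```csharp
static class ComplexityExamples
{
    // Only one path to trace: cyclomatic complexity of 1.
    public static int Double(int x)
    {
        return x * 2;
    }

    // The "if" adds a second linearly independent path: cyclomatic complexity of 2.
    public static int DoubleIfPositive(int x)
    {
        if (x > 0)
        {
            return x * 2;
        }

        return x;
    }
}
```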

Today, I’ll talk specifically about C#.  Cyclomatic complexity in C# is just, well, cyclomatic complexity applied to the language C#.  No big mystery there.

But what I want to talk about today is not cyclomatic complexity — not per se.  Today, I’d like to talk about how you can go beyond cyclomatic complexity in C# to get some even more intelligent metrics.  How can you really zero in on sources of risk and complexity in your code?

Wait, What’s Wrong with Cyclomatic Complexity?

Let me be clear.  There’s absolutely nothing wrong with tracking cyclomatic complexity.

It’s a great aid and shorthand for reasoning about your code’s relative complexity and for understanding where testing challenges lie.  You can use it to locate complexity “hot spots” in your code and then to address them in ways that make sense.  So no criticism whatsoever.  I’m just advocating that you go beyond it.

Think of it this way.  When I encourage you to install a Visual Studio plugin, I’m not knocking Visual Studio.  Visual Studio is a wonderful and productive IDE, in my estimation.  Instead, I’m encouraging you to make it even better — to enhance your experience.

The same sort of reasoning applies here.  Cyclomatic complexity is a great start for reasoning about your code’s complexity.  But we can add some considerations to make your life even better.  Let’s take a look at those.

The rest of this post will show you in detail what that looks like.  But if you want to try it out for yourself, you’ll need to download a copy of NDepend.

Continue reading Get Smart — Go Beyond Cyclomatic Complexity in C#

Understanding Cyclomatic Complexity

Wander the halls of an enterprise software outfit looking to improve, and you’ll hear certain things.  First and foremost, you’ll probably hear about unit test coverage.  But, beyond that, you’ll hear discussion of a smattering of other metrics, including cyclomatic complexity.

It’s actually sort of funny.  I mean, I understand why this happens, but hearing middle managers say “test coverage” and “cyclomatic complexity” has the same jarring effect as hearing developers spout business-meeting-speak.  It’s just not what you’d naturally expect.

And you wouldn’t expect it for good reason.  As I’ve argued in the past, code coverage shouldn’t be a management concern.  Nor should cyclomatic complexity.  These are shop-heavy specifics about particular code properties.  If management needs to micromanage at this level of granularity, you have a systemic problem.  You should worry about these properties of your code so that no one else has to.

With that in mind, I’d like to focus specifically on cyclomatic complexity today.  You’ve probably heard this term before.  You may even be able to rattle off a definition.  But let’s take a look in great detail to avoid misconceptions and clear up any hazy areas.

Defining Cyclomatic Complexity

First of all, let’s get a specific working definition.  This is actually surprisingly difficult because not all sources agree on the exact method for computing it.

How can that be?  Well, the term was dreamed up by a man named Thomas McCabe back in 1976.  He wanted a way to measure “the number of linearly independent paths through a program’s source code.”  But beyond that, he didn’t specify the mechanics exactly, leaving that instead to implementers of the metric.

He did, however, give it an intimidating-sounding name.  I mean, complexity makes sense, but what does “cyclomatic” mean, exactly?  Well, “cyclomatic number” serves as an alias for something more commonly called circuit rank.  Circuit rank measures the number of independent cycles within a cyclic graph.  So I suppose he coined the neologism “cyclomatic complexity” by borrowing a relatively obscure discrete math concept for path independence and applying it to code complexity.
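For the curious, the usual graph-theoretic formulation (supplementary background, not something this excerpt spells out) measures a method’s control flow graph with E edges, N nodes, and P connected components:

M = E - N + 2P

A single method has P = 1, so a lone if/else, with four nodes and four edges, works out to 4 - 4 + 2 = 2, matching the intuition of two independent paths.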

Well then.  Now we have cyclomatic complexity, demystified as a term.  Let’s get our hands dirty with examples and implications.

Continue reading Understanding Cyclomatic Complexity