A while back, I made a post on my blog announcing a series on building a better, composite code metric. Well, it’s time to get started!
But before I dive in headfirst, I think it’s important to set the scene with some logistics and some supporting theory. Logistically, let’s get specific about the tools and code that I’ll be using in this series. To follow along, get yourself a copy of NDepend v6, which you can download here. You can try to follow along with an older version of the tool, but caveat emptor, as I’ll be using the latest bits. Secondly, I’m going to use a codebase of mine as a guinea pig for this development. That codebase is Chess TDD on GitHub, and it’s what I use for my Chess TDD series on my blog and YouTube. This gives us a controlled codebase, but one that is both active and non-trivial.
What are Static Analysis and Code Metrics, Anyway?
Now, onto the supporting theory. Before we can build meaningful code metrics, it’s important to understand what static analysis is and what code metrics are. Static analysis comes in many shapes and sizes. When you simply inspect your code and reason about what it will do, you are performing static analysis. When you submit your code to a peer for review, she does the same thing.
Like you and your peer, compilers perform static analysis on your code, though, obviously, they do so in an automated fashion. They check the code for syntax or linking errors that would guarantee failure, and they also provide warnings about potential problems, such as unreachable code or assignment where a comparison was intended. Other products check your source code for certain characteristics and stylistic guideline conformance rather than worrying about what happens at runtime, and in managed languages, still other products analyze your compiled IL or byte code. The common thread is that all of these examples of static analysis involve analyzing your code without actually executing it.
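To make the compiler-warning flavor of static analysis concrete, here’s a small, illustrative C# fragment (not from the Chess TDD codebase) that a compiler flags without ever running it:

```csharp
using System;

public class WarningDemo
{
    public int Doubled(int x)
    {
        return x * 2;
        // The compiler warns here (CS0162: unreachable code detected),
        // purely by analyzing control flow -- no execution required.
        Console.WriteLine("never runs");
    }

    public void Toggle(bool flag)
    {
        // Assignment where a comparison was likely intended; the compiler
        // warns statically (CS0665: assignment in conditional expression).
        if (flag = true)
        {
            Console.WriteLine("always runs");
        }
    }
}
```

Both warnings come from the compiler reasoning about the code’s structure, which is exactly the kind of analysis tools like NDepend build on.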
The byproduct of this sort of analysis is generally some code metrics. MSDN defines code metrics as, “a set of software measures that provide developers better insight into the code they are developing.” I would add to that definition that code metrics are objective, observable properties of code. The simplest example on that page is “lines of code” which is just, literally, how many lines of code there are in a given method or class. Automated analysis tools are ideally suited for providing metrics quickly to developers. NDepend provides a lot of them, as you can see here.
It’s important to make one last distinction before we move on: simple versus composite metrics. An example of a simple metric is the aforementioned lines of code. Look at a method, count the lines of code, and you’ve got the metric’s value. A composite metric, on the other hand, is a higher-level metric that you obtain by performing some kind of mathematical transform on one or more simple metrics. As a straightforward example, let’s say that instead of just counting lines of code, you defined a metric called “lines of code per method parameter.” This would be a metric about methods in your code base, computed by taking the number of lines of code in a method and dividing it by the number of parameters to that method.
Is this metric valuable? I honestly don’t know — I just made it up. It’s interesting (though you’d have to do something about the 0 parameter case), but if I told you that it was important, the onus would be on me to prove it. I mention this because it’s important to understand that composite metrics like Microsoft’s “maintainability index” are, at their core, just mathematical transforms on observable properties that the creator of the metric asserts you should care about. There is often study and experimentation behind them, but they’re not unassailable measures of quality.
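Just to preview the mechanics, that made-up “lines of code per method parameter” metric could be sketched as a CQLinq query. NbLinesOfCode and NbParameters are both standard CQLinq method properties, but treat the query below as an unvetted sketch; in practice the nullable property types may need a null check or a .Value:

```csharp
// Hypothetical composite metric: lines of code per parameter.
// The where clause sidesteps the zero-parameter case mentioned above.
from m in JustMyCode.Methods
where m.NbParameters > 0
let locPerParameter = (double)m.NbLinesOfCode / m.NbParameters
select new { m, locPerParameter }
```

The point is that a composite metric is nothing more mysterious than this: simple, observable properties fed through arithmetic that someone decided was meaningful.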
Let’s Look at NDepend Metrics
With the theory and back-story out of the way, let’s roll up our sleeves and get down to business. You’re going to need NDepend installed now, so if you haven’t done that yet, check out the getting started guide. Installing NDepend is beyond the scope of this series.
If you’re going to follow along with me, clone the Chess TDD codebase and open it up in Visual Studio. The first thing we need to do is attach an NDepend project. With the codebase open, that will be your first option in the NDepend window. I attached a project, named it “Chess Analysis,” and saved the NDepend project file in the root folder of the project, alongside the .sln file. This makes it easy to put the project under source control, if you choose, and to keep track of it at a high level.
Once you’ve created and attached the project, run an analysis. You can do this by going to the NDepend menu and selecting Analyze->Run Analysis.
Now, we’re going to take a look at the queries and rules explorer. NDepend has a lot of cool features and out-of-the-box functionality around metrics, but let’s dive into something specific for this post; we’ll get to a lot of the other stuff later in the series. Navigate in the NDepend menu to Rule->View Explorer Panel. This will open the queries and rules explorer. Click the “Create Group” button and create a rule group called “My Rules.”
Now, right click on “My Rules” and select “Create Child Query,” which will bring up the queries and rules editor window. There’s a bit of comment XML at the top, which is what will control the name of the rule as it appears in the explorer window. Let’s change that to “Lines of Code.” And, for the actual substance of the query, type:
    from m in JustMyCode.Methods
    select new { m, m.Name, m.NbLinesOfCode }
It should look like this:
Congratulations! You’ve just created your first code metric. Never mind the fact that lines of code is a metric almost as old as code itself, or the fact that you didn’t actually invent it. Kidding aside, there is a victory to be celebrated here: you have now successfully created a code metric and started capturing it.
Next time, we’ll start building an actual, new metric that NDepend isn’t already providing you out of the box. Stay tuned!