NDepend

Improve your .NET code quality with NDepend

Bringing Objectivity to Clean Code

If you want to stir up a pretty serious amount of discussion-churn, wander over to where the software developers sit and ask for a consensus definition of “clean code.”  This probably won’t start a religious war — it’s not like asking for consensus on the best programming language or development tool.  You’ll probably find a lot of enthusiastic agreement with different flavors of the same basic concept.  This is true among luminaries of the field, as quoted here on DZone, and it’s most likely to be true in any given shop.

There will be agreement on the broad concepts and inevitable debate as the points become of a finer grain.  Developers can all agree that code should be “maintainable” and “easy to read” but you might get a bit of fragmentation around subjects like variable naming or relative compactness and ‘density’ of code.  Have the developers look at a bit of code and ask them if it could be “cleaner” and you’ll probably get an array of responses, including potential disagreement and thrash.  This will become especially true if they get hung up on cosmetic particulars like indentation, bracket placement, and casing.

So where does that leave us, exactly, when asked the deceptively simple question, “is this clean code?”  Programmers can arrive at a broad consensus on how to answer that question, but not necessarily on the answer itself.  They’ll all say, “well, it’s clean if it’s readable,” but some might give a particular bit of code a thumbs up while others give it a thumbs down.  If you’re a developer, this can be fun or it can be frustrating.  If you’re a non-technical stakeholder, such as a director, project manager, tester or business analyst, it can be confusing and maddening.  “So is this code good or not!?” Continue reading Bringing Objectivity to Clean Code

The Myth of the Software Rewrite

“We can’t go on like this.  We need to rewrite this thing from scratch.”

The Writing is on the Wall

These words infuriate CIOs and terrify managers and directors of software engineering.  They’re uttered haltingly, reluctantly, by architects and team leads.  The developers working on the projects on a day to day basis, however, often make these statements emphatically and heatedly.

All of these positions are understandable.  The CIO views a standing code base as an asset with sunk cost, much the way that you’d view a car that you’ve paid off.  It’s not pretty, but it gets the job done.  So you don’t want to hear a mechanic telling you that it’s totaled and that you need to spend a lot of money on a new one.  Managers reporting to these CIOs are afraid of being that mechanic and delivering the bad news.

Those are folks whose lives are meetings, PowerPoint decks, and spreadsheets, though.  If you’re a developer, you live the day to day reality of your code base.  And, to soldier on with the metaphor a bit, it’s pretty awful if your day to day reality is driving around a clunker that leaves car parts on the road after every pothole.   You don’t just start to daydream about how nice it would be to ride around in a reliable, new car.  You become justifiably convinced that doing anything less is a hazard to your well-being.

And so it comes to pass that hordes of developers storm the castle with torches and pitchforks, demanding a rewrite.  What do we want?  A rewrite!  When do we want it?  Now!

At first, management tries to ignore them, but after a while that’s not possible.  The next step is usually bribery — bringing in ping pong tables or having a bunch of morale-building company lunches.  If the carrot doesn’t work, sometimes the stick is trotted out and developers are ordered to stop complaining about the code.  But, sooner or later, as milestones slip further and further and the defect count starts to mount, management gives in.  If the problem doesn’t go away on its own, and neither carrots nor sticks seem to work, there’s no choice, right?  And, after all, aren’t you just trusting the experts and shouldn’t you, maybe, have been doing that all along?

There’s just one nagging problem.  Is there any reason to think the rewrite will turn out better than the current system? Continue reading The Myth of the Software Rewrite

Let’s Build a Metric 4: Science and Experiments

Last time around, I promised a foray into Newtonian mechanics, so I might as well start with that.  This is the equation for gravitational force, one of the 4 fundamental forces of nature.

F = G · (m₁ · m₂) / d²

To put it conversationally, the force of gravity between two objects is the product of the mass of each object, divided by the square of the distance between the objects, multiplied by something called “G”.  Really, I’m not kidding about the last bit.  “G” is the “gravitational constant” and just a placeholder thrown in to make the rest of the math work out.

What Newton figured out was the relationship between the variables at play when it comes to gravitation: the two masses and the distance between them.  The heavier the masses, the more gravitation, but if you started moving the masses apart, the force dropped off precipitously.  He figured out that the force of gravity varied proportionally with the mass of each object and varied inversely with the square of the distance.  As far as Newton was concerned, the law of gravitation, specifically about the Earth, would have been expressed as follows.

F ∝ (m₁ · m₂) / d²

This formula — this expression of proportionality — demonstrates that it is possible to understand relationships via experimentation, without being able to fully express reality in the form of a neat equation that always works out.  Newton stuck a value in there, called the gravitational constant, and called it a day.  Some 70 years or so after Newton died, a man named Henry Cavendish was able to perform an experiment and empirically determine the value of G, resulting in a pretty accurate equation (notwithstanding general relativity).

Code Readability Mechanics

Okay, so what does this have to do with our mission here, to work toward a metric for method readability?  Well, it demonstrates that we can shave off thin slices of this thing for reasoning purposes, without having to go right away for the whole enchilada.  Think of experiments that Newton, had he been the size of a solar system, might have run.

He could have placed two planets a million miles apart, recorded the force between them, increased the distance to 2 and then 3 million miles, and recorded what happened to the force of gravity.  This would have told him nothing apart from the fact that the force of gravity was inversely proportional to the square of the distance.  He could have placed two planets a million miles apart, and then swapped out one planet for others that were half and twice the mass of the original.  This would have told him only that the force was linearly proportional to the mass of the planet he was swapping out.  He then might have swapped a rocky planet for a gas planet of equal mass and observed that that particular variance was irrelevant.

And then, following each experiment, he could have used each piece of learning, in sequence, to build the equation one piece at a time.   It stands to reason that we can, and probably should, do the same thing with the approach to creating a “time to read/comprehend” metric.

So what are some things that would lengthen the time to comprehend a method?  It’s brainstorming time.  I’m going to put some ideas out there, but please feel free to chime in with more in the comments.  For me, it boils down to thinking of the degenerate cases and expanding outward from there.  The simplest method to understand would be a no-op, probably followed by simple getters and setters.  So, thinking inductively, where do we get stuck?

Here are some hypotheses that I have.  These all refer to the gestalt of comprehension.  What I mean is you may be able to find a particular method that serves as a counter-example, but I’m hypothesizing that over a large sample size, these will hold up.

  • The more statements there are in a method, the longer it will take to comprehend.
  • Simple variable assignment has very little effect on time to comprehend.
  • Assignment using arrays and other collection types has more effect than simple assignment.
  • Control flow statements are harder to comprehend than assignments.
  • Compound boolean conditions substantially increase time to comprehend.
  • Naming of helper methods means the difference between extremely large time to comprehend (poorly named helper method) and nearly trivial (well named).
  • Methods that refer to class fields take longer to comprehend than purely functional methods.
  • Collaborators with poorly named methods sharply increase time to comprehend.
  • Collaborators with well named methods are roughly equivalent to assignment and commands.
  • Time to comprehend is dramatically increased by reference to global variables/state.

From this list, we can extract some things that would need to be measured.  Think of Newton with his hypotheses about mass, distance, and gas/rocky; he’d need a way to measure each of those properties so that he could vary them and observe the results.  Same thing here.  Given this list of hypotheses, here are some things that we’d have to be able to observe/count/measure.

  • Count statements in a method.
  • Identify simple assignment.
  • Identify array/collection assignment.
  • Identify and count control flow statements.
  • Count conditions inside of a boolean expression.
  • Distinguish poorly named from well named members (this is probably going to be pretty hard).
  • Identify and count class field references.
  • Identify methods that refer to no class state.
  • Identify method references to global state.
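
As a first taste of how some of these might be observed, here is a rough CQLinq sketch that pulls a few raw ingredients NDepend already tracks: line counts, cyclomatic complexity (a crude stand-in for control flow and compound conditions), and local variable counts.  The metric property names are from my memory of the API, so treat it as a sketch rather than gospel.

```csharp
// Rough sketch: per-method raw numbers that map onto a few of the
// measurements listed above.  Property names are worth double-checking.
from m in Application.Methods
where m.NbLinesOfCode > 0
orderby m.CyclomaticComplexity descending
select new { m, m.NbLinesOfCode, m.CyclomaticComplexity, m.NbVariables }
```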

Experimentation

There’s been a very science-y theme to this post.  I started off with Newtonian mechanics and then formed some hypotheses about what makes code take a long time to comprehend.  From those hypotheses, I extracted things that would need to be observed and measured to start trying to confirm them.  So, in accordance with the scientific method, the next thing to do is to start running some experiments.  In the next post, I’m going to show you how to use NDepend to actually make the observations I’ve outlined.

In parallel with that, I’d like to invite you to sign up to help me with running experiments in time to comprehend.  I don’t mind using myself as the guinea pig for these experiments, but the more data, the better the result.  As this series goes on, it’d be great if you could help by supplying your time to comprehend for some methods. Click below if you’re interested in signing up.




Join the Experiment



The landing page explains in more detail, but participation is pretty low impact.  We’ll periodically send out code to read, and you just read it and record how long it took you to understand it.  So, if you’re interested and you’re up for reading a little code, please join me!

 

< Let’s Build a Metric 3: Compositeness

Let’s Build a Metric 5: Flavors of Lines of Code >

Managing to Avoid Cobras

Incentives are a funny thing.  Done well, they can spur your team to productivity and career-advancing, win-win situations.  Done not so well, they can create havoc.
At this point, I could offer up a dry explanation of the “law of unintended consequences,” but I’ll let Wikipedia do that.  Instead, I’ll tell you a quick story about something that came to be known as the “cobra effect.”  Snakes are clearly more interesting than laws.

In colonial India, the Brits in charge didn’t like the large number of venomous snakes that they encountered. So they did something that seems eminently sensible at first blush: they offered a bounty to locals for dead cobras.  At first, things went well.  Locals killed cobras, locals got their rewards, and the governing Brits saw a dip in cobra encounters.  But then the cobra encounters started to creep back up, even as the number of dead cobras turned in continued to climb.

Turns out, the locals had started breeding cobras specifically to turn them in and collect the reward.  Once the British government caught wind of this, they (predictably) terminated the reward program, and the erstwhile cobra breeders simply released the now-useless snakes. So the net result of this program turned out to be an increase in the cobra population.  Ouch.

Beware of Cobras

When I tell you to beware of cobras, I’m not talking about looking out for snakes (though, clearly, if you live in an area with cobras, you should probably look out for them).  I’m talking about keeping an eye out for bad incentives.  For instance, if you’re managing a software department, do a thought exercise as to what might happen if you agreed to pay testers $5 extra for each bug they found and developers $5 extra for each bug they fixed.  At first, that probably seems like a good way to get them all to work a little extra and produce good results.  But, channeling the cobra example, can you spot what might go wrong? Continue reading Managing to Avoid Cobras

A Test Coverage Primer for Managers

Managing Blind

Let me see if I can get in your head a little bit.  You manage a team of developers and it goes well at times.  But, at other times, not so much.  Deadlines slide past without a deliverable, weird things happen in production and you sometimes get deja vu when a bug comes in that you could swear the team fixed 3 months ago.

What do you do?  It’s a maddening problem because, even though you may once have been technical, you can’t really dive into the code and see what’s going on.  Are there systemic problems with the code base, or are you just experiencing normal growing pains?  Are the developers in your group painting an accurate picture of what’s going on?  Are they all writing good code?  Are they writing decent code?  Are any of them writing decent code?  Can you rely on them to tell you?

As I said, it’s maddening.  And it’s hard.  It’s like coaching a sports team where you’re not allowed to watch the game.  All you can do is rely on what the players tell you is going on in the game.

A Light in the Darkness

And then, you light upon a piece of salvation: automated unit tests.  They’re perfect because, as you’ll learn from modern boilerplate, they’ll help you guard against regressions, prevent field defects, keep your code clean and modular, and plenty more.  You’ve got to get your team to start writing tests and start writing them now.

But you weren’t born yesterday.  Just writing tests isn’t sufficient.  The tests have to be good.  And, so you light upon another piece of salvation: measuring code coverage.  This way, not only do you know that developers are writing tests, but you know that you’re covered.  Literally.  If 95% of your code base is covered, that probably means that you’re, like, 95% good, right?  Okay, you realize it’s maybe not quite that simple, but, still, it’s a nice, comforting figure.

Conversely, if you’re down at, say, 20% coverage, that’s an alarming figure.  That means that 80% of your code is not covered.  It’s the Wild West, and who knows what’s going on out there?  Right?  So the answer becomes clear.  Task a team member with instrumenting your build to measure automated test coverage and then dump it to a readout somewhere that you can monitor.

Time to call the code quality issue solved (or at least addressed), and move on to other matters.  Er, right?

Fools’ Gold?

I’ve managed software developers in the past, so I understand the disconcerting feeling that comes from not really being able to tell what’s happening under the covers.  These days, I’m living a little closer to the keyboard, and my living comes largely from coaching developers, and doing IT management consulting and gap analysis.  And a common gap is managers thinking that test coverage correlates a lot more strongly with well written, well tested code than it actually does.

Don’t get me wrong.  In a nice, clean code base, you’re likely to see extremely high test coverage figures.  But many managers get the cause and effect relationship backward.  Really well crafted code causes high coverage.  High coverage does not necessarily cause really well crafted code.

To understand the disconnect, consider the relatively common situation of purchasing a home.  In the USA, part of this process is a (generally) mandatory home inspection in which a licensed home inspector examines the house in detail, making note of any deficiencies, problems, dangers, etc, and giving a general report on what kind of shape the house is in.

Now, you’re no carpenter, and you certainly don’t understand what all the home inspector is doing.  But, you want to know if he’s doing a good job.  So, you introduce a metric.  You make note of how many rooms in the house he enters during his inspection.  If he doesn’t cover at least 90% of the rooms, he’s not doing great work.  If he only covers 20%, he’s terrible.

But the same problem arises.  The fact that he wandered through every room in the house does not mean he did a good inspection.  If he did a thorough inspection, the result will be that he covered a high percentage of rooms.  If he covered a high percentage of rooms, he may or may not have done a thorough inspection.

There Are No Shortcuts

So, what if you timed the inspector?  And, what if you introduced another metric in addition to code coverage to make it harder to game or to be wrong?  Well, that might help a bit, but there are no easy answers, really.  If you want to know whether the inspector does good work, hire another inspector or two, and triangulate.  If you want to know whether a test suite is good or not, you need to have someone with experience do a lot more digging and inspection of the code.  I know you don’t want to hear this because you’re picturing costs, but knowledge isn’t cheap.

I assess code bases a lot, and when looking at test suites, here are some questions I ask (and then answer):

  • What is the average number of asserts per method?  Way less than 1 means you might have useless tests.  Way more than 1 means you might have convoluted, brittle tests.
  • What percentage of test methods have a number of asserts other than 1?  This might indicate a flawed approach to testing.
  • What is the average size of a test method?  Large test methods tend to indicate brittle tests that people will abandon.
  • Do tests often refer to static/global state?  This may indicate an unreliable test suite.
  • Do tests have a lot of duplication?  This may indicate poorly written tests or even a poor design of production code.
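
To give a flavor of how the first of those questions can translate into something a tool can answer, here is a hypothetical CQLinq sketch.  It assumes test methods live in assemblies with “Test” in the name and that assertions are calls to methods whose names start with “Assert”; the property names are from my memory of the API.

```csharp
// Hypothetical sketch: a rough assert count per test method.
// Assumes tests live in assemblies named like "*Test*" and assert via
// methods whose simple names start with "Assert".
from m in Application.Methods
where m.ParentAssembly.Name.Contains("Test")
let asserts = m.MethodsCalled.Count(called => called.SimpleName.StartsWith("Assert"))
select new { m, asserts, m.NbLinesOfCode }
```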

I could go on, but you get the idea.  An important takeaway is that I say “may” and “might” in all of my explanations because, you know what?  Life isn’t simple.  These things may exist in your test code, and there may be a perfectly reasonable explanation.  Just as test coverage may or may not be indicative of code quality.

Detecting whether things are on the right track or not isn’t easy.  But the more metrics you have, and the more you use them as triggers for further investigation instead of quality gates, the better equipped you and your group will be.

Let’s Build a Metric 3: Compositeness

Last time, I talked through a little bit of housekeeping on the way to creating a metric that would be, uniquely, ours.  Never mind the fact that, under the hood, the metric is still lines of code.  It now has a promising name and is expressed in the units we want.  And, I think that’s okay.  There is a lot of functionality and ground to cover in NDepend, so a steady, measurable pace makes sense.

It’s now time to start thinking about the nature of the metric to be created here, which is essentially a measure of time.  That’s pretty ambitious because it contains two components: defining a composite metric (recall that this is a mathematical transform on measured properties) and then tying it to an observed outcome via experimentation.  In this series, I’m not assuming that anyone reading has much advanced knowledge about static analysis and metrics, so let’s get you to the point where you grok a composite metric.  We’ll tackle the experimentation a little later.

A Look at a Composite Metric

I could walk you through creating a second query under the “My Metrics” group that we created, but I also want this to be an opportunity to explore NDepend’s rich feature set.  So instead of that, navigate to NDepend->Metric->Code Quality->Types with Poor Cohesion.

[Screenshot: navigating to the Types with Poor Cohesion query in the NDepend menu]

When you do that, you’re going to see a metric much more complicated than the one we defined in the “Queries and Rules Editor” window.  Here’s the code for it, comments and all.
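
(Reproduced here from memory of the out-of-the-box rule, so the comments and exact thresholds in your copy may differ a bit.)

```csharp
// <Name>Types with Poor Cohesion</Name>
warnif count > 0 from t in JustMyCode.Types where
  (t.LCOM > 0.8 || t.LCOMHS > 0.95) &&
  t.NbFields > 10 &&
  t.NbMethods > 10

  // Types with LCOM > 0.8 or LCOMHS > 0.95 might be problematic,
  // though such non-cohesive types can be hard to avoid entirely.

select new { t, t.LCOM, t.LCOMHS, t.NbMethods, t.NbFields }
```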

There’s a good bit to process here.  The CQLinq code inspects types, meaning any class or struct in your code base (well, okay, in my code base), provides data on each one, and raises a warning for anything that matches.  And, what does matching mean?  Well, looking at the compound conditional statement, a type matches if it has “LCOM” greater than .8 or “LCOMHS” greater than .95, and it also has more than 10 fields and 10 methods.  So, to recap, poor cohesion means that there are a good number of fields, a good number of methods, and… something… for these acronyms.

LCOM stands for “Lack [of] Cohesion of Methods.”  If you look up cohesion in the dictionary, you’ll find the second definition particularly suited for this conversation: “cohering or tending to cohere; well-integrated; unified.”  We’d say that a type is “cohesive” if it is unified in purpose or, to borrow from the annals of clean code, if it conforms to the single responsibility principle.  To get a little more concrete, consider an extremely cohesive class.
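
Here’s a minimal sketch of such a class.  The member names are my own invention, but the shape is what matters: one field, three methods, every method touching that field.

```csharp
using System.Collections.Generic;

public class Recipe
{
    private readonly List<string> _ingredients = new List<string>();

    // Every method below reads or writes the single _ingredients field.
    public void AddIngredient(string ingredient)
    {
        _ingredients.Add(ingredient);
    }

    public bool ContainsIngredient(string ingredient)
    {
        return _ingredients.Contains(ingredient);
    }

    public int CountIngredients()
    {
        return _ingredients.Count;
    }
}
```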

This class is extremely cohesive.  It has one field and three methods, and every method in the class operates on the field.  Type cohesion might be described as “how close do you get to every method operating on every field?”

Now, here’s the crux of our challenge in defining a composite metric: how do you take that anecdotal, qualitative description, and put a number to it?  How do you get from “wow, Recipe is pretty cohesive” to 0.0?

Well, this is where the mathematical transform part comes in.  Here is how NDepend calculates Lack of Cohesion of Methods (LCOM).

  • LCOM = 1 – (SUM(MF)/(M*F))

Where:

  • M is the number of methods in the class (both static and instance methods are counted, including constructors, property getters/setters, and event add/remove methods).
  • F is the number of instance fields in the class.
  • MF is the number of methods of the class accessing a particular instance field.
  • Sum(MF) is the sum of MF over all instance fields of the class.

Quantifying the Qualitative

Whoa.  Okay, let’s walk before we run.  It’ll be helpful to work backward from an already proposed formula.  What do we know by looking at this?

Well, at the very highest level, we’re talking about fields and methods in the class, and how they interact.  It’s easy enough to count the number of methods and fields — so far, so good.  MF is, for a given field, the number of methods in the class that access that field, which means that Sum(MF) is the aggregate over all instance fields.  Sum(MF) will therefore be less than or equal to M*F.  They’re only equal in the case where every method accesses every field.

Thus the term SUM(MF)/(M*F) will range from 0 to 1, which means that the value of this metric ranges from 1 to 0.  1 is thus a perfectly non-cohesive class and 0 is a perfectly cohesive class.  Notice that I described Recipe as “0.0”?  If you run this metric on that class, you’ll see that it scores 0 for a “perfect cohesion score.”  And so, the goal here becomes obvious.  The creator of this metric wanted to come up with a way to describe cohesion normalized between 0 and 1 with a concept of bounded, “perfect” endpoints.
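
To make that concrete with the Recipe sketch from earlier: M = 3, F = 1, and every method touches the lone field, so Sum(MF) = 3 and LCOM = 1 – (3 / (3 × 1)) = 0, a perfect score.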

And this is the essence of compositeness, though to build such a metric, you create the transform rather than working backward to deduce the reasoning.  You start out with a qualitative evaluation and then think about a hypothesis for how you want to represent that data.  Is there a minimum?  A maximum?  Should the curve between them be linear?  Exponential?  Logarithmic?

It’s not a trivial line of thinking, by any stretch.  As it turns out, there isn’t even agreement, per se, on the best way to describe type cohesion.  You’ll notice that NDepend supports a secondary metric (that I intentionally omitted from the formula definition above for simplicity) for cohesion called LCOM HS, which stands for the “Henderson Sellers Lack of Cohesion of Methods.”  This is a slightly different algorithm for computing cohesion.  And man, if experts in the field can’t agree on the ideal metric, you can see that this is a tall order.  But hey, I didn’t say it’d be easy — just fun.

So, having seen a little bit more of NDepend and established a foundation for understanding a bit of the theory behind composite code metrics, I’ll leave off until next time, when I’ll dig a bit into how we can start reasoning about our own composite metric for “time to understand a method.”  Stay tuned because that will get interesting — I’ll go through a little bit more of NDepend and even get into Newtonian Mechanics a little.  But not too far.  I promise.

< Let’s Build a Metric 2: Getting the Units Right

Let’s Build a Metric 4: Science and Experiments >

Let’s Build a Metric 2: Getting the Units Right

Last time in this series, there was definitely some housekeeping to accomplish.  I laid the groundwork for the series by talking a bit of theory — what are code metrics and what is static analysis?  I also knocked out some more practical logistics such as taking you through how to get the code base I’m using as well as how to create and attach an NDepend project to it.  The culmination was the creation of our very own, tiny, and relatively useless code metric (useless since NDepend already provides the metric we ‘created’).

So, seems like a good next thing to do is make the metric useful, but there may be an interim step in there of at least making it original.  Let’s do that before we get to “useful.”  But even before that, I’m going to do a bit of housekeeping and tell you about the NDepend feature of managing the queries and rules in the Queries and Rules Explorer.

Group Housekeeping

Last time around, I created the “My Rules” group under “Samples of Custom rules.”  This was nice for illustrative purposes, but it’s not where I actually want it.  I want to bring it up to the same level as that group, right after “Defining JustMyCode”.  To do that, you can simply drag it.  Dragging it will result in a black bar indicating where it will wind up:

 

[Screenshot: dragging the group, with a black bar indicating where it will land]

 

 

Once I’ve finished, it’ll be parked where I want it, as shown below.  In the future, you can use this to move your groups around as you please.  It’s pretty intuitive.

 

[Screenshot: the group in its new location]

Now, you may have noticed that this group is called “My Rules,” which is a little awkward, since we’re actually defining a query.  We’re ultimately going to be trying to figure out how long it takes to comprehend a method, and that’s a query.  A rule would be “it should take less than 30 seconds to comprehend a method.”  We may play with making rules around this metric later, but for now, let’s give the group a more appropriate name.  NDepend is integrated with the expected keyboard shortcuts, so you can fire away with the “rename” shortcut, which is F2.  Do that while “My Rules” is highlighted, and you’ll get the standard rename behavior.

 

[Screenshot: the group in rename mode after pressing F2]

 

Let’s call it “My Metrics.”

 

[Screenshot: the group renamed to “My Metrics”]

Getting Units Right for Our Metric

Now, with various bits of housekeeping out of the way, let’s change the metric “Lines of Code” to be something else — something original.  Open it in the Queries and Rules Editor, and change its name by changing the XML doc comment above the query within the “Name” tag.  Let’s call it “Seconds to Comprehend a Method.”  It’s important in these queries to specify the units in the name of the metric so that it’s clear upon inspection what, exactly, is being measured.  The concept of units does not exist, per se, in NDepend so queries out of the box have titles like “# Source Files” and “# IL Instructions” to indicate that the unit is a count.
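
In the editor, that comment ends up looking something like this:

```csharp
// <Name>Seconds to Comprehend a Method</Name>
```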

[Screenshot: the query renamed to “Seconds to Comprehend a Method”]

 

Now the title is looking good, but there’s a problem in the actual display of the metric on the side.  Take a look:

 

[Screenshot: the query results, with the column still headed “# lines of code (LOC)”]

 

It seems like a metric called “seconds to comprehend a method” probably shouldn’t internally report in terms of “# of lines of code (LOC)”.  We probably want this to be “seconds.”  The way to accomplish this is to take advantage of the fact that, under the covers, this query uses the C# construct of the “anonymous type.”

In C#, an anonymous type is one that is defined even as it is instantiated, having its property names, types, and ordering declared in-situ.  In an actual C# method, we might do something like:
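
```csharp
// Purely illustrative; the names and values here are invented.  The compiler
// infers an anonymous type with Name and LinesOfCode properties from the
// initializer, and those names are how you refer to the values afterward.
var summary = new { Name = "MovePiece", LinesOfCode = 12 };
Console.WriteLine($"{summary.Name}: {summary.LinesOfCode} LOC");
```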

That’s not how things work in NDepend, however.  For these queries and rules, we aren’t supplying declarations, per se, but rather returning an expression that the engine knows how to turn into graphical information at the bottom.  In our example, currently, we’re returning an anonymous type consisting of the method itself as the first property, followed by its name and then its number of lines of code.  This is translated in graphical format to what you see underneath the query editor automatically, with the values displayed as data and the property names as the headers.  So, to change the header, we need to change the property name.  Change this:
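
```csharp
// The select clause as we left it (yours may differ slightly); the last
// property name, NbLinesOfCode, is what shows up as the column header.
select new { m, m.Name, m.NbLinesOfCode }
```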

to this:
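
```csharp
// Renaming the property (to "Seconds" here) renames the column header.
select new { m, m.Name, Seconds = m.NbLinesOfCode }
```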

and observe the new display results.

[Screenshot: the query results with the new column header]

Fluent versus Query Linq

Finally, there’s one more thing that I’m going to do here before we wrap up this post.  My strong preference over the years has been to use the fluent LINQ syntax, rather than the query syntax, and based on what I see of the code written by others, I’m not alone.  So, let’s do a conversion here before we get too much further.
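
As a refresher, here is roughly where the query stands in query syntax (assuming the select clause we arrived at above):

```csharp
// The query as it stands, in query (comprehension) syntax.
from m in Methods
select new { m, m.Name, Seconds = m.NbLinesOfCode }
```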

This code can be expressed, equivalently, as
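
```csharp
// The same query in fluent (method-chain) syntax; property names as above.
Methods.Select(m => new { m, m.Name, Seconds = m.NbLinesOfCode })
```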

Wrapping Up

So far, we’re taking it a little slow on making progress toward our killer, composite metric, but that’s somewhat by design.  These posts split purpose between building a cool metric, and walking you through NDepend in some detail.  In terms of the former goal, we went from having a metric that was just a repeat of one already supplied out of the box to one that is our own — at least at a casual inspection.  Next time around, we’ll actually start doing something that doesn’t just assume it takes one second to comprehend each line of a method.

From a “getting to know NDepend” point of view, we took a walk through creating, renaming, and moving query/rule groups around, and we started to get into CQLinq a little bit in terms of practical application and theory.  That’s a topic that will be visited more throughout the “Let’s Build a Metric” series, and one that we’ll pick right up with in the next post.

 

< Let’s Build a Metric 1: What’s in a Metric?

 

Let’s Build a Metric 3: Compositeness >

Let’s Build a Metric 1: What’s in a Metric?

A while back, I made a post on my blog announcing the commencement of a series on building a better, composite code metric.  Well, it’s time to get started!

But before I dive in headfirst, I think it’s important to set the scene with some logistics and some supporting theory.  Logistically, let’s get specific about the techs and code that I’m going to be using in this series.  To follow along, get yourself a copy of NDepend v6, which you can download here.  You can try to follow along if you have an older version of the tool, but caveat emptor, as I’ll be using the latest bits.  Secondly, I’m going to use a codebase of mine as a guinea pig for this development.  This codebase is Chess TDD on GitHub and it’s what I use for my Chess TDD series on my blog and YouTube.  This gives us a controlled codebase, but one that is both active and non-trivial.

What are Static Analysis and Code Metrics, Anyway?

Now, onto the supporting theory.  Before we can build meaningful code metrics, it’s important to understand what static analysis is and what code metrics are.  Static analysis comes in many shapes and sizes.  When you simply inspect your code and reason about what it will do, you are performing static analysis.  When you submit your code to a peer to have her review, she does the same thing.  

Like you and your peer, compilers perform static analysis on your code, though, obviously, they do so in an automated fashion.  They check the code for syntax errors or linking errors that would guarantee failures, and they will also provide warnings about potential problems such as unreachable code or assignment instead of evaluation.  Products also exist that will check your source code for certain characteristics and stylistic guideline conformance rather than worrying about what happens at runtime.  In managed languages, products exist that will analyze your compiled IL or byte code and check for certain characteristics.   The common thread here is that all of these examples of static analysis involve analyzing your code without actually executing it.

The byproduct of this sort of analysis is generally some code metrics.  MSDN defines code metrics as, “a set of software measures that provide developers better insight into the code they are developing.”  I would add to that definition that code metrics are objective, observable properties of code.  The simplest example on that page is “lines of code” which is just, literally, how many lines of code there are in a given method or class.  Automated analysis tools are ideally suited for providing metrics quickly to developers.  NDepend provides a lot of them, as you can see here.

It’s important to make one last distinction before we move on: simple versus composite metrics.  An example of a simple metric is the aforementioned lines of code.  Look at a method, count the lines of code, and you’ve got the metric’s value.  A composite metric, on the other hand, is a higher level metric that you obtain by performing some kind of mathematical transform on one or more simple metrics.  As a straightforward example, let’s say that instead of just counting lines of code, you defined a metric that was “lines of code per method parameter.”  This would be a metric about methods in your code base that was computed by taking the number of lines of code and dividing them by the number of parameters to the method.
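
Just to make the idea concrete, here is a hypothetical CQLinq sketch of that metric.  The property names are from my memory of the API, and zero-parameter methods are simply filtered out.

```csharp
// Hypothetical sketch of "lines of code per method parameter".
// Zero-parameter methods are filtered out rather than handled specially.
from m in Application.Methods
where m.NbParameters > 0 && m.NbLinesOfCode > 0
let locPerParameter = (double)m.NbLinesOfCode / m.NbParameters
orderby locPerParameter descending
select new { m, locPerParameter }
```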

Is this metric valuable?  I honestly don’t know — I just made it up.  It’s interesting (though you’d have to do something about the 0 parameter case), but if I told you that it was important, the onus would be on me to prove it.  I mention this because it’s important to understand that composite metrics like Microsoft’s “maintainability index” are, at their core, just mathematical transforms on observable properties that the creator of the metric asserts you should care about.  There is often study and experimentation behind them, but they’re not unassailable measures of quality.

Let’s Look at NDepend Metrics

With the theory and back-story out of the way, let’s roll up our sleeves and get down to business.  You’re going to need NDepend installed now, so if you haven’t done that yet, check out the getting started guide.  Installing NDepend is beyond the scope of this series.

If you’re going to follow along with me, clone the Chess TDD codebase and open that up in Visual Studio.  The first thing we’re going to need to do is attach an NDepend project.  With the code base open, that will be your first option in the NDepend window.  I attached a project, named it “Chess Analysis” and saved the NDepend project file in the root folder of the project, alongside the .sln file.  This allows you to optionally source control it pretty easily and to keep track of it at a high level.

[Screenshot: attaching a new NDepend project to the solution]

Once you’ve created and attached the project, run an analysis.  You can do this by going to the NDepend menu, then selecting Analyze->Run Analysis.

[Screenshot: running an analysis from the NDepend menu]

Now, we’re going to take a look in the queries and rules explorer.  NDepend has a lot of cool features and out of the box functionality around metrics, but let’s dive into something specific right now, for this post.  We’ll get to a lot of the other stuff later in this series.  Navigate in the NDepend menu to Rule->View Explorer Panel.  This will open the queries and rules explorer.  Click the “Create Group” button and create a rule group called “My Rules.”

[Screenshot: creating the “My Rules” group in the Queries and Rules Explorer]

Now, right click on “My Rules” and select “Create Child Query,” which will bring up the queries and rules editor window.  There’s a bit of comment XML at the top, which is what will control the name of the rule as it appears in the explorer window.  Let’s change that to “Lines of Code.”  And, for the actual substance of the query, type:
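
```csharp
// A sketch of the query body: select each method along with its name and
// line count (the exact shape of yours may differ slightly).
from m in Methods
select new { m, m.Name, m.NbLinesOfCode }
```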

It should look like this:

[Screenshot: the finished “Lines of Code” query in the editor]

Congratulations! You’ve just created your first code metric. Never mind the fact that lines of code is a metric almost as old as code itself and the fact that you didn’t actually create it. Kidding aside, there is a victory to be celebrated here. You have now successfully created a code metric and started capturing it.

Next time, we’ll start building an actual, new metric that NDepend isn’t already providing you out of the box. Stay tuned!

 

Let’s Build a Metric 2: Getting the Units Right >

A Software Architect’s Best Friend

To this day, I have a persistent nightmare about my time in college.  It’s always pretty similar.  I wake up and I have a final exam later in the day, but I’m completely unprepared.  And I don’t mean that I’m unprepared in the sense that I didn’t re-read chapter 4 enough times.  I mean that I realize I haven’t attended a single lecture, done a single homework assignment, or really even fully understood what, exactly, the course covers.  For all intents and purposes, I’m walking in to take the final exam for a class I haven’t taken.  The dream creates a feeling that I am deeply and fundamentally unprepared for life.

I’ve never encountered anything since college that has stuck with me quite as profoundly; I don’t dream of being unprepared for board meetings or talks or anything like that.  Perhaps this is because my brain was still developing when I was 20 years old, or something.  I’m not a psychologist, so I don’t know.  But what I do know is that the closest I came to having this feeling was in the role of software architect, responsible for a code base with many contributors.

If you’re an architect, I’m sure you know the feeling.  You’ve noticed junior developers introducing global variables on a few occasions and explained to them the perils of this practice, but do you really know they’re not still doing it?  Sure, you check from time to time, but there’s only one of you and it’s not like you can single-handedly monitor every commit to the code base.  You do like to go on vacation at least every now and then, right?  Or maybe you’ve shown your team a design pattern that will prevent redundancy and standardize what had been a hodgepodge approach.  You’ve monitored the situation for a few commits and noticed that they’re not getting it quite right, but you can live with what they’re doing… as long as it doesn’t get any further off the rails.

You get the point.  After all, you’re living it.  You don’t need me to tell you that it’s stressful being responsible for the health of a code base that’s changing faster than you can track in any level of detail.  If you’re anything like me, you love the role, but sometimes you long for the days when you were making your presence felt by hacking things and cranking out code.  Well, what if I told you there were a way to get back there while hanging onto your architect card… at least, to a degree?

You and I both know it — you’re not going to be spending 8+ hours a day working on feature implementations, but that doesn’t mean that you’re forever consigned to UML and meetings and that your hacking days are completely over.  And I’m not just talking about side projects at night.  It’s entirely appropriate for you to be writing prototyping code and it’s also entirely appropriate for you to write little plugins and scripts around your build that examine your team’s code base.  In fact, I might argue that this latter pursuit should be part of your job.

What do I mean, exactly, by this?  What does it mean to automate “examining your team’s code base?”  You might be familiar with this in its simplest forms, such as a build that fails if test coverage is too low or if there are compiler warnings.  But this concept is extremely extensible.  You don’t need to settle for rudimentary information about your team’s code.

NDepend is a tool that provides you with a lot of information out of the box as well as hooks with which to integrate that information into your team’s build.  You can see architectural concerns at a high level, such as how much you depend on external libraries and how internally coupled your modules are.  You can track more granular concerns such as average method complexity or class cohesion.  And you can automatically generate reports about these things, and more, with each build.  All of that comes built in.

But where things get really fun and really interesting is when you go beyond what comes out of the box and start to customize for your own, specific situation.  NDepend comes with an incredibly powerful feature called CQLinq that lets you ask very custom, specific questions about your code.  You can write a query to see how many new global variables your team is introducing.  You can see if your code base is getting worse, in terms of coupling.  And, you can even spend an afternoon or two putting together a complex CQLinq query to see if your team is implementing the pattern that you prototyped for them.
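
For instance, here is a hypothetical sketch of that global variable query, using mutable static fields as the stand-in for “globals.”  The field property names are from my memory of the CQLinq API.

```csharp
// Hypothetical sketch: flag mutable static fields, the usual .NET stand-in
// for global variables.
warnif count > 0
from f in Application.Fields
where f.IsStatic && !f.IsInitOnly && !f.IsLiteral
select new { f, f.ParentType }
```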

And not only can you see it — you can see it in style.  You can generate a custom report with clear, obvious visuals.  This sort of visualization isn’t just decoration.  It has the power to impact your team’s behavior in a meaningful way.  They can see when their checkins are making more things green and happy and when their checkins are making things red and angry.  They’ll modify their code accordingly and basically anonymously, since this feedback is automated.  They’ll like it because automated feedback won’t feel judgmental to them, and you’ll like it because you’ll know that they’re being funneled toward good architectural decisions even when you aren’t there.  Talk about peace of mind.

You’ll spend a few days getting to know NDepend, a few days tweaking the reports out of the box, and a few days hacking the CQLinq queries to guard your code base according to your standards.  And, from there on in, you’ll enjoy peace of mind and be freed to focus on other things that command your attention.  As an architect there are a thousand demands on your time.  Do yourself a favor and get rid of the ones that you can easily automate with the right tooling.

Why Should Managers Care About Static Analysis?

I’d like to talk a bit today about how, if you’re a dev manager for a team or teams that are responsible for .NET code bases, NDepend will help you lower your blood pressure and sleep better at night.  But first, bear with me as I introduce and explain myself a bit.  It’ll be relevant.  I promise.

There’s been a relatively unique fluidity to my career, particularly in recent years.  The first 10 years or so of my professional career featured somewhat of a standard path from developer to management, with milestone stops along the way earning me designations like “senior” developer, team lead, and architect.  At the end of my salaried career, I had earned myself a CIO role, presiding over all development and IT operations for a company.  And, while I enjoyed the leadership responsibilities that came along with that sort of role, I ultimately decided to go off on my own into consulting, where I’d have more variety and a more fluid set of responsibilities.  These days, I do a combination of development coaching, management consulting, and content creation to earn my living.

The reason I mention all of this is that I’ve had success as a developer, architect, manager, and consultant to all of the above, and that multi-disciplined success has given me unique perspective on the interaction among those roles.  It’s shown me that, above all, what you need to be successful is the ability to trust your dev team.  But trust doesn’t come easily.  You didn’t get to where you are by blindly trusting anyone who tells you a pleasant story.

Here’s your essential conundrum as a manager.  You’re managing a team of intelligent humans that are doing things you don’t understand.  You can protest and claim to understand what they’re doing, but you don’t.  I learned this in my travels as a manager, even managing a team of .NET folks who turned to me to answer their toughest technical questions.  I may have had years and years of professional development on them, but I wasn’t there in the weeds with them, dealing with the challenges that they were, day in and day out.  I had to trust them because keeping my nose in their affairs didn’t scale.  So no matter how well you might understand the surrounding technologies and principles, you don’t understand what they’re doing at any given moment.

There’s a tendency toward entropy that kicks in when you’re managing people whose labor you don’t understand.  A lot of managers in this situation have to fight a panicky impulse to regain control via micromanagement.  “I can’t really tell if they’re making good decisions in their coding, but I can make sure they’re getting in no later than 9, leaving no earlier than 5, writing at least 100 lines of code per day, and keeping unit test coverage above 90%.”  You may laugh, but it’s a natural impulse, particularly in the face of a slipped deadline or low quality release.  The C-suite wants answers, and you’re caught in the middle, scrambling simultaneously to provide them and to figure out how to prevent this in the future.  You didn’t get ahead in your career through passivity, so the impulse is to seize control, even if perhaps you realize it’s a misguided impulse.

NDepend provides help here.  It allows you to keep an eye on metrics that aren’t obtuse if you structure things right.  I cannot overemphasize this caveat.  Renowned systems thinker W. Edwards Deming once explained that, “people with targets and jobs dependent upon meeting them will probably meet the targets – even if they have to destroy the enterprise to do it.”  Translated into our terms for this post, what this means is that if you measure your team’s productivity by them producing 100 lines of code per day, they will produce 100 lines of code per day, even if it means creating unstoppable death stars of terrible code.  So disabuse yourself of the notion that you can use this tool to evaluate your developers’ performance, and understand instead that you can use it to identify potential trouble spots in the code.

Pair with a developer or your team’s architect, and use NDepend’s phenomenal reporting capabilities to create a big, visible report on the state of your team’s code.  Here are some ideas for things that might interest you:

  • Which sections of the code are most heavily depended on, so that I can have a heads up when they are changed?
  • How coupled is a module to the rest of the code base?  So, for instance, if I wanted to swap in a new database solution, how hard would that be?
  • Are the most troublesome areas of the code base getting worse, staying the same, or improving?
  • Is the team’s development practice generally in line with commonly accepted good ideas?
  • What does the architecture of the code actually look like (as opposed to some design document that was written and may be out of date)?
  • Are there currently any code violations that the team has identified as critical mistakes?
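
As a taste of what’s under the hood, the first question in that list might boil down to a CQLinq query along these lines (a sketch, with property names from my memory of the API):

```csharp
// Hypothetical sketch: which types does the rest of the code base lean on most?
from t in Application.Types
orderby t.TypesUsingMe.Count() descending
select new { t, DependedOnBy = t.TypesUsingMe.Count() }
```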

When I was managing teams, I certainly would have been interested in the answers to these questions for strategic planning purposes.  How dependent you are on a database or whether a feature implementation has touched a dicey section of the code is valuable information for making important decisions (maybe it’s not worth considering a database switch and maybe we should add in more testing than usual for the upcoming release).

Not only can NDepend give you this information, but it can be integrated with your build to furnish it in a very clear, easy, and visible way.  Picture it integrated with your team’s build and delivered to your inbox in digest format on a daily basis.  This is possible to do with your build tool — you could work with the developers on your team to design a build customization where you can see if an area of the code is getting worse or how much coupling there is among the modules in the code.

Imagine it.  Instead of wondering whether your developers are actually working and whether they’re making good decisions, you can partner with them to keep you in the loop on important characteristics of the code, and also to keep you out of their hair for the details it’s better for them to worry about.  You get your information and early detection scheme, and they benefit from having an informed manager that asks intelligent questions only when it makes sense to do so.  And for you, that all adds up to better sleep and lower blood pressure.