Improve your .NET code quality with NDepend

Bringing Objectivity to Clean Code

If you want to stir up a pretty serious amount of discussion-churn, wander over to where the software developers sit and ask for a consensus definition of “clean code.”  This probably won’t start a religious war — it’s not like asking for consensus on the best programming language or development tool.  You’ll probably find a lot of enthusiastic agreement with different flavors of the same basic concept.  This is true among luminaries of the field, as quoted here on DZone, and it’s most likely to be true in any given shop.

There will be agreement on the broad concepts and inevitable debate as the points become finer-grained.  Developers can all agree that code should be “maintainable” and “easy to read,” but you might get a bit of fragmentation around subjects like variable naming or the relative compactness and “density” of code.  Have the developers look at a bit of code and ask them if it could be “cleaner,” and you’ll probably get an array of responses, including potential disagreement and thrash.  This becomes especially true if they get hung up on cosmetic particulars like indentation, bracket placement, and casing.

So where does that leave us, exactly, when asked the deceptively simple question, “is this clean code?”  Programmers can arrive at a broad consensus on how to answer that question, but not necessarily on the answer itself.  They’ll all say, “well, it’s clean if it’s readable,” but some might give a particular bit of code a thumbs up while others give it a thumbs down.  If you’re a developer, this can be fun or it can be frustrating.  If you’re a non-technical stakeholder, such as a director, project manager, tester, or business analyst, it can be confusing and maddening.  “So is this code good or not!?”

The Myth of the Software Rewrite

“We can’t go on like this.  We need to rewrite this thing from scratch.”

The Writing is on the Wall

These words infuriate CIOs and terrify managers and directors of software engineering.  They’re uttered haltingly, reluctantly, by architects and team leads.  The developers working on the projects on a day-to-day basis, however, often make these statements emphatically and heatedly.

All of these positions are understandable.  The CIO views a standing code base as an asset with sunk cost, much the way that you’d view a car that you’ve paid off.  It’s not pretty, but it gets the job done.  So you don’t want to hear a mechanic telling you that it’s totaled and that you need to spend a lot of money on a new one.  Managers reporting to these CIOs are afraid of being that mechanic and delivering the bad news.

Those are folks whose lives are meetings, PowerPoint decks, and spreadsheets, though.  If you’re a developer, you live the day-to-day reality of your code base.  And, to soldier on with the metaphor a bit, it’s pretty awful if your day-to-day reality is driving around a clunker that leaves car parts on the road after every pothole.  You don’t just start to daydream about how nice it would be to ride around in a reliable new car.  You become justifiably convinced that doing anything less is a hazard to your well-being.

And so it comes to pass that hordes of developers storm the castle with torches and pitchforks, demanding a rewrite.  What do we want?  A rewrite!  When do we want it?  Now!

At first, management tries to ignore them, but after a while that’s not possible.  The next step is usually bribery — bringing in ping pong tables or having a bunch of morale-building company lunches.  If the carrot doesn’t work, sometimes the stick is trotted out and developers are ordered to stop complaining about the code.  But, sooner or later, as milestones slip further and further and the defect count starts to mount, management gives in.  If the problem doesn’t go away on its own, and neither carrots nor sticks seem to work, there’s no choice, right?  And, after all, aren’t you just trusting the experts and shouldn’t you, maybe, have been doing that all along?

There’s just one nagging problem.  Is there any reason to think the rewrite will turn out better than the current system?

Let’s Build a Metric 4: Science and Experiments

Last time around, I promised a foray into Newtonian mechanics, so I might as well start with that.  This is the equation for gravitational force, one of the four fundamental forces of nature.

F = G * (m1 * m2) / d^2
To put it conversationally, the force of gravity between two objects is the product of the mass of each object, divided by the square of the distance between the objects, multiplied by something called “G.”  Really, I’m not kidding about the last bit.  “G” is the “gravitational constant,” just a placeholder thrown in to make the rest of the math work out.
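To get a feel for the magnitudes involved, here is a quick sketch that plugs rough, approximate textbook values for the Earth and the Moon into that equation (Python used purely for illustration):

```python
# Plugging rough values for the Earth-Moon system into F = G * (m1 * m2) / d^2.
G = 6.674e-11        # gravitational constant, N*m^2/kg^2
m_earth = 5.972e24   # mass of the Earth, kg (approximate)
m_moon = 7.348e22    # mass of the Moon, kg (approximate)
d = 3.844e8          # mean Earth-Moon distance, m (approximate)

force = G * m_earth * m_moon / d ** 2
print(f"{force:.2e} N")  # on the order of 2e20 newtons
```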

What Newton figured out was the relationship between the variables at play when it comes to gravitation: the two masses and the distance between them.  The heavier the masses, the more gravitation, but if you started moving the masses apart, the force dropped off precipitously.  He figured out that the force of gravity varied proportionally with the mass of each object and varied inversely with the square of the distance.  As far as Newton was concerned, the law of gravitation, specifically about the Earth, would have been expressed as follows.

F ∝ (m1 * m2) / d^2
This formula — this expression of proportionality — demonstrates that it is possible to understand relationships via experimentation, without being able to fully express reality in the form of a neat equation that always works out.  Newton stuck a value in there, called the gravitational constant, and called it a day.  Some 70 years after Newton died, a man named Henry Cavendish was able to perform an experiment and empirically determine the value of G, resulting in a pretty accurate equation (notwithstanding general relativity).

Code Readability Mechanics

Okay, so what does this have to do with our mission here, to work toward a metric for method readability?  Well, it demonstrates that we can shave off thin slices of this thing for reasoning purposes, without having to go right away for the whole enchilada.  Think of experiments that Newton, had he been the size of a solar system, might have run.

He could have placed two planets a million miles apart, recorded the force between them, increased the distance to 2 and then 3 million miles, and recorded what happened to the force of gravity.  This would have told him nothing apart from the fact that the force of gravity was inversely proportional to the square of the distance.  He could have placed two planets a million miles apart, and then swapped out one planet for others with half and twice the mass of the original.  This would have told him only that the force was linearly proportional to the mass of the planet he was swapping out.  He then might have swapped a rocky planet for a gas planet of equal mass and observed that that particular variance was irrelevant.

And then, following each experiment, he could have used each piece of learning, in sequence, to build the equation one piece at a time.   It stands to reason that we can, and probably should, do the same thing with the approach to creating a “time to read/comprehend” metric.
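Those solar-system-sized experiments translate directly into code.  Here's a sketch (Python for illustration, with made-up masses and distances) that runs the first two experiments against Newton's law:

```python
G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def gravity(m1, m2, d):
    """Newton's law of gravitation: F = G * m1 * m2 / d^2."""
    return G * m1 * m2 / d ** 2

baseline = gravity(1e24, 1e24, 1e6)

# Experiment 1: double the distance -> the force drops to a quarter.
assert abs(gravity(1e24, 1e24, 2e6) - baseline / 4) < 1e-9 * baseline

# Experiment 2: double one mass -> the force doubles.
assert abs(gravity(2e24, 1e24, 1e6) - 2 * baseline) < 1e-9 * baseline
```

Each assertion isolates one variable, holding the others fixed, which is exactly the piece-by-piece construction described above.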

So what are some things that would lengthen the time to comprehend a method?  It’s brainstorming time.  I’m going to put some ideas out there, but please feel free to chime in with more in the comments.  For me, it boils down to thinking of the degenerate cases and expanding outward from there.  The simplest method to understand would be a no-op, probably followed by simple getters and setters.  So, thinking inductively, where do we get stuck?

Here are some hypotheses that I have.  These all refer to the gestalt of comprehension.  What I mean is you may be able to find a particular method that serves as a counter-example, but I’m hypothesizing that over a large sample size, these will hold up.

  • The more statements there are in a method, the longer it will take to comprehend.
  • Simple variable assignment has very little effect on time to comprehend.
  • Assignment using arrays and other collection types has more effect than simple assignment.
  • Control flow statements are harder to comprehend than assignments.
  • Compound boolean conditions substantially increase time to comprehend.
  • The naming of helper methods makes the difference between an extremely large time to comprehend (poorly named) and a nearly trivial one (well named).
  • Understanding methods that refer to class fields takes longer than understanding purely functional methods.
  • Collaborators with poorly named methods sharply increase time to comprehend.
  • Collaborators with well named methods are roughly equivalent to assignment and commands.
  • Time to comprehend is dramatically increased by reference to global variables/state.
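To make a couple of those hypotheses concrete, here is a pair of behaviorally identical snippets (Python for illustration; all names invented).  The first crams a compound boolean condition and cryptic names into one expression; the second extracts a well named helper:

```python
# Hypothetically hard to comprehend: cryptic names and a compound
# boolean condition crammed into a single expression.
def chk(d, t):
    return [x for x in d if x[0] > 0 and x[0] < t and x[1] != "void"]

# Behaviorally identical, but built from a well named helper.  Per the
# hypotheses above, this version should take far less time to comprehend.
def is_billable(order, limit):
    price, status = order
    return 0 < price < limit and status != "void"

def billable_orders(orders, limit):
    return [order for order in orders if is_billable(order, limit)]
```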

From this list, we can extract some things that would need to be measured.  Think of Newton with his hypotheses about mass, distance, and gas/rocky; he’d need a way to measure each of those properties so that he could vary them and observe the results.  Same thing here.  Given this list of hypotheses, here are some things that we’d have to be able to observe/count/measure.

  • Count statements in a method.
  • Identify simple assignment.
  • Identify array/collection assignment.
  • Identify and count control flow statements.
  • Count conditions inside of a boolean expression.
  • Distinguish poorly named from well named members (this is probably going to be pretty hard).
  • Identify and count class field references.
  • Identify methods that refer to no class state.
  • Identify method references to global state.
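Most of those counts fall out of a syntax tree.  This series will take the measurements over .NET code with NDepend, but as a language-agnostic sketch, Python's standard ast module can already perform the first few (the sample method is invented):

```python
import ast

# An invented sample method to measure.
SAMPLE = '''
def total(prices, threshold):
    result = 0
    for p in prices:
        if p > 0 and p < threshold:
            result += p
    return result
'''

func = ast.parse(SAMPLE).body[0]

# Count the statements inside the method (excluding the def itself).
statement_count = sum(
    isinstance(n, ast.stmt) and n is not func for n in ast.walk(func))

# Identify and count control flow statements: branches and loops.
control_flow_count = sum(
    isinstance(n, (ast.If, ast.For, ast.While)) for n in ast.walk(func))

# Count the conditions inside boolean expressions (operands of and/or).
bool_condition_count = sum(
    len(n.values) for n in ast.walk(func) if isinstance(n, ast.BoolOp))

print(statement_count, control_flow_count, bool_condition_count)  # 5 2 2
```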


There’s been a very science-y theme to this post.  I started off with Newtonian mechanics and then formed some hypotheses about what makes code take a long time to comprehend.  From those hypotheses, I extracted things that would need to be observed and measured to start trying to confirm them.  So, in accordance with the scientific method, the next thing to do is to start running some experiments.  In the next post, I’m going to show you how to use NDepend to actually make the observations I’ve outlined.

In parallel with that, I’d like to invite you to sign up to help me with running experiments in time to comprehend.  I don’t mind using myself as the guinea pig for these experiments, but the more data, the better the result.  As this series goes on, it’d be great if you could help by supplying your time to comprehend for some methods. Click below if you’re interested in signing up.

Join the Experiment

The landing page explains in more detail, but participation is pretty low impact.  We’ll periodically send out code to read, and you just read it and record how long it took you to understand it.  So, if you’re interested and you’re up for reading a little code, please join me!



Managing to Avoid Cobras

Incentives are a funny thing.  Done well, they can spur your team to productivity and career-advancing, win-win situations.  Done not so well, they can create havoc.

At this point, I could offer up a dry explanation of the “law of unintended consequences,” but I’ll let Wikipedia do that.  Instead, I’ll tell you a quick story about something that came to be known as the “cobra effect.”  Snakes are clearly more interesting than laws.

In colonial India, the Brits in charge didn’t like the large number of venomous snakes that they encountered.  So they did something that seems eminently sensible at first blush: they offered a bounty to locals for dead cobras.  At first, things went well.  Locals killed cobras, locals got their rewards, and the governing Brits saw a dip in cobra encounters.  But then the cobra encounters started to creep back up, even as the number of dead cobras turned in continued to climb.

Turns out, the locals had started breeding cobras specifically to turn them in and collect the reward.  Once the British government caught wind of this, they (predictably) terminated the reward program, and the erstwhile cobra breeders simply released the now-useless snakes.  So the net result of the program turned out to be an increase in the cobra population.  Ouch.

Beware of Cobras

When I tell you to beware of cobras, I’m not talking about looking out for snakes (though, clearly, if you live in an area with cobras, you should probably look out for them).  I’m talking about keeping an eye out for bad incentives.  For instance, if you’re managing a software department, do a thought exercise as to what might happen if you agreed to pay testers $5 extra for each bug they found and developers $5 extra for each bug they fixed.  At first, that probably seems like a good way to get them all to work a little extra and produce good results.  But, channeling the cobra example, can you spot what might go wrong?