
Keep Your Codebase Fit with Trend Metrics

A while back, I wrote a post about the importance of trends when discussing code metrics.  Metrics have an impact when teams are first exposed to them, but that tends to fade with time.  Context and trend monitoring create and sustain a sense of urgency.

To understand what I mean, imagine a person aware that he has put on some weight over the years.  One day, he steps on a scale and realizes that he’s much heavier than previously thought.  That induces a moment of shock and, no doubt, grand plans for gyms, diets, and lifestyle adjustments.  But, as time passes, his attitude may shift to one in which the new, heavier weight defines his self-conception.  The weight metric loses its impact.

To avoid this, he needs to keep measuring himself.  He may see himself gaining further weight, poking a hole in the illusion that he has evened out.  Or, conversely, he may see that small adjustments have helped him lose weight, and be encouraged to continue with them.  In either case, his ongoing sense of progress, more than the actual weight metric, drives and motivates his behavior.

The same holds true with codebases and keeping them clean.  All too often, I see organizations run some static analysis or linting tool on their codebase and conclude, “it’s bad.”  They resolve only to do a better job in a year or two, when the rewrite starts.  However good or bad any given figure might be, the trend line, not the figure itself, holds the most significance.
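For .NET codebases, this trend line is exactly what NDepend’s trend metrics feature is meant to chart.  A trend metric is simply a CQLinq query that returns a single value per analysis, and NDepend plots those values over time.  Here’s a minimal sketch; the average-method-length metric below is my own illustrative definition, not necessarily one of the built-in metrics:

    // <TrendMetric Name="Avg Lines of Code per Method" Unit="LoC" />
    // Each analysis run yields one data point; the chart of those points
    // over weeks and months is the codebase's version of the weigh-in.
    codeBase.Methods.Average(m => (double?)m.NbLinesOfCode)

Run on every build, a handful of queries like this turns “it’s bad” into “it’s trending better (or worse) at this rate,” which is the conversation that actually changes behavior.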

Continue reading Keep Your Codebase Fit with Trend Metrics

The Biggest Mistake Static Analysis Could Have Prevented

As I’ve probably mentioned before, many of my clients pay me to come do assessments of their codebases, application portfolios, and software practice.  And, as you can no doubt imagine, some of the sturdiest, trustiest tools in my tool chest for this work are various forms of static analysis.

Sometimes I go to client sites by plane, train, or automobile (okay, never by train).  Sometimes I just remote in.  Sometimes I do fancy write-ups.  Sometimes I present my findings with spiffy slide decks.  And sometimes I simply deliver a verbal report without fanfare.  The particulars vary, but what never varies is why I’m there.

Here’s a hint: I’m never there because the client wants to pay my rate to brag about how everything is great with their software.

Where Does It All Go Wrong?

Given what I’m describing here, one might conclude that I’m some sort of code snob who is, at the very least, heavily judging everyone’s code.  And, while I’ll admit that every now and then I think, “The Daily WTF would love this,” mostly I’m not judging at all – just cataloging.  After all, I wasn’t sitting with you during the pre-release death march, nor was I the one thinking, “someone is literally screaming at me, so global variable it is.”

I earnestly tell developers at client sites that I don’t know that I’d have done a lot better walking a mile in their shoes.  What I do know is that I’d have, in my head, a clearer map from “global variable today” to “massive pain tomorrow” and be better able to articulate it to management.  But, on the whole, I’m like a home inspector checking out a home that was rented and subsequently trashed by a rock band; I’m writing up an assessment of the damage and not judging their lifestyle.

But my clients ask me to do more than inspect and catalog – I also have to do root cause analysis and offer suggestions.  So, to return to the home inspector metaphor: “maybe pass a house rule limiting renters to a single bottle of whiskey per night.”  And cataloging it all has led me to become a veritable human encyclopedia of preventable software development mistakes.

I was contemplating some of these mistakes recently and asking myself which was the biggest and which would have been the most preventable with even simple analysis in place.  It was interesting to realize, after a while, that the clear answer was not at all what you’d expect.

Continue reading The Biggest Mistake Static Analysis Could Have Prevented

Improve Your Code Review Game with NDepend

Code review is a subject with which I’m quite familiar.  I’m familiar first as a participant, both reviewing and being reviewed, but it goes deeper than that.  As an IT management consultant, I’ve advised on instituting and refining code review processes, and I write for SmartBear, whose products include Collaborator, a code review tool.  In spite of all this, however, I’ve never written much about the intersection between NDepend and code review.  I’d like to do that today.

I suppose it’s the nature of my own work that has kept this topic from being foremost on my mind.  Over the last couple of years, I’ve done a lot of lone wolf, consultative code assessments for clients.  In essence, I take a codebase and its version history and use NDepend and other tools to perform an extensive analysis.  I also quietly apply some of the same practices to the code I write for example purposes.  But neither of these activities is collaborative, and it’s been a while since I logged much time on a collaborative delivery team.

But my situation being somewhat out of sync with industry norms does not, in any way, alter those norms.  And the norm is that software development is a highly collaborative affair, and that most code review happens in collaborative environments.  NDepend is not just a way for lone wolves or pedants to do deep dives on code.  It really shines in the group setting.

NDepend Can Automate the Easy Stuff out of Code Review

When discussing code review, I’m often tempted to save “automate what you can” for the end, since it’s a powerful point.  On the other hand, it’s perhaps the first thing you should do right out of the gate, so I’ll mention it up front.  After all, automating the easily automated frees humans to focus on things that genuinely require human judgment.

It’s pretty likely that you already have some kind of automation in place for enforcing coding standards.  And, if you don’t, get some.  You should not be wasting time at code review with, “you didn’t put an underscore in front of that field.”  That’s the sort of thing a machine can easily figure out, and that many, many plugins will figure out for you.
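To make that concrete, here is roughly what such a check looks like as a CQLinq rule in NDepend.  This is a minimal sketch of the underscore convention mentioned above; the rule name and the exact predicate are my own, and you’d tune them to your team’s standard:

    // <Name>Instance fields should be prefixed with an underscore</Name>
    warnif count > 0
    from f in Application.Fields
    where !f.IsStatic
       && !f.IsLiteral              // skip constants
       && !f.Name.StartsWith("_")
    select f

Once a rule like this runs with every analysis, the underscore conversation simply stops happening in review.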

The advantages are many, but two quick ones bear mentioning.  First is the time savings that I’ve discussed, and second is the tightening of the feedback loop.  If a developer writes a line of code, forgetting that underscore, the code review may not happen for a week or more.  If a tool is in place creating warnings, preventing a commit, or failing the build, the feedback loop between undesirable code and undesirable outcome is much tighter.  This makes improvement more rapid, and it makes the source of the feedback an impartial machine instead of a (perceived) judgmental coworker.
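To give a rough idea of the “failed build” end of that spectrum: NDepend supports quality gates, CQLinq queries that fail the analysis when a threshold is crossed, and a failed gate can in turn fail the CI build.  The sketch below is illustrative only; the gate name and threshold are examples, and you’d filter on whatever severity scheme your own rules use:

    // <QualityGate Name="Blocker Issues" Unit="issues" />
    // When this gate fails, the analysis reports failure, which a CI
    // server can translate into a broken build.  The feedback loop
    // shrinks to minutes rather than the week a human review might take.
    failif count > 0 issues
    from i in Issues
    where i.Severity == Severity.Blocker
    select i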

Continue reading Improve Your Code Review Game with NDepend