
Improve Your Code Review Game with NDepend

April 21, 2016 · 6 minute read

Code review is a subject with which I’m quite familiar.  I’m familiar first as a participant, both reviewing and being reviewed, but it goes deeper than that.  As an IT management consultant, I’ve advised on instituting and refining such processes and I actually write for SmartBear, whose products include Collaborator, a code review tool.  In spite of this, however, I’ve never written much about the intersection between NDepend and code review.  But I’d like to do so today.

I suppose it’s the nature of my own work that has made this topic less than foremost on my mind.  Over the last couple of years, I’ve done a lot of lone wolf, consultative code assessments for clients.  In essence, I take a codebase and its version history and use NDepend and other tools to perform an extensive analysis.  I also quietly apply some of the same practices to the code I write for example purposes.  But neither of these is collaborative, because it’s been a while since I logged a lot of time in a collaborative delivery team environment.

But my situation being somewhat out of sync with industry norms does not, in any way, alter those norms.  And the norm is that software development is a highly collaborative affair, and that most code review happens in highly collaborative environments.  And NDepend is not just a way for lone wolves or pedants to do deep dives on code.  It really shines in the group setting.

NDepend Can Automate the Easy Stuff out of Code Review

When discussing code review, I’m often tempted to save “automate what you can” for the end, since it’s a powerful point.  On the other hand, it’s also perhaps the first thing you should do right out of the gate, so I’ll mention it here.  After all, automating the easily-automated frees humans up to focus on things that require human judgment.

It’s pretty likely that you already have some kind of automated enforcement of coding standards.  And, if you don’t, get some in place.  You should not be wasting time at code review with, “you didn’t put an underscore in front of that field.”  That’s the sort of thing that a machine can easily figure out, and that many, many plugins will figure out for you.

The advantages are many, but two quick ones bear mentioning here.  First is the time savings I’ve discussed, and second is the tightening of the feedback loop.  If a developer writes a line of code, forgetting that underscore, the code review may not happen for a week or more.  If there’s a tool in place creating warnings, preventing a commit, or generating a failed build, the feedback loop between undesirable code and undesirable outcome is much tighter.  This makes improvement more rapid, and it makes the source of the feedback an impartial machine instead of a (perceived) judgmental coworker.

With NDepend, you can take this beyond cosmetic considerations and coding standards.  Instead of missing underscores and Pascal instead of camel casing, you can automate tight feedback loops for methods with too much complexity, low-cohesion classes and modules, and any other fairly objective consideration.  Just as there’s no need to say, “you didn’t put an underscore in front of that field,” there’s also no need to say, “that method has more than 5 parameters.”  If you have rules like this, automate them, as in the sketch below, and focus on other stuff at code review time.
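To make that concrete, here is roughly what such a rule might look like as a CQLinq query (more on CQLinq shortly).  Treat it as a minimal sketch: the rule name and the threshold of 5 parameters are assumptions lifted from the example above, so adjust them to your team’s standards.

    // <Name>Methods with too many parameters</Name>
    warnif count > 0
    from m in JustMyCode.Methods
    where m.NbParameters > 5        // threshold borrowed from the example above
    orderby m.NbParameters descending
    select new { m, m.NbParameters }

Wire a rule like this into your build, and nobody ever has to bring up parameter counts in a review again.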

Make Your Code Review Visual

Once you’ve cleared the decks of easily automated considerations, it’s time to really extract some value from NDepend.  This can be accomplished by playing to the tool’s strengths, one of which is the great set of visual diagrams that NDepend gives you.

Perhaps you have a rule that bounces any egregious cyclomatic complexity for methods or classes (say, 100 or more), but that doesn’t mean that there aren’t still trouble spots with complexity.  Create “heat maps” of your application using NDepend’s metric view, which allows you to pick a measure (like cyclomatic complexity) and then define thresholds for different colors ranging from green to red.  The result is that you can see your application’s complexity hot spots (and hot spots for many other metrics besides) at a glance.

This is extremely valuable in a code review setting.  You can get a visual gestalt of your codebase, which can help drive discussions of who should touch which code and when.
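If you want the underlying list rather than the picture, the same hot spots can be pulled up with a query (using the query language I cover in the next section).  A minimal sketch, where the threshold of 15 is my own assumption rather than any official cutoff:

    from m in JustMyCode.Methods
    where m.CyclomaticComplexity >= 15   // assumed threshold; tune to your codebase
    orderby m.CyclomaticComplexity descending
    select new { m, m.CyclomaticComplexity, m.NbLinesOfCode }

Sorting the worst offenders to the top gives the review a natural agenda: start where the red is.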

Command of CQLinq

Another way that NDepend can really boost your code reviews is through the use of CQLinq.  I talked in the early days of this blog about regarding your code as data.  CQLinq is the driver for this, and it gives you the ability to write ad hoc queries about your code, like, “show me all the methods that start with Get and return void.”
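That particular query is short enough to show in full.  One caveat on this sketch: to my recollection, CQLinq represents a void return as a null ReturnType (which can also indicate an unresolved type), so verify that detail against the documentation.

    from m in Application.Methods
    where m.SimpleName.StartsWith("Get") &&
          m.ReturnType == null   // null ReturnType: void (or an unresolved type)
    select m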

Imagine how powerful this is when collaborating in a session about the nature of the codebase, à la code review.  There’s always the diff view and consideration of incremental changes, but with CQLinq you have the ability to satisfy your curiosity about affected or tangential code in ways you never could before.

Sure, there’s a bit of a learning curve to get everyone fluent with CQLinq, but it’s definitely worth it.


Trend Metrics in Code Review

The last feature that I’ll mention is the trend metrics/charts.  NDepend offers up a visual of how your codebase is doing over time when it comes to various metrics and considerations (this is highly customizable to your needs, by the way).  With any given commit, you can see, at a glance, the effect on some facet of the codebase.

This is powerful as a discussion aid and sanity check.  If the code being reviewed triggers a dip in code coverage or an unusual increase in the fan-in of a module, that might be worth a discussion during the review.  “Oh, the code coverage dipped because there’s a lot of file I/O code being checked in” or “hmm, let’s revisit the approach a little by using this other module” might result, bringing clarity and a better design, respectively.

The trends bring a whole added dimension to bear.  Now the review participants not only see the difference between two snapshots of code, but they also have access to general trends across all snapshots of the code for comparison.
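Better still, trend metrics are themselves defined as queries that return a single number, so your team can chart whatever it cares about.  Here’s a minimal sketch of a custom trend metric, with the caveat that the <TrendMetric> header format and the nullable handling of the complexity metric reflect my reading of NDepend’s documentation, so double-check both before copying it:

    // <TrendMetric Name="Average Method Complexity" Unit="CC" />
    JustMyCode.Methods
        .Where(m => m.CyclomaticComplexity != null)   // some methods have no computed complexity
        .Average(m => (double)m.CyclomaticComplexity.Value)

Chart a handful of numbers like this one, and every review starts with a shared, objective picture of which way the codebase is drifting.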

It’s About Perspective

If you think about the advice I’ve offered (automation aside), it’s really about bringing new perspectives into the process.  As developers, we spend an awful lot of time reading the text of methods and classes, and we conceptualize code in this way.  But code review is an opportunity to reason about code in a variety of ways.

The metric view’s heat map lets you contemplate your architecture visually.  Using CQLinq lets you ask questions of code as data.  The trends let you reason about your application over time.  All of these are a divergence from normal business and thus a source of fresh perspective and discussion.

And, really, that’s the beauty of code reviews.  Sure, they give you a chance to catch mistakes early and to spread knowledge, but if that’s all they are, there’s a missed opportunity.  Code reviews are about ideas, collaboration, and opportunities for improvement.  Use all the tools at your disposal to make the most of them.