NDepend

Improve your .NET code quality with NDepend


Mistakes Dev Managers Make

Managing a team of software developers is a tall order. This is doubly true when the line management includes both org chart duties (career development, HR administrivia, etc.) and responsibility for the team’s performance when it comes to shipping. In that case, you’re being asked to understand the team’s day-to-day work well enough to evaluate performance and drive improvement, in spite of the fact that what they do is utterly opaque to you. It’s like being asked to simultaneously coach a team and referee the game for a sport whose rules you don’t know. As I said, a tall order.

I’ll grant that, if you’re a dev manager, you may have been technical at some point, perhaps even recently. Or maybe you never were, but you’ve been around the work long enough to pick up a lot of concepts, at least in the abstract. In neither case, though, could you answer if someone asked you what, exactly, Alice coded up yesterday. Whether it’s due to a total lack of experience, being “rusty” from not having programmed in a number of years, or simply being unable to keep up with what 8 other people are doing, their work is opaque to you.

As with coaching/refereeing the game that you don’t understand, you can pick up on their body language and gestures. If all members of the team appear disgusted with one of their mates, that one probably did something bad. You’re not totally without context clues and levers to pull, but it wouldn’t be hard at all for them to put one over on you, if they were so inclined. You’re navigating a pretty tough obstacle course.

And so it becomes pretty easy to make mistakes. It’s also pretty understandable, given the lay of the land. I’ll take you through a few of the more common ones that I tend to see, and offer some thoughts on what you can do instead.

Let’s Build a Metric: Incorporating Results and Exploring CQLinq

It turns out I was wrong in the last post, at least if the early returns from the second experiment are to be believed.  Luckily, the scientific method allows for wrongness and is even so kind as to provide a means for correcting it.  I hypothesized that time to comprehend would vary at a higher order with cyclomatic complexity than with lines of code.  This appears not to be the case.  Hey, that’s why we are running the experiments, right?

By the way, as always, you can join the experiment if you want.  You don’t need to have participated from the beginning by any stretch, and you can opt in or out for any given experiment as suits your schedule.

Join the Experiment

Results of the First Experiment

Recall that the first experiment asked people to record time to comprehend for a series of methods that varied by number of lines of code.  To keep the signal-to-noise ratio as high as possible, the methods were simply sequential arithmetic operations, operating on an input and eventually returning a transformed output.  There were no local variables or class-level fields, no control flow statements, no method invocations, and no reaching into global state; a quick sketch of the style appears below.  After that is a graph of the results from doing this on 3 methods, with 1, 5, and 10 logical lines of code.
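This is purely illustrative, not one of the actual experimental methods, but a five-logical-line specimen in this style would look roughly like so:

    // Sequential arithmetic on the input: no local variables or fields,
    // no control flow, no method calls, no reaching into global state.
    public int Transform(int input)
    {
        input = input + 4;
        input = input * 2;
        input = input - 7;
        input = input / 3;
        return input;
    }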

[Figure: average comprehension time vs. logical lines of code for the three experimental methods]

So as not to overburden anyone with work, and because it’s still early, the experiment contained three methods, yielding three points.  Because this looked loosely quadratic, I used the three points to generate a quadratic formula, which turned out to be this.

[Figure: quadratic fit of comprehension time vs. logical lines of code]
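For what it’s worth, generating that formula just means solving three linear equations in the coefficients a, b, and c, one per data point.  Plugging in the measured averages for the 1- and 5-line methods (8.6 and 51 seconds, which come up again below) and writing t for the 10-line average, the system looks like this:

    a(1)²  + b(1)  + c = 8.6
    a(5)²  + b(5)  + c = 51
    a(10)² + b(10) + c = t

Solve that for a, b, and c, and you have the curve pictured above.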

It’s far from perfect, but this gives us our first crack at shaping time to comprehend as something experimental, rather than purely hypothetical.  Let’s take a look at how to do this using NDepend in Visual Studio.  Recall all the way back in the second post in this series that I defined a metric for time to comprehend.  It was essentially a placeholder for the concept, pending experimental results.
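In case you haven’t read that post, the placeholder amounted to something like this minimal CQLinq sketch; Methods and NbLinesOfCode are NDepend’s built-in domain and metric, and Seconds is just the unit we defined:

    // Placeholder "time to comprehend" metric: Seconds is simply the
    // method's logical line count.
    from m in Methods
    select new { m, Seconds = m.NbLinesOfCode }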

All we’re doing is setting the unit we’ve defined, “Seconds,” equal to the number of lines of code in a method.  But hey, now that we’ve got some actual data, let’s go with it!  The code for this metric now looks like this.
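A caveat: the fitted coefficients live in the curve above rather than in this text, so the sketch below uses 1.0 as a stand-in for each of them; substitute the values read off the graph.

    // Updated metric in the fluent LINQ syntax I tend to prefer.
    // The three 1.0 values are placeholders for the fitted quadratic
    // coefficients from the curve above.
    Methods.Select(m => new {
        m,
        Seconds = (1.0 * m.NbLinesOfCode * m.NbLinesOfCode)
                + (1.0 * m.NbLinesOfCode)
                + 1.0,
        m.NbLinesOfCode    // handy reference point
    })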

I’ve spread it across multiple lines for the sake of readability, and with a nod to the notion that it will grow as time goes by. Note also that I’ve included, for now, the number of logical lines of code as a handy reference point.

Exploring CQLinq Functionality

This is all fine, but it’s a little hard to read.  As long as we’re here, let’s do a brief foray into NDepend’s functionality.  I’m talking specifically about CQLinq syntax.  If you’re going to get as much mileage as humanly possible out of this tool, you need to become familiar with CQLinq.  It’s what will let you define your own custom ways of looking at and reasoning about your code.

I’ve made no secret that I prefer fluent/expression LINQ syntax over the query operator syntax, but there are times when the former isn’t your best bet.  This is one of those times, because I want to take advantage of the “let” keyword to define some things up front for readability.  Here’s the metric converted to the operator syntax.
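Roughly speaking, and with the same placeholder coefficients standing in for the fitted values:

    // Same metric, expressed in query (operator) syntax.
    from m in Methods
    select new {
        m,
        Seconds = (1.0 * m.NbLinesOfCode * m.NbLinesOfCode)
                + (1.0 * m.NbLinesOfCode)
                + 1.0,
        m.NbLinesOfCode
    }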

With that in place, let’s get rid of the cumbersome repetition of “m.NbLinesOfCode” by using the let keyword. And, while we’re at it, let’s give NbLinesOfCode a different name. Here’s what that looks like in CQLinq.
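Something along these lines; “lines” is my stand-in for whatever shorter alias you prefer, and the coefficients remain placeholders:

    // The let clause removes the repetition of m.NbLinesOfCode.
    from m in Methods
    let lines = m.NbLinesOfCode
    select new {
        m,
        Seconds = (1.0 * lines * lines) + (1.0 * lines) + 1.0,
        lines
    }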

That looks a lot more readable, huh? It’s now something at least resembling the equation pictured above. But there are a few more tweaks we can make here to really clean this thing up, and they just so happen to demonstrate slightly more advanced CQLinq functionality. We’ll use the let keyword to define a function instead of a simple assignment, and then we’ll expand the names out a bit to boot. Here’s the result.
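Here’s a sketch of that version, assuming CQLinq accepts a Func bound with let the way standard C# query syntax does; LengthFactor and LogicalLinesOfCode are the expanded names, and the coefficients are still placeholders for the fitted values:

    from m in Methods
    let LogicalLinesOfCode = (double)m.NbLinesOfCode
    // LengthFactor maps a line count to its contribution to comprehension time.
    let LengthFactor = new Func<double, double>(lloc =>
        (1.0 * lloc * lloc) + (1.0 * lloc) + 1.0)
    select new {
        m,
        Seconds = LengthFactor(LogicalLinesOfCode),
        LogicalLinesOfCode
    }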

Pretty darned readable, if I do say so myself! It’s particularly nice the way Seconds is now expressed as a function of our LengthFactor equation. As we incorporate more results, this approach will let the metric grow while staying readable, since you’ll be able to see how each consideration contributes to the seconds.

So, what does it look like? Check it out.

[Screenshot: the updated Seconds metric running in NDepend’s query window]

Now we can examine the code base and get a nice readout of our (extremely rudimentary) calculation of how long a given method will take to understand.  And you know what else is cool?  The readouts of 8.6 seconds for the 1 LLOC method and 51 seconds for the 5 LLOC method.  Those are cool because they match the experimental averages, and seeing them in the IDE means that I did the math right. 🙂

So we finally have some experimental progress and there’s some good learning about CQLinq here.  Stay tuned for next time!

Refactoring is a Development Technique, Not a Project

One of the more puzzling misconceptions that I hear pertains to the topic of refactoring. I consult on a lot of legacy rescue and refactoring efforts, and people in and around those efforts tend to think of “refactor” as “massive cleanup effort.”  I suspect this is one of those conflations that happens subconsciously.  If you actually asked some of these folks whether “refactor” and “massive cleanup effort” were synonyms, they would say no, but they never conceive of the terms in any other way during their day-to-day activities.

Let’s be clear.  Here is the actual definition of refactoring, per Wikipedia:

Code refactoring is the process of restructuring existing computer code – changing the factoring – without changing its external behavior.

Significantly, this definition mentions nothing about the scope of the effort.  Refactoring is changing the code without changing the application’s behavior.  This means the following would be examples of refactoring, provided they changed nothing about the way the system interacted with external forces:

  • Renaming variables in a single method.
  • Adding whitespace to a class for readability.
  • Eliminating dead code.
  • Deleting code that has been commented out.
  • Breaking a large method apart into a few smaller ones.

I deliberately picked the examples above because they should be semi-understandable, even by non-technical folks, and because they’re all scalable down to the tiny.  Some of these activities could be done by a developer in under a minute.  These are simple, low-effort refactorings.
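To make “scalable down to the tiny” concrete, here is a hypothetical before-and-after in C#; the class names, the method, and the 8% tax rate are invented purely for illustration. It combines two of the bullets above: renaming a variable and breaking a method apart. Neither change takes more than a minute, and neither alters what callers observe.

    class BeforeRefactoring
    {
        // One method, an unhelpfully named variable, and an inlined tax rule.
        public static decimal Total(decimal[] prices)
        {
            decimal t = 0;
            foreach (var p in prices)
                t += p;
            return t + (t * 0.08m);
        }
    }

    class AfterRefactoring
    {
        // Same external behavior: the variable is renamed and the tax rule
        // is extracted into its own small method.  That behavior preservation
        // is what makes these changes refactorings.
        public static decimal Total(decimal[] prices)
        {
            decimal subtotal = 0;
            foreach (var price in prices)
                subtotal += price;
            return ApplyTax(subtotal);
        }

        private static decimal ApplyTax(decimal subtotal)
        {
            return subtotal + (subtotal * 0.08m);
        }
    }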

Let’s now consider another definition of refactoring, this one from Martin Fowler’s website:

“Refactoring is a controlled technique for improving the design of an existing code base. Its essence is applying a series of small behavior-preserving transformations, each of which ‘too small to be worth doing’. However the cumulative effect of each of these transformations is quite significant.”

I took the Wikipedia definition and used it to suggest that refactorings could be small and low-effort.  Fowler takes it a step further and suggests that they should be small and low-effort.  In fact, he suggests that they should be “too small to be worth doing.”  That’s fascinating.

NDepend Case Study: Increasing Development Efficiency in the Medical Laboratory Sector

Developing applications for use in the health care industry is stressful because the margin of error is almost non-existent. Whether your tool is for treatment, research, or analysis, it needs to be dependable and accurate. The more complex the application is, the higher the chance for errors and delays in development. Dependable companies abide by rigorous methodologies to develop their code before deploying it to clients. In this NDepend case study, we learn why a company in this sector chose NDepend, and why it became an integral part of their development process.

Stago works in the medical lab industry, producing lab analysis tools that focus on haemostasis and coagulation. Having worked in the field for over 60 years and valuing long-term investments, they have made a name for themselves in the industry. A few years ago, they wanted to make their software development process more efficient. In addition, they wanted to easily enforce their own best practices and code quality standards across their teams. The goal was to catch issues earlier in the development cycle to cut costs and time spent on quality assurance post-development.

“We selected NDepend after reviewing all the other options on the market and it quickly became the backbone of our development effort.”
– Fabien Prestavoine, Software Architect at Stago

We are very grateful to Stago for sharing their success story with us. Stories like this one are among the main driving forces behind creating one of the most comprehensive and powerful .NET analysis tools on the market. Since implementing NDepend, Stago has:

  • Easily met all delivery deadlines
  • Cut both cost and time spent on quality assurance
  • Delivered a consistently dependable product
  • Improved communication between their developers and architects

To read more about how NDepend helped Stago streamline their development process, click here to download a PDF of the complete case study.


Let’s Build a Metric 7: Counting the Inputs

Over the last two installments of this series, I’ve talked about different ways to count lines of code and about ways to count different paths through your code. So far, I’ve offered up the hypotheses that more statements/lines in a method mean more time to comprehend, and that more paths through the code mean more time to comprehend. I’ll further offer the hypothesis that comprehension time varies more strongly with complexity than it does with lines of code.

I do have results in for the first hypothesis, but I’ll hold off for one more installment before posting those. Everyone on the mailing list will soon receive the second experiment, around complexity, so I’ll post those results in an installment or two, when I circle back to modifying the composite metric. If you haven’t yet signed up for the experiment, please do so here.

Join the Experiment

More Parameters Means Harder to Read?

In this post, I’d like to address another consideration that I hypothesize will directly correlate with time to comprehend a method: parameter count. Now, unlike the subjects of the last two posts, parameter count offers no hidden surprises. Unlike lines of code, I don’t know of several different ways one might approach counting method parameters, and unlike cyclomatic complexity, there’s no slick term for it that involves exponential growth. This is just a matter of tabulating the number of arguments to your methods.

Instead of offering some cool new fact for geek water-cooler trivia, I’ll offer a relatively strong opinion about method parameters: don’t have a lot of them. In fact, don’t have more than 3, and even 3 is pushing it. Do I have your attention? Good.
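If you want to see where your own code base stands against that cutoff, a CQLinq query along these lines will flag the offenders; this is a sketch built on NDepend’s NbParameters metric and the threshold of 3 argued for above:

    // Methods taking more parameters than the threshold discussed above.
    warnif count > 0
    from m in Application.Methods
    where m.NbParameters > 3
    orderby m.NbParameters descending
    select new { m, m.NbParameters }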