NDepend

Improve your .NET code quality with NDepend

Be Careful with Software Metaphors


Over the years, there have been any number of popular software metaphors that help people radically misunderstand the realities of software development.  Probably the most famous and persistent one is the idea that making software is similar to building a skyscraper (or to building construction in general).

This led us, as an industry, to approach software by starting with a knowledge worker “architect” who would draw grand schematics to plot every last detail of the software construction.  This was done so that the manual laborers (junior developers) tasked with actual construction could just do repetitive tasks by rote, deferring to a foreman (team lead) should the need for serious thinking arise.  It was important to lay a good foundation with database and framework selection, because once you started there could be no turning back.  Ever.  Should even minor plan changes arise during the course of the project, that would mean a change request, delaying delivery by months.

Software is just like construction, provided you’re terrible at building software.

This metaphor is so prevalent that it transcended conscious thought and crept its way into our subconscious, as evidenced by the “architect” title.  Given the prevalence of agile (or at least iterative) software development, I think you’d be hard pressed to find people who still think building construction is a great model for making software.  I don’t think you see a lot of thinly sliced buildings, starting with an operational kitchen only and building out from there.

But there are other, more subtle, parallels that pervade the industry and lead to misunderstandings between “the business,” managers, and software developers.

Component Assembly

One such misunderstanding that I see frequently is to equate software components with physical components.  Consider a small application that consists of a login screen, a profile screen, and a screen that allows users to browse and make purchases.  There’s a natural tendency for people not involved with the code — particularly non-technical people — to view these as three isolated components.

The mental model thus becomes one of assembly.  This small project is like setting up a bedroom with disassembled furniture.  There’s a bed, a dresser, and a night stand, so what do you do when you have a tight timeline and plenty of labor?  Naturally, you create a bed team, a dresser team, and a night stand team and task them with working in isolation.  Once the individual components are ready, they can be integrated by positioning them appropriately within the room.  Right?

It’s the perfect plan, but it seems like the assembly teams can’t figure it out.  They keep talking about things you don’t care about, like databases, session management, and something called “common,” whatever that means.  So you tell them to have more meetings and figure it out.  But then they come back and talk about how it isn’t a good idea for two different teams to implement “leg.”  You patiently explain that a bed leg is different than a night stand leg or a dresser leg, and tell them to each make their own, and that you don’t know or care what “DRY” means.

Building isolated, pluggable components is a good idea, it makes business sense, and it allows you to pipeline labor.  Good developers will figure out how to make that sound plan a reality.  Right?

Not so much.  As it turns out, software components and physical components have some key differences.  Replicating a physical component involves construction, fabrication, or 3-D printing, whereas replicating a piece of software involves flipping a bunch of bits on a disk.  Reusing a physical component means hacking it off of a nightstand and taping it to a bed.  Reusing a software component is not destructive in this way.  The differences go on, but the point is the same.  Asking a software team to operate as if it were building physical components is a recipe for friction between your mental model and theirs.

Financial Debt

This is liable to raise some eyebrows, because the concept of “technical debt” is, perhaps, one of the best tools when it comes to facilitating a conversation between developers and people making budget decisions.  Technical debt (more or less) refers to a situation in which developers take a shortcut to get something to market in a hurry, knowing that they’ll later have to undo their current work and “do it right.”  In other words, they’re paying a premium for short-term liquidity, the way someone who incurs financial debt does.  They’ll later have to “repay the interest” by spending more total time getting to the right solution.

Unlike the software metaphor of building construction, which I would argue has been largely damaging to the industry, financial debt as a metaphor has proved quite valuable.  But it has limits, and carrying the metaphor too far can lead to misaligned expectations.

When you take a loan from the bank, there are clear payoff terms.  Generally this means that you’re constantly paying a small percentage of what you owe as a premium for the outstanding balance of the loan, and that percentage either doesn’t change, or it changes predictably.  With software?  Not so much.

Even assuming that it were straightforward to quantify the “less effort now for more effort later” trade off, you wouldn’t get a constant rate or even a predictably changing rate.  Software is a lot more volatile than that, and the “return rate” will depend on a lot more than what you’ve “borrowed.”

To put it in a way that’s perhaps easier to sink your teeth into, consider the software construct of “global state.”  If you don’t know what this is, think of it as a super power that software developers use to rip holes in the space time continuum, at least as it pertains to the world of software.  Let’s say that your software is a city, and there’s a traffic jam preventing you from shipping.  “No problem,” your developer says, “I can take care of that if you don’t mind some technical debt.”  She then proceeds to rip a wormhole into the middle of a busy street, diverting all traffic into what looks like some kind of desert somewhere.  Traffic problem solved.  Ship it.

You’ve taken out one single loan in this universe to get traffic to a manageable level.  Granted, that particular road is ruined by the wormhole, so you’ll need to build another one at some point, and that’s the interest you’ve agreed to pay.  That’s what you’re planning on doing when you get around to it.  And that all seems fine until you start getting weird reports of camels blocking traffic miles away, and scorpions stinging people going to work on traffic lights.

It turns out that ripping holes in the very fabric of your application has weird, unpredictable consequences that require repayment of debts you never planned for (and perhaps don’t understand the source of).  The lesson is that a decision to let (or encourage) developers to take shortcuts and make hacks can put your code in a degenerative state that neither of you is really prepared for.  If you aren’t careful, pressure to get them to ship will be more like navigating a minefield than shopping around for a mortgage.
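To make the wormhole concrete, here is a minimal C# sketch (all names invented for illustration); a mutable static field is the classic form of global state:

    // Invented example: a global, mutable flag added as a shortcut to ship.
    public static class TrafficControl
    {
        // Global state: anything, anywhere, can read or change this at any time.
        public static bool DivertTraffic = false;
    }

    public class ShippingFix
    {
        public void ClearTheJam()
        {
            TrafficControl.DivertTraffic = true; // the wormhole: quick, effective, global
        }
    }

    public class StreetLightMaintenance
    {
        public bool IsItSafeToSendCrews()
        {
            // Far away in the code base, behavior quietly changes -- these are
            // the camels and scorpions showing up miles from the wormhole.
            return !TrafficControl.DivertTraffic;
        }
    }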

Be Careful with Software Metaphors

It’s legitimately hard to mentally model software, particularly if you’ve never been a developer.  We live in a very tactile world and we use vivid mental models as mnemonics to aid our understanding.  Software is very abstract, conceptual, and precise in nature, and this makes it inordinately difficult to model in the way to which we’re accustomed.  We’re bad at bridging these worlds, and it’s perfectly understandable that we’re bad at it.

Frankly, the only way to have a good mental model of software is through practice, and the realization that any analogies we use are transitory and incomplete at best.  Holding too close to any particular software metaphor is liable to trip you up in your decision making, so be very wary in conceiving of software development as being like anything other than… software development.

The Most Important Code Metrics You’ve Never Heard Of

Oh, how I hope you don’t measure developer productivity by lines of code. As Bill Gates once ably put it, “measuring software productivity by lines of code is like measuring progress on an airplane by how much it weighs.”  No doubt, you have other, better reasoned code metrics that you capture for visible progress and quality barometers.  Automated test coverage is popular (though be careful with that one).  Counts of, or trends in, defect reduction are another.  And of course, in our modern, agile world, sprint velocity is ubiquitous.

But today, I’d like to venture off the beaten path a bit and take you through some metrics that might be unfamiliar to you, particularly if you’re no longer technical (or weren’t ever).  But don’t leave if that describes you — I’ll help you understand the significance of these metrics, even if you won’t necessarily understand all of the nitty-gritty details.

Perhaps the most significant factor here is that the code metrics I’ll go through can be tied, relatively easily, to stakeholder value in projects.  In other words, I won’t just tell you the significance of the metrics in terms of what they say about the code.  I’ll also describe what they mean for people invested in the project’s outcome.

Type Rank

It’s possible that you’ve heard of the concept of Page Rank.  If you haven’t, Page Rank was, for a long time, the method by which Google determined which sites on the internet were most important.  This should make intuitive sense on some level.  Amazon has a high page rank — if it went down, millions of lives would be disrupted, stocks would plummet, and all sorts of chaos would ensue.  The blog you created that one time and totally meant to add to over the years has a low page rank — no one, yourself included, would notice if it stopped working.

It turns out that you can actually reason about pieces of code in a very similar way.  Some bits of code in the code base are extremely important to the system, with many inbound and outbound dependencies.  Others exist at the very periphery or are even completely useless (see the section below on dead code).  Not all code is created equal.  This scheme for ranking code by importance is called “Type Rank” (at least at the level of type granularity — methods can also be ranked).

You can use Type Rank to create a release riskiness score.  All you’d really need to do is have a build that tabulated which types had been modified and what their type rank was, and this would create a composite index of release riskiness.  Each time you were gearing up for deployment, you could look at the score.  If it were higher than normal, you’d want to budget extra time and money for additional testing efforts and issue remediation strategies.
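As a sketch of how that might look in CQLinq (assuming NDepend’s Rank() metric and a baseline configured so that CodeWasChanged() works), something like this surfaces the riskiest modified types:

    // Modified types, ordered by their rank in the dependency graph.
    // Summing the ranks would yield the composite riskiness index.
    from t in JustMyCode.Types
    where t.CodeWasChanged()
    orderby t.Rank() descending
    select new { t, TypeRank = t.Rank() }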

Cohesion

Cohesion of modules in a code base can loosely be described as “how well is the code base organized?”  To put it a bit more concretely, cohesion is the idea that things with common interest are grouped together while unrelated things are not.  A cohesive house would have specialized rooms for certain purposes: food preparation, food consumption, family time, sleeping, etc.  A non-cohesive house would have elements of all of those things strewn about all over the house, resulting in a scenario where a broken refrigerator fan might mean you couldn’t sleep or work at your desk due to noise.

Keeping track of the aggregate cohesiveness score of a codebase will give you insight into how likely your team is to look ridiculous in the face of an issue.  Code bases with low cohesion are ones in which unrelated functionality is bolted together inappropriately, and this sort of thing results in really, really odd looking bugs that can erode your credibility.

Imagine speaking on your team’s behalf and explaining a bug that resulted in a significant amount of client data being clobbered.  When pressed for the root cause, you had to look the person asking directly in the eye and say, “well, that happened because we changed the font of the labels on the login page.”

You would sound ridiculous.  You’d know it.  The person you were talking to would know it.  And you’d find your credibility quickly evaporating.  Keeping track of cohesion lets you keep track of the likelihood of something like that.
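NDepend ships a rule along these lines out of the box; here is a simplified sketch of it in CQLinq, using the LCOM (“lack of cohesion of methods”) metrics:

    // High LCOM plus lots of fields and methods suggests unrelated
    // functionality bolted together in one type.
    from t in JustMyCode.Types
    where (t.LCOM > 0.8 || t.LCOMHS > 0.95)
       && t.NbFields > 10
       && t.NbMethods > 10
    orderby t.LCOM descending
    select new { t, t.LCOM, t.LCOMHS, t.NbFields, t.NbMethods }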

Dependency Cycles

So far, I’ve talked about managing risk as it pertains to defects: the risk of encountering them on release, and the risk of encountering weird or embarrassing ones.  I’m going to switch gears, now, and talk about the risk of being caught flat-footed, unable to respond to a changing environment or a critical business need.

Dependency cycles in your code base represent a form of inappropriate coupling.  These are situations where two or more things are mutually dependent in an architectural world where it is far better for dependencies to flow one way.  As a silly but memorable example, consider the situation of charging your phone, where your phone depends on your house’s electrical system to be charged.  Would you hire an electrician to come in and create a situation where your house’s electricity depended on the presence of your charging phone?

All too often, we do this in code, and it creates situations as ludicrous as the phone-electrical example would.  When the business asks, “how hard would it be to use a different logging framework,” you don’t want the answer to be, “we’d basically have to rewrite everything from scratch.”  That makes as much sense as not being able to take your phone with you anywhere because your appliances would stop working.

So, keep an eye out for dependency cycles.  They’re the early warning lights indicating that you’re heading for something like this.
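Full cycle detection spans whole dependency chains, but even the simplest case (two types that use each other) is easy to flag.  A rough CQLinq sketch:

    // Mutually dependent types: the two-element version of a dependency cycle.
    from t in JustMyCode.Types
    let mutual = t.TypesUsed.Intersect(t.TypesUsingMe)
    where mutual.Any()
    select new { t, MutuallyDependentOn = mutual }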

Dead Code

One last thing to keep an eye out for is dead code.  Dead code is code that can never possibly be called during the running application’s lifecycle.  It just sits in your codebase taking up space to no good end.

That may sound benign, but every line of code in your code base carries a small, cognitive maintenance weight.  The more code there is, the more results come back in text searches of the code base, the more files there are to lose and confuse developers, and the more general friction is encountered when working with the system.  This has a very real cost in the labor required to maintain the code.
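NDepend has built-in rules for this as well; here is a stripped-down sketch of the idea in CQLinq (the real rule excludes still more cases, such as explicit interface implementations):

    // Potentially dead methods: no callers, and the usual excuses don't apply.
    from m in JustMyCode.Methods
    where !m.MethodsCallingMe.Any()
       && !m.IsPublic          // public methods might be called externally
       && !m.IsEntryPoint      // e.g., Main()
    select new { m }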

Use Code Metrics Wisely

These are metrics about which fewer people know, so the industry isn’t rife with stories about people gaming them, the way it is with something like unit test coverage.  But that doesn’t mean they can’t be gamed.  For instance, it’s possible to have a nightmarish code base without any actual dead code — perversely, dead code could be eliminated by finding everything useless in the code base and implementing calls to it.

The code metrics I’ve outlined today, if you make them big and visible to all, should serve as a conversation starter.  Why did we introduce a dependency cycle?  Should we be concerned about the lack of cohesion in modules?  Use them in this fashion, and your group can save real money and produce better output.  Use them in the wrong fashion, and they’ll be just another ineffective management bludgeon straight out of a Dilbert comic.

NDepend updated to Version 6.2

NDepend version 6.2 has just been released.  We have shipped over 20 bug fixes, including one for a blocker involving the Visual Studio 2015 Update 1 Git controls.

More specifically, the new Visual Studio 2015 Update 1 Git controls in the Visual Studio status bar were interacting with the NDepend Visual Studio extension’s status bar control, causing the VS UI to freeze.  Fortunately, the Visual Studio team had warned partners (VSIP) a few weeks earlier that they were adding controls to the status bar.  The issue stemmed from a synchronous usage of the WPF dispatcher to implement the NDepend progress & status circle.  Invoking the dispatcher asynchronously fixed the issue.
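In other words (an illustrative sketch, not NDepend’s actual source; UpdateProgressCircle is a stand-in name), the change amounted to going from a blocking dispatcher call to a non-blocking one:

    // Before: blocks until the UI thread processes the update, which can
    // freeze the VS UI when another status bar control is in the mix.
    dispatcher.Invoke(() => UpdateProgressCircle(status));

    // After: queues the update and returns immediately.
    dispatcher.BeginInvoke(new Action(() => UpdateProgressCircle(status)));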

[Screenshot: the new Git controls in the Visual Studio status bar]

We also stumbled on an unusual issue caused by an unfixed Windows bug.  When working with a DataGridView with many rows (1000+), we can face an unmanaged StackOverflowException that crashes the process.  The Windows bug is explained here: http://stackoverflow.com/a/14716720/27194.  As far as we know, it has not been fixed.  The problem occurs only when the Windows process TabTip.exe (“Touch Keyboard and Handwriting Panel Service”) is running, and the Stack Overflow answer explains that the only way to prevent it is to disable this touch keyboard service.  We’ve gone the hard way: when NDepend starts, it now tries to kill this process.  Most of the time this works, even if the Windows user is not an administrator.  If this rough fix causes you any inconvenience, please let us know.
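Sketched with assumed names (again, not our actual source), the workaround is just a best-effort process kill at startup:

    using System.Diagnostics;

    static class TabTipWorkaround
    {
        // Best-effort: kill the touch keyboard process ("TabTip") if it's running.
        public static void Apply()
        {
            foreach (var tabTip in Process.GetProcessesByName("TabTip"))
            {
                try { tabTip.Kill(); }
                catch { /* no rights, or already exited; not fatal, just move on */ }
            }
        }
    }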

Apart from these two fixes, many other bugs were fixed and some improvements were added (see the complete list here).  The fixed bugs also include some incorrect analysis results that arose because the way Roslyn emits IL has changed significantly in some situations, and NDepend relies heavily on IL code analysis.

Enjoy!


Let’s Build a Metric: Using CQLinq to Reason about Application State

I’ve been letting the experiments run for a bit before posting results so as to give all participants enough time to submit, if they so choose.  So, I’ll refresh everyone’s memory a bit here.  Last time, I published a study of how long it took, in seconds (self-reported), for readers to comprehend a series of methods that varied by lines of code.  (Gist here).  The result was that comprehension appears to vary roughly quadratically with the number of logical lines of code.  The results of the next study are now ready, and they’re interesting!

Off the cuff, I fully expected cyclomatic complexity to drive up comprehension time faster than the number of lines of code.  It turns out, however, that this isn’t the case.  Here is a graph of the results of people’s time to comprehend code that varied only by cyclomatic complexity.  (Gist here).

[Graph: time to comprehend (seconds) vs. cyclomatic complexity]

If you look at the shape of this graph, the increase is slightly more aggressive than linear, but not nearly as aggressive as the increase that comes with an increase in lines of code.  When you account for the fact that a control flow statement is also a line of code, it actually appears that conditionals are easier to comprehend than the mathematical statements from the first experiment.

Because of this finding, I’m going to ignore cyclomatic complexity for the time being in our rough cut time to comprehend metrics.  I’ll assume that control flow statements impact time to comprehend as lines of code more than as conditional branching scenarios.  Perhaps this makes sense, too, since understanding all of the branching of a method is probably an easier task than testing all paths through it.

As an aside, one of the things I love about NDepend is that it lets me be relatively scientific about the approach to code.  I constantly have questions about the character and makeup of code, and NDepend provides a great framework for getting answers quickly.  I’ve actually parlayed this into a nice component of my consulting work — doing professional assessments of code bases and looking for gaps that can be assessed.

Going back to our in-progress metric, it’s going to be important to start reasoning about other factors that pertain to methods.  Here are a couple of the original hypotheses from earlier in the series that we could explore next.

  • Understanding methods that refer to class fields takes longer than understanding purely functional methods.
  • Time to comprehend is dramatically increased by reference to global variables/state.

If I turn a critical eye to these predictions, there are two key components: scope and popularity.  By scope, I mean, “how closely to the method is this thing defined?”  Is it a local variable, defined right there in the method?  Is it a class field that I have to scroll up to find a definition of?  Is it defined in some other file somewhere (or even some other assembly)?  One would assume that having to pause reading the method, navigate to some other file, open it, and read to find the definition of a variable would mean a sharp spike in time to comprehend versus an integer declared on the first line of the method.

And, by popularity, I mean, how hard is it to reason about the state of the member in question?  If you have a class with a field and two methods that use it, it’s pretty easy to understand the relationship and what the field’s value is likely to be.  If we’re talking about a global variable, then it quickly becomes almost unknowable what the thing might be and when.  You have to suck the entirety of the application’s behavior into your head to understand all the things that might happen in your method.

I’m not going to boil that ocean here, but I am going to introduce a few lesser known bits of awesomeness that come along for the ride in CQLinq.  Take a look at the following CQLinq.
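Here’s a sketch of the kind of query I mean (assuming NDepend’s NbParameters, FieldsUsed, and FieldsAssigned properties), pulling each method’s parameter count alongside the actual fields it reads and assigns:

    // For every method: how many parameters it takes, which fields it uses,
    // and which of those fields it assigns.
    from m in JustMyCode.Methods
    where m.NbParameters > 0 || m.FieldsUsed.Any() || m.FieldsAssigned.Any()
    select new { m, m.NbParameters, m.FieldsUsed, m.FieldsAssigned }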

If your reaction is anything like mine the first time I encountered this, you’re probably thinking, “you can do THAT?!” Yep, you sure can. Here’s what it looks like against a specific method in my Chess TDD code base.

[Screenshot: query results listing each method’s parameters, fields used, and fields assigned]

The constructor highlighted above is shown here:

[Screenshot: the Board constructor from the Chess TDD code base]

As you can see, it has one parameter, uses two fields, and assigns both of those fields.

When you simply browse through the out-of-the-box metrics that come with NDepend, these are not the kind of things you notice immediately.  The things toward which most people gravitate are obvious metrics, like method size, cyclomatic complexity, and test coverage.  But, under the hood, in the world of CQLinq, there are so many more questions that you can answer about a code base.

Stay tuned for next time, as we start exploring them in more detail and looking at how we can validate potential hypotheses about impact on time to comprehend.

And if you want to take part in this ongoing experiment, click below to sign up.




Join the Experiment




Mistakes Dev Managers Make

Managing a team of software developers is a tall order. This is doubly true when the line management includes both org chart duties (career development, HR administrivia, etc) and responsibility for the team’s performance when it comes to shipping. In this case, you’re being asked to understand their day to day performance well enough to evaluate their performance and drive improvement, in spite of the fact that what they do is utterly opaque to you. It’s like being asked to simultaneously coach a team and referee the game for a sport whose rules you don’t know. As I said, a tall order.

I’ll grant that, if you’re a dev manager, you may have been technical at some point, perhaps even recently. Or maybe not, but you’ve been around it long enough to pick up a lot of concepts, at least in the abstract. But in neither case, if you were asked what, exactly, Alice coded up yesterday, would you be able to answer. Whether it’s due to total lack of experience, being “rusty” from not having programmed in a number of years, or simply being unable to keep up with what 8 other people are doing, their work is opaque to you.

As with coaching/refereeing the game that you don’t understand, you can pick up on their body language and gestures. If all members of the team appear disgusted with one of their mates, that one probably did something bad. You’re not totally without context clues and levers to pull, but it wouldn’t be hard at all for them to put one over on you, if they were so inclined. You’re navigating a pretty tough obstacle course.

And so it becomes pretty easy to make mistakes. It’s also pretty understandable, given the lay of the land. I’ll take you through a few of the more common ones that I tend to see, and offer some thoughts on what you can do instead.

Let’s Build a Metric: Incorporating Results and Exploring CQLinq

It turns out I was wrong in the last post, at least if the early returns from the second experiment are to be believed.  Luckily, the scientific method allows for wrongness and is even so kind as to provide a means for correcting it.  I hypothesized that time to comprehend would vary at a higher order with cyclomatic complexity than with lines of code.  This appears not to be the case.  Hey, that’s why we are running the experiments, right?

By the way, as always, you can join the experiment if you want.  You don’t need to have participated from the beginning by any stretch, and you can opt in or out of any given experiment as suits your schedule.

Join the Experiment


Results of the First Experiment

Recall that the first experiment asked people to record time to comprehend for a series of methods that varied by number of lines of code.  To keep the signal to noise ratio as high as possible, the methods were simply sequential arithmetic operations, operating on an input and eventually returning a transformed output.  There were no local variables or class level fields, no control flow statements, no method invocations, and no reaching into global state.  Here is a graph of the results from doing this on 3 methods, with 1, 5, and 10 logical lines of code.

[Graph: comprehension time vs. logical lines of code]

So as not to overburden anyone with work, and because it’s still early, the experiment contained three methods, yielding three points.  Because this looked loosely quadratic, I used the three points to generate a quadratic formula, which turned out to be this.

[Graph: the quadratic fit of comprehension time vs. logical lines of code]
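For the curious, the mechanics are simple: three points determine a unique quadratic.  Fitting t(x) = ax² + bx + c to the three measured averages means solving three equations in three unknowns (8.6 and 51 were the averages for the 1 and 5 LLOC methods; call the third average t₁₀):

    a·1²  + b·1  + c = 8.6
    a·5²  + b·5  + c = 51
    a·10² + b·10 + c = t₁₀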

It’s far from perfect, but this gives us our first crack at shaping time to comprehend as something experimental, rather than purely hypothetical.  Let’s take a look at how to do this using NDepend in Visual Studio.  Recall all the way back in the second post in this series that I defined a metric for time to comprehend.  It was essentially a placeholder for the concept, pending experimental results.

All we’re doing is setting the unit we’ve defined, “Seconds,” equal to the number of lines of code in a method.  But hey, now that we’ve got some actual data, let’s go with it!  The code for this metric now looks like this.
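As a sketch, in the fluent syntax (the coefficients below are placeholders standing in for the fitted values, which you can read off the graph above):

    // 1.4, 2.5, and 4.7 are illustrative placeholders, not the real
    // fitted coefficients.
    JustMyCode.Methods.Select(m => new {
        m,
        Seconds = 1.4 * m.NbLinesOfCode * m.NbLinesOfCode
                + 2.5 * m.NbLinesOfCode
                + 4.7,
        m.NbLinesOfCode  // handy reference point
    })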

I’ve spread it across multiple lines for the sake of readability and with a nod to the notion that this will grow as time goes by. Also of note is that I’ve included, for now, the number of logical lines of code as a handy reference point.

Exploring CQLinq Functionality

This is all fine, but it’s a little hard to read.  As long as we’re here, let’s do a brief foray into NDepend’s functionality.  I’m talking specifically about CQLinq syntax.  If you’re going to get as much mileage as humanly possible out of this tool, you need to become familiar with CQLinq.  It’s what will let you define your own custom ways of looking at and reasoning about your code.

I’ve made no secret that I prefer fluent/expression Linq syntax over the operator syntax, but there are times when the former isn’t your best bet.  This is one of those times, because I want to take advantage of the “let” keyword to define some things up front for readability.  Here’s the metric converted to the operator syntax.
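Continuing the sketch with the same placeholder coefficients:

    from m in JustMyCode.Methods
    select new {
        m,
        Seconds = 1.4 * m.NbLinesOfCode * m.NbLinesOfCode
                + 2.5 * m.NbLinesOfCode
                + 4.7,
        m.NbLinesOfCode
    }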

With that in place, let’s get rid of the cumbersome repetition of “m.NbLinesOfCode” by using the let keyword. And, while we’re at it, let’s give NbLinesOfCode a different name. Here’s what that looks like in CQLinq.
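Again, sketched with placeholder coefficients:

    from m in JustMyCode.Methods
    let LogicalLinesOfCode = m.NbLinesOfCode
    select new {
        m,
        Seconds = 1.4 * LogicalLinesOfCode * LogicalLinesOfCode
                + 2.5 * LogicalLinesOfCode
                + 4.7,
        LogicalLinesOfCode
    }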

That looks a lot more readable, huh? It’s now something at least resembling the equation pictured above. But there are a few more tweaks we can make here to really clean this thing up, and they just so happen to demonstrate slightly more advanced CQLinq functionality. We’ll use the let keyword to define a function instead of a simple assignment, and then we’ll expand the names out a bit to boot. Here’s the result.
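(As before, a sketch; the coefficients are placeholders.)

    from m in JustMyCode.Methods
    let LogicalLinesOfCode = (double)m.NbLinesOfCode
    // Defining LengthFactor as a function lets each future factor read as
    // its own named term.
    let LengthFactor = new Func<double, double>(lloc =>
        1.4 * lloc * lloc + 2.5 * lloc + 4.7)  // placeholder coefficients
    select new {
        m,
        Seconds = LengthFactor(LogicalLinesOfCode),
        LogicalLinesOfCode
    }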

Pretty darned readable, if I do say so myself! It’s particularly nice the way seconds is now expressed — as a function of our LengthFactor equation. As we incorporate more results, this approach will allow this thing to scale better with readability, as you’ll be able to see how each consideration contributes to the seconds.

So, what does it look like? Check it out.

[Screenshot: the updated Seconds metric results, as computed by CQLinq]

Now we can examine the code base and get a nice readout of our (extremely rudimentary) calculation of how long a given method will take to understand.  And you know what else is cool?  The data points of 8.6 seconds for the 1 LLOC method and 51 for the 5 LLOC method.  Those are cool because those were the experimental averages, and seeing them in the IDE means that I did the math right. 🙂

So we finally have some experimental progress and there’s some good learning about CQLinq here.  Stay tuned for next time!


Refactoring is a Development Technique, Not a Project

One of the more puzzling misconceptions that I hear pertains to the topic of refactoring. I consult on a lot of legacy rescue and refactoring efforts, and people in and around those efforts tend to think of “refactor” as “massive cleanup effort.”  I suspect this is one of those conflations that happens subconsciously.  If you actually asked some of these folks whether “refactor” and “massive cleanup effort” were synonyms, they would say no, but they never conceive of the terms in any other way during their day to day activities.

Let’s be clear.  Here is the actual definition of refactoring, per Wikipedia.


Code refactoring is the process of restructuring existing computer code – changing the factoring – without changing its external behavior.


Significantly, this definition mentions nothing about the scope of the effort.  Refactoring is changing the code without changing the application’s behavior.  This means the following would be examples of refactoring, provided they changed nothing about the way the system interacted with external forces.

  • Renaming variables in a single method.
  • Adding whitespace to a class for readability.
  • Eliminating dead code.
  • Deleting code that has been commented out.
  • Breaking a large method apart into a few smaller ones.

I deliberately picked the examples above because they should be semi-understandable, even by non-technical folks, and because they’re all scalable down to the tiny.  Some of these activities could be done by a developer in under a minute.  These are simple, low-effort refactorings.
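To make “scalable down to the tiny” concrete, here is an invented before-and-after for the first item on that list; only the names change, and external behavior is identical:

    // Before: legal, working, and opaque.
    public decimal Calc(decimal a, decimal b)
    {
        var x = a * (1 + b);
        return x;
    }

    // After: the same method, a minute later.
    public decimal Calc(decimal principal, decimal rate)
    {
        var adjustedBalance = principal * (1 + rate);
        return adjustedBalance;
    }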

Let’s now consider another definition of refactoring that can be found at Martin Fowler’s website.


“Refactoring is a controlled technique for improving the design of an existing code base. Its essence is applying a series of small behavior-preserving transformations, each of which ‘too small to be worth doing’. However the cumulative effect of each of these transformations is quite significant.”


I took the Wikipedia definition and used it to suggest that refactorings could be small and low-effort.  Fowler takes it a step further and suggests that they should be small and low-effort.  In fact, he suggests that they should be “too small to be worth doing.”  That’s fascinating.

NDepend Case Study: Increasing Development Efficiency in the Medical Laboratory Sector

Developing applications for use in the health care industry is stressful because the margin of error is almost non-existent. Whether your tool is for treatment, research, or analysis, it needs to be dependable and accurate. The more complex the application is, the higher the chance for errors and delays in development. Dependable companies abide by rigorous methodologies to develop their code before deploying it to clients. In this NDepend case study, we learn why a company in this sector chose NDepend, and why it became an integral part of their development process.

Stago works in the medical lab industry, producing lab analysis tools that focus on haemostasis and coagulation. Working hard for over 60 years and valuing long term investments, they have created a name for themselves in the industry. A few years ago, they wanted to make their software development process more efficient. In addition, they wanted to easily enforce their own best practices and code quality standards across their teams. The goal was to be able to catch issues earlier in the development cycle to cut costs and time spent on quality assurance post-development.

“We selected NDepend after reviewing all the other options on the market and it quickly became the backbone of our development effort.”
– Fabien Prestavoine, Software Architect at Stago

We are very grateful to Stago for sharing their success story with us.  Stories like this one are among the main driving forces behind creating one of the most comprehensive and powerful .NET analysis tools on the market.  Since implementing NDepend, Stago has:

  • Easily met all delivery deadlines
  • Cut both cost and time spent on quality assurance
  • Delivered a consistently dependable product
  • Improved communication between their developers and architects

To read more about how NDepend helped Stago streamline their development process, click here to download a PDF of the complete case study.


Let’s Build a Metric 7: Counting the Inputs

Over the last two installments of this series, I’ve talked about different ways to count lines of code and about ways to count different paths through your code. So far, I’ve offered up the hypotheses that more statements/lines in a method mean more time to comprehend, and that more paths through the code mean more time to comprehend. I’ll further offer the hypothesis that comprehension time varies more strongly with complexity than it does with lines of code.

I do have results in for the first hypothesis, but will hold off for one more installment before posting those. Everyone on the mailing list will soon receive the second experiment, around complexity, so I’ll post the results there in an installment or two, when I circle back to modifying the composite metric. If you haven’t yet signed up for the experiment, please do so here.

Join the Experiment

More Parameters Means Harder to Read?

In this post, I’d like to address another consideration that I hypothesize will directly correlate with time to comprehend a method: parameter count. Now, unlike these last two posts, parameter count offers no hidden surprises. Unlike lines of code, I don’t know of several different ways that one might approach counting method parameters, and unlike cyclomatic complexity, there’s no slick term for this that involves exponential growth vectors. This is just a matter of tabulating the number of arguments to your methods.

Instead of offering some cool new fact for geek water-cooler trivia, I’ll offer a relatively strong opinion about method parameters. Don’t have a lot of them. In fact, don’t have more than 3, and even 3 is pushing it. Do I have your attention? Good.
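In CQLinq terms, keeping an eye on this is about as simple as a query gets; a sketch:

    // Methods that push past the suggested limit of 3 parameters.
    from m in JustMyCode.Methods
    where m.NbParameters > 3
    orderby m.NbParameters descending
    select new { m, m.NbParameters }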

Software Rewrite: The Chase

Last week, a post I wrote, “The Myth of the Software Rewrite,” became pretty popular.  This generated a lot of comments and discussion, so I decided to write a follow-up post to address the discussion, as opposed to typing a blog post’s worth of thoughts distributed over 20 or 30 comments.  This is that post.

No Misconceptions

First of all, I want to be clear about what I’m talking about.  I’m specifically talking about a situation where the prime, determining factor in whether or not to rewrite the software is that the development group has made a mess and is clamoring to rewrite it.  In essence, they’re declaring bankruptcy — “we’re in over our heads and need outside assistance to wipe the slate clean so we can have a fresh start.”  They’re telling the business and their stakeholders that the only path to joy is letting them start over.

Here are some situations that the article was not meant to address:

  • The business decides it wants a rewrite (which makes me skeptical, but I’m not addressing business decisions).
  • Piecemeal rewrite, a chunk at a time (because this is, in fact, what I would advocate).
  • A rewrite because the original made design assumptions that have become completely obsolete (e.g. designed around disk space being extremely expensive).
  • Rewriting the software to significantly expand or alter the offering (e.g. “we need to move from web to mobile devices and offer some new features, so let’s start fresh.”)

A Lesson From Joseph Heller

Joseph Heller is the author of one of my all time favorite works of fiction, Catch 22.  Even if you’ve never read this book, you’re probably familiar with the term from conversational reference.  A catch 22 is a paradoxical, no-win situation.  Consider an example from the book.

John Yossarian, the ‘protagonist,’ is an anti-heroic bombardier in World War II.  Among other character foibles, one is an intense desire not to participate in the war by flying missions.  He’d prefer to stay on the ground, where it’s safe.  To advance this interest, he attempts to convince an army doctor that he’s insane and thus not able to fly missions.  The army doctor responds with the eponymous catch 22:  “anyone who wants to get out of combat duty isn’t really crazy.”

If you take this to its logical conclusion, the only way that Yossarian could be too crazy to fly missions is if he actually wanted to fly missions.  And if he wanted to fly them, he wouldn’t be noteworthy and he wouldn’t be trying to get out of flying them in the first place.

I mention this vis a vis software rewrites for a simple reason.  The only team I would trust with a rewrite is a team that didn’t view rewriting the software as necessary or even preferable.

It’s the people who know how to manufacture small wins and who can inch back incrementally from the brink that I trust to start a codebase clean and keep it clean. People who view a periodic bankruptcy as “just the way it goes” are the people who are going to lead you to bankruptcy.