NDepend

Improve your .NET code quality with NDepend

Should You Aim for 100 Percent Test Coverage?

Test coverage serves as one of the great lightning rods in the world of software development.  First, people ask whether it makes for a good metric at all.  Then they ask, if you want to use it as a metric, should you go for 100 percent coverage?  If not, what percentage should you go for? Maybe 42 percent, since that’s the meaning of life?

I don’t mean to trivialize an important discussion.  But sometimes it strikes me that this one could use some trivializing.  People dig in and draw battle lines over it, and counterproductive arguments often ensue.  It’s strange how fixated people get on this.

I’ll provide my take on the matter here, in due course.  But first, I’d like to offer a somewhat more philosophical look at the issue (hopefully without delving into overly abstract navel-gazing along the lines of “What even is a test, anyway, in the greater scheme of life?”).

What Does “Test Coverage” Measure?

First of all, let’s be very clear about what this metric measures.  Many in the debate — particularly those on the “less is more” side of it — quickly point out that test coverage does not measure the quality of the tests.  “You can have 100 percent coverage with completely worthless tests,” they’ll point out.  And they’ll be completely right.

To someone casually consuming this metric, the percentage can easily mislead.  After all, 100 percent coverage sounds an awful lot like 100 percent certainty.  If you hired me to do some work on your car and I told you that I’d done my work “with 100 percent coverage,” what would you assume?  I’m guessing you’d assume that I was 100 percent certain nothing would go wrong and that I invited you to be equally certain.  Critics of the total coverage school of thought point to this misunderstanding as a reason not to pursue that level of test coverage.  But personally, I just think it’s a reason to clarify definitions.
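
To see why the critics are right about worthless tests, consider a minimal sketch (the class, method, and test names are inventions of my own, and I’m assuming NUnit, though any test framework makes the same point):

    using NUnit.Framework;

    public static class PriceCalculator
    {
        public static decimal ApplyDiscount(decimal price, decimal percent)
        {
            return price - (price * percent / 100m);
        }
    }

    [TestFixture]
    public class PriceCalculatorTests
    {
        [Test]
        public void ApplyDiscount_Runs()
        {
            // Every line of ApplyDiscount executes, so a coverage tool
            // happily reports 100 percent for it.  But with no assertion,
            // this test would pass even if the method returned nonsense.
            PriceCalculator.ApplyDiscount(100m, 10m);
        }
    }

The coverage number tells you the code ran under test; it says nothing about whether anyone checked the result.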

Continue reading Should You Aim for 100 Percent Test Coverage?

How to Use NDepend’s Trend Charts

Imagine a scene for a moment.  A year earlier, a corporate VP spun up a major software project for his organization.  He brought a slew of his organization’s software developers into the project.  But he also needed to add more staff in the form of contractors.

This strained the budget, so he cut a few corners in terms of team member experience.  The VP reasoned that he could make up for this with strategic use of experienced architects up front.  Those architects would prototype good patterns and make it so the less seasoned contractors could just kind of paint by numbers.  The architects spent a few months doing just that and handed the work off to the contractors.

Fast forward to the present.  Now a consultant sits in a nice office, explaining to a beleaguered VP how they got so far behind schedule.  I can picture this scene quite easily because organizations hire me to be this consultant.  I live this scene over and over again.

Continue reading How to Use NDepend’s Trend Charts

Static Analysis Issue Management Gets a Boost

Years ago, I led a team of software developers.  We owned an eclectic portfolio of software real estate.  It included some Winforms, Webforms, MVC, and even a bit of WPF sprinkled into the mix.  And, as with any eclectic neighborhood, the properties came in a variety of ages and states of repair.

Some of this code depended on a SQL Server database that had a, let’s just say, casual relationship with normalization.  Predictably, this caused maintenance struggles.  But, beyond that, it caused a credibility gap when we spoke to non-technical stakeholders.  “What do you mean you can’t give a definitive answer to how many sales we made last year?”  “Well,” I’d try to explain, “I can’t say for sure because the database doesn’t explicitly define the concept of a sale.”

Flummoxed by the mutual frustration, I tried something a bit different.  Since I couldn’t easily explain the casual, implied relationships in the database, I decided to do a show and tell.  First, I went out and found a static analyzer for database schema.  Then, I brought in some representative stakeholders and said, “watch this.”  With a flourish (okay, not really), I turned the analyzer loose on the schema.

While they didn’t grok my analogies, the tens of thousands of warnings and errors made an impression.  In fact, it sort of terrified them.  But this did bridge the credibility gap and show them that we all had some work to do.  Mission accomplished.

Static Analyzer Issues

I engaged in something of a relationship hack with my little ploy.  You see, I knew how this static analyzer would behave because I know how all of them tend to behave.  They earn their keep by carpet-bombing your codebase with violations and warnings.  Out of the box, they overwhelm, and then they leave it to you to dial it back.  Truly, you can take this behavior to the bank.

So I knew that this creaky database would trigger thousands upon thousands of violations.  And then I just sat back waiting for the “magic” to happen.

I mention all of this to paint a picture of how static analyzers typically regard the concept of “issue.”  All categories of severity and priority generally roll up into this catch-all term, and it then refers to the itemized list of everything.  Your codebase has issues and it has lots of them.  This is how the tool earns its mindshare and keep — by proving how much it can surface, and then doing so.

Thus you might define the concept simply as “all that stuff the static analyzer finds.”
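
To make that concrete, here’s a sketch of the kind of CQLinq rule that generates those line items; the threshold of 15 is an arbitrary number I picked for illustration, not a recommendation:

    // <Name>Methods too complex (illustrative)</Name>
    warnif count > 0
    from m in JustMyCode.Methods
    where m.CyclomaticComplexity > 15
    orderby m.CyclomaticComplexity descending
    select new { m, m.CyclomaticComplexity }

Every method this query matches becomes one more entry in the itemized list, and a fresh codebase run against dozens of such rules produces issues by the thousands.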

Continue reading Static Analysis Issue Management Gets a Boost


Exploring the Technical Debt In Your Codebase

Recently, I posted about how the new version of NDepend lets you compute tech debt.  In that post, I learned that I had earned a “B” out of the box.  With 40 minutes of time investment, I could make that an “A.”  Not too shabby!

In that same post, I also talked about the various settings in and around “debt settings.”  With debt settings, you can change units of debt (time, money), thresholds, and assumptions of working capacity.  For folks at the intersection of tech and business, this provides an invaluable way to communicate with the business.

But I really just scratched the surface with that mention.  You’re probably wondering what this looks like in more detail.  How does this interact with the NDepend features you already know and love?  

Well, today, I’d like to take a look at just that.

To start, let’s look at the queries and rules explorer in some detail.

Introducing Quality Gates

Take a look at this screenshot, and you’ll notice some renamed entries, some new entries, and some familiar ones.

In the past, “Code Smells” and “Code Regressions” had the names “Code Quality” and “Code Quality Regression,” respectively.  With that renaming noted, the true newcomers sit on top: Quality Gates and Hot Spots.  Let’s talk about quality gates.
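
As a preview of their shape, a quality gate is itself expressed as a CQLinq query, one that uses failif rather than warnif.  The sketch below paraphrases a coverage gate, with thresholds that are placeholders of my own rather than NDepend’s defaults:

    // <QualityGate Name="Percentage Code Coverage" Unit="%" />
    failif value < 70%
    warnif value < 80%
    codeBase.PercentageCoverage

Where a rule merely complains, a failing quality gate is meant to stop a build in its tracks.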

Continue reading Exploring the Technical Debt In Your Codebase


The Relationship Between Team Size and Code Quality

Over the last few years, I’ve had the occasion to observe lots of software teams.  These teams come in all shapes and sizes, as the saying goes.  And, not surprisingly, they produce output that covers the entire spectrum of software quality.

It would hardly make headline news to cite team members’ collective skill level and training as a prominent factor in determining quality level.  But what else affects it?  Does team size?  Recently, I found myself pondering this during a bit of downtime ahead of a meeting.

Continue reading The Relationship Between Team Size and Code Quality

Adding Static Analysis to Your Team’s DNA

Stop me if this sounds familiar.  (Well, not literally.  I realize that asynchronous publication makes it hard for you to actually stop me as I type.  Indulge me the figure of speech.)  You work on a codebase for a long time, all the while having the foreboding sense of growing messiness.  One day, perhaps when you have a bit of extra time, you download a static analyzer to tell you “how bad.”

Then you have an experience like a holiday-time binge eater getting on a scale on January 1st.  As the tool crunches its results, you wince in anticipation.  Next, you get the results, get depressed, and then get busy correcting them.  Unlike shedding those holiday pounds, you can often fix the most egregious errors in your codebase in a matter of days.  So you make those fixes, pat yourself on the back, and forget all about the static analyzer, perhaps letting your trial expire or leaving it to sit on the shelf.

If you’re wondering how I got in your head, consider that I see this pattern in client shops frequently.  They regard static analysis as a one-time cleanup effort, to be implemented as a small project every now and then.  Then, they resolve to carry the learning forward to avoid making similar mistakes.  But, in a vacuum, they rarely do.
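
The antidote is to make the analysis a standing part of the build rather than an occasional event.  As a sketch, assuming NDepend’s console runner and file paths that stand in for your own, a CI build step might look like this:

    REM CI build step: analyze the codebase on every build so that
    REM results stay current instead of becoming a one-time event.
    "C:\Tools\NDepend\NDepend.Console.exe" "C:\Build\MyProject.ndproj"
    REM Fail the build if the analysis itself reports failure.
    IF ERRORLEVEL 1 EXIT /B 1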

Continue reading Adding Static Analysis to Your Team’s DNA

New Year’s Resolutions for Code Quality

Perhaps more than any other holiday I can think of, New Year’s Day has specific traditions.  Traditions for other holidays range all over the map.  While Christmas has trees, presents, rotund old men, and songs, New Year’s concerns itself primarily with fresh starts.

If you doubt this, look around during the first week of the year.  Armed with fresh resolutions, people swear off cigarettes and booze, flock to gyms, and find ways to spend less.  Since you don’t come to the NDepend blog for self-help, I’ll forgo talking about that.  Instead, I’ll speak to some resolutions you should consider when it comes to code quality.  As you come to the office next week, fresh off of singing “Auld Lang Syne” and having champagne at midnight, think of changing your ways with regard to your code base.

Before we get into specifics though, let’s consider the context in which I talk about code quality.  Because I don’t drink from mason jars and have a two-foot beard, I won’t counsel you to chase quality purely for the love of the craft.  That can easily result in diminishing returns on effort.  Instead, I refer to code quality in the business sense.  High-quality code incurs a relatively low cost of change and generates few or no unexpected runtime behaviors.

So the question becomes, “what should I do in the new year to efficiently write predictable, maintainable code?”  Let’s take a look.

Continue reading New Year’s Resolutions for Code Quality

Detecting Performance Bottlenecks with NDepend

In the past, I’ve talked about the nature of static code analysis.  Specifically, static analysis involves analyzing programs’ source code without actually executing them.  Contrast this with runtime analysis, which offers observations of runtime behavior, via introspection or other means.  This creates an interesting dynamic regarding the idea of detecting performance bottlenecks with static analysis, because performance is inherently a runtime concern.  Static analysis tends to do its best, most direct work with source code considerations.  It requires a more indirect route to predict runtime issues.

For example, consider something simple.
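
Here’s a representative sketch of the sort of method I mean; aside from theService, the names are stand-ins of my own:

    public class CustomerGreeter
    {
        private ICustomerService theService;

        public void GreetNewCustomer(Customer customer)
        {
            // theService is dereferenced here with no null check.
            theService.Greet(customer);
        }
    }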

With a static analyzer, we can easily look at this method and say, “you’re dereferencing ‘theService’ without a null check.”  However, it gets a lot harder to talk definitively about runtime behavior.  Will this method ever generate an exception?  We can’t know that with only the information present.  Maybe the only call to this in the entire codebase happens right after instantiating a service.  Maybe no one ever calls it.

Today, I’d like to talk about using NDepend to sniff out possible performance issues.  But my use of “possible” carries significant weight, because “definitive” gets difficult.  You can use NDepend to inform reasoning about your code’s performance, but you should do so with an eye to probabilities.

That said, how can you use NDepend to identify possible performance woes in your code?  Let’s take a look at some ideas.
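
As one such idea, sketched here with CQLinq: flag the methods whose size and nesting make them the likeliest suspects.  NbLinesOfCode and ILNestingDepth are NDepend metrics, but the thresholds below are arbitrary numbers of my own choosing:

    warnif count > 0
    from m in JustMyCode.Methods
    where m.NbLinesOfCode > 50 && m.ILNestingDepth > 4
    orderby m.NbLinesOfCode descending
    select new { m, m.NbLinesOfCode, m.ILNestingDepth }

Nothing a query like this matches is proven slow, but it tells you where to point a profiler first.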

Continue reading Detecting Performance Bottlenecks with NDepend


Keep Your Codebase Fit with Trend Metrics

A while back, I wrote a post about the importance of trends when discussing code metrics.  Metrics have an impact when teams are first exposed to them, but that tends to fade with time.  Context and trend monitoring create and sustain a sense of urgency.

To understand what I mean, imagine a person aware that he has put on some weight over the years.  One day, he steps on a scale and realizes that he’s much heavier than previously thought.  That induces a moment of shock and, no doubt, grand plans for gyms, diets, and lifestyle adjustments.  But, as time passes, his attitude may shift to one in which the new, heavier weight defines his self-conception.  The weight metric loses its impact.

To avoid this, he needs to continue measuring himself.  He may see himself gaining further weight, poking a hole in the illusion that he has evened out.  Or, conversely, he may see that small adjustments have helped him lose weight, and be encouraged to continue with those adjustments.  In either case, his ongoing conception of progress, more than the actual weight metric, drives and motivates behaviors.

The same holds true with codebases and keeping them clean.  All too often, I see organizations run some sort of static analysis or linting tool on their codebase and conclude, “it’s bad.”  They resolve only to do a better job in a year or two, when the rewrite starts.  However good or bad any given figure might be, the trend line, and not the figure itself, holds the most significance.

Continue reading Keep Your Codebase Fit with Trend Metrics

The Biggest Mistake Static Analysis Could Have Prevented

As I’ve probably mentioned before, many of my clients pay me to come do assessments of their codebases, application portfolios, and software practice.  And, as you can no doubt imagine, some of my sturdiest, trustiest tools in the tool chest for this work are various forms of static analysis.

Sometimes I go to client sites by plane, train or automobile (okay, never by train).  Sometimes I just remote in.  Sometimes I do fancy write-ups.  Sometimes, I present my findings with spiffy slide decks.  And sometimes, I simply deliver a verbal report without fanfare.  The particulars vary, but what never varies is why I’m there.

Here’s a hint: I’m never there because the client wants to pay my rate to brag about how everything is great with their software.

Where Does It All Go Wrong?

Given what I’m describing here, one might conclude that I’m some sort of code snob and that I am, at the very least, heavily judging everyone’s code.  And, while I’ll admit that every now and then I think, “The Daily WTF would love this,” mostly I’m not judging at all – just cataloging.  After all, I wasn’t sitting with you during the pre-release death march, nor was I the one thinking, “someone is literally screaming at me, so global variable it is.”

I earnestly tell developers at client sites that I don’t know that I’d have done a lot better walking a mile in their shoes.  What I do know is that I’d have, in my head, a clearer map from “global variable today” to “massive pain tomorrow” and be better able to articulate it to management.  But, on the whole, I’m like a home inspector checking out a home that was rented and subsequently trashed by a rock band; I’m writing up an assessment of the damage and not judging their lifestyle.

But for my clients, I’m asked to do more than inspect and catalog – I also have to do root cause analysis and offer suggestions.  So, “maybe pass a house rule limiting renters to a single bottle of whiskey per night,” to return to the house inspector metaphor.  And cataloging all of these has led me to be a veritable human encyclopedia of preventable software development mistakes.

I was contemplating some of these mistakes recently and asking myself, “which was the biggest one” and “which would have been the most preventable with even simple analysis in place?”  It was interesting to realize, after a while, that the clear answer was not at all what you’d expect.

Continue reading The Biggest Mistake Static Analysis Could Have Prevented