NDepend

Improve your .NET code quality with NDepend

Code Coverage Should Not Be a Management Concern

You could easily find yourself mired in programmer debates over code coverage.  Here’s one, for instance.  It raged on for hundreds of votes, dozens of comments, many answers, and eight years before someone put it out to pasture as “opinion-based” and closed it.

Discussion participants debated a “reasonable” percentage of code coverage.  I imagine you can arrive at the right answer by taking the mean of all of the responses. (I’m just kidding.)

Some of the folks in that thread, however, declined to give a simple percentage.  Instead, they gave the obligatory consultant’s response of “it depends” or, in the case of the accepted answer, a parable.  I found it refreshing that not everybody picked some number between 50 and 100 and blurted it out.

But I found it interesting that certain questions went unasked.  What is the goal of this code coverage?  What problem does it solve?  And as for the reasonability of the number, reasonable to whom?

What is Code Coverage? (Briefly)

Before going any further, I’ll quickly explain the concept of code coverage for those not familiar, without belaboring the point.  Code coverage measures the percentage of code that your automated unit test suite executes when it runs.

So let’s say that you had a tiny codebase consisting of two methods with one line of code each.  If your unit test suite executed one method but not the other, you would have 50% code coverage.  In the real world, this becomes significantly more complicated.  In order to achieve total coverage, for instance, you must traverse all paths through control flow statements, such as if conditions, switch statements, and loops.
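
To make that concrete, here’s a minimal sketch in C#.  The class and the xUnit test are invented for illustration, not drawn from any particular codebase.

    using Xunit;

    public class Arithmetic
    {
        public int Add(int x, int y) => x + y;        // executed by the test below
        public int Subtract(int x, int y) => x - y;   // never executed by any test
    }

    public class ArithmeticTests
    {
        [Fact]
        public void Add_Returns_Sum()
        {
            Assert.Equal(4, new Arithmetic().Add(2, 2));
        }
    }

Run a coverage tool against this suite and it reports 50%: the test executes Add, but nothing ever calls Subtract.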

When people debate how much code coverage to have, they aren’t debating how “correct” the code should be.  Nor do they debate anything about the tests themselves, such as quantity or quality.  They’re simply asking how many lines of code per one hundred your test suite should cause to execute.

You can get tools that calculate this for you.  And you can also get tools that factor it into an overall code quality dashboard.

Continue reading Code Coverage Should Not Be a Management Concern

In Defense of Using Your Users as Software Testers

In most shops of any size, you’ll find a person who’s just a little too cynical.  I’m a little cynical myself, and we programmers tend to skew that way.  But this guy takes it one step further, often disparaging the company in ways that you think must be career-limiting.  And they probably are, but that’s his problem.

Think hard, and some man or woman you’ve worked with will come to mind.  Picture the person.  Let’s call him Cynical Chad. Now, imagine Chad saying, “Testing? That’s what our users are for!”  You’ve definitely heard someone say this at least once in your career.

This is an oh-so-clever way to imply that the company serially skimps on quality.  Maybe they’re always running behind a too-ambitious schedule.  Or perhaps they don’t like to spend the money on testing.  I’m sure Chad would be happy to regale you with tales of project manager and QA incompetence.  He’ll probably tell you about your own incompetence too, if you get a couple of beers in him.

But behind Chad’s casual maligning of your company lies a real phenomenon.  With their backs against the wall, companies will toss things into production, hope for the best, and rely on users to find defects.  If this didn’t happen with some regularity in the industry, it wouldn’t be fodder for Chad’s predictable jokes and complaints.

The Height of Unprofessionalism

Let’s now forget Chad.  He’s probably off somewhere telling everyone how clueless the VPs are, anyway.

Most of the groups that you’ll work with as a software pro would recoil in horror at a deliberate strategy of using your users as testers.  They work for months or years implementing the initial release and then subsequent features.  The company spends millions on their salaries and on the software.  So to toss it to the users and say “you find our mistakes” marks the height of unprofessionalism.  It’s sloppy.

Your pride and your organization’s professional reputation call for something else.  You build the software carefully, testing as you go.  You put it through its paces, not just with unit and acceptance tests, but with a whole suite of smoke tests, load tests, stress tests, and endurance tests.  QA does exploratory testing.  And then, with all of that complete, you test it all again.

Only after all of this do you release it to the wild, hoping that defects will be rare.  The users receive a polished product of which you can be proud — not a rough draft to help you sort through.

Users as Testers Reconsidered

But before we simply accept that as the right answer and move on, let’s revisit the nature of these groups.  As I mentioned, the company spends millions of dollars building this software.  This involves hiring a team of experienced and proud professionals, among other things.  Significant time, money, and company stake go into this effort.

If you earn a living as a salaried software developer, your career will involve moving from one group like this to another.   In each of these situations, anything short of shipping a polished product smacks of failure.  And in each of these situations, you’ll encounter a Chad, accusing the company of just such a failure.

But what about other situations?  Should enlisting users as testers always mean a failure of due diligence?  Well, no, I would argue.  Sometimes it’s a perfectly sound business or life decision.

Continue reading In Defense of Using Your Users as Software Testers

How to Use NDepend’s Trend Charts

Imagine a scene for a moment.  A year earlier, a corporate VP spun up a major software project for his organization.  He brought a slew of his organization’s software developers into the project.  But he also needed to add more staff in the form of contractors.

This strained the budget, so he cut a few corners in terms of team member experience.  The VP reasoned that he could make up for this with strategic use of experienced architects up front.  Those architects would prototype good patterns and make it so the less seasoned contractors could just kind of paint by numbers.  The architects spent a few months doing just that and handed the work off to the contractors.

Fast forward to the present.  Now a consultant sits in a nice office, explaining to a beleaguered VP how they got so far behind schedule.  I can picture this scene quite easily because organizations hire me to be this consultant.  I live this scene over and over again.

Continue reading How to Use NDepend’s Trend Charts

How to Evaluate Your Static Analysis Process

I often get inquiries from clients and prospects about setting up and operationalizing static analysis.  This makes sense.  After all, we live in a world short on time and with software developers in great demand.  These clients always seem to have more to do than bandwidth allows.  And static analysis effectively automates subtle but important considerations in software development.

Specifically, it automates peer review to a certain extent.  The static analyzer acts as a non-judging, mute reviewer of sorts.  It also stands in for a tiny bit of QA’s job, calling attention to possible issues before they leave the team’s environment.  And, finally, it helps you out by acting as architect.  Team members can learn from the tool’s guidance.
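
For concreteness, here’s the sort of rule such a tool might run.  This sketch uses the shape of NDepend’s CQLinq queries; the threshold of 20 is an arbitrary number chosen for illustration, not a recommendation.

    // Flag methods whose cyclomatic complexity suggests they need
    // a reviewer's (or an architect's) attention.
    warnif count > 0
    from m in JustMyCode.Methods
    where m.CyclomaticComplexity > 20
    select new { m, m.CyclomaticComplexity }

A query like this runs on every analysis and quietly flags offenders, which is exactly the “non-judging, mute reviewer” at work.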

So, as I’ve said, receiving setup inquiries doesn’t surprise me.  And I applaud these clients for pursuing this path of improvement.

What does surprise me, however, is how few organizations seem to ask another, related question.  They rarely ask for feedback about the efficacy of their currently implemented process.  Many organizations seem to consider static analysis implementation a checkbox kind of activity.  Have you done it?  Check.  Good.

So today, I’ll talk about checking in on an existing static analysis implementation.  How should you evaluate your static analysis process?

Continue reading How to Evaluate Your Static Analysis Process

Pulling Your Team Through a Project Crunch

Society dictates, for the most part, that childhood serves as a dress rehearsal for adulthood.  Sure, we go to school and learn to read, write, and ‘rithmetic, but we also learn life lessons.  And these lessons come during a time when we can learn mostly consequence-free.

During these formative years, pretty much all of us learn about procrastination.  More specifically, we learn that procrastination feels great.  But then, perhaps a week later, we learn that procrastination actually feels awful. Our young brains learn a lesson about trade-offs.  Despair.com captures this with a delightfully cynical aphorism: “hard work often pays off after time, but laziness always pays off now.”

Continue reading Pulling Your Team Through a Project Crunch

What DevOps Means for Static Analysis

For most of my career, software development has, in a very specific way, resembled mailing a letter.  You write the thing, and then you go through the standard mail piece rigmarole.  This involves putting it into an envelope, addressing the envelope, putting a stamp on it, and then walking it over to the mailbox.  From there, you stuff it in.

At this point, you might as well have dropped the thing into some kind of rip in space-time for all you understand what comes next.  Off it goes into the ether, and you hope that it arrives at its destination through some kind of logistical magic.  So it has generally gone with software.

Continue reading What DevOps Means for Static Analysis

Computing Technical Debt with NDepend

For years, I have struggled to articulate technical debt to non-technical stakeholders.  This struggle says something, given that technical debt makes an excellent metaphor in and of itself.

The concept explains that you incur a price for taking quality shortcuts in the code to get done quickly.  But you don’t just pay for those shortcuts with more work later — you accrue interest.  Save yourself an hour today with some copy pasta, and you’ll eventually pay for that decision with many hours down the road.

So I say to interested, non-technical parties, “think of these shortcuts today as decisions upon which you pay interest down the line.”  They typically squint at me a little and say, “yeah, I get it.”  But I generally don’t think they get it.  At least, not fully.

Lack of Concreteness

I think the reason for this tends to come from a lack of actual units.  As a counterexample, think of explaining an auto loan to someone.  “I’m going to loan you $30,000 to buy a car.  With sales tax and interest factored in, you’ll pay me back over a 5-year period, and you’ll pay me about $36,000 in total.”  Explained this way, consumers get it.  “Oh, I see.  It’ll cost me about $6,000 if I want you to come up with that much cash on my behalf.”  They can make an informed value decision.

But that falls flat for a project manager in a codebase.  “Oh man, you don’t want us to squeeze this in by Friday.  We’ll have to do terrible, unspeakable things in the code!  We’ll create so much tech debt.”

“Uh, okay.  That sounds ominous.  What’s the cost?”

“What do you mean?  There’s tech debt!  It’ll be worse later when we fix it than if we do it correctly the first time.”

“Right, but how much worse?  How much more time?”

“Well, you can’t exactly put a number to it, but much worse!”

And so on and so forth.  I imagine that anyone reading can recall similar conversations from one end or the other (or maybe even both).  Technical debt provides a phenomenal metaphor in the abstract.  But when it comes to specifics, it tends to fizzle a bit.
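
If you want a feel for what such specifics could look like, here’s a toy calculation with invented numbers.  It is not NDepend’s actual debt formula; it just restates the copy-paste example from earlier in the same units as the auto loan.

    // Toy model: the principal is the hour saved today by copy-pasting;
    // interest accrues each time someone touches the duplicated code.
    double principalHours = 1.0;           // time saved today
    double interestPerChangeHours = 0.5;   // extra cost per later change
    int futureChanges = 10;                // times the code gets revisited

    double totalCostHours =
        principalHours + (interestPerChangeHours * futureChanges);

    // Prints "Saved 1 hour now; will pay roughly 6 hours later."
    System.Console.WriteLine(
        $"Saved {principalHours} hour now; will pay roughly {totalCostHours} hours later.");

Put in those terms, a project manager can weigh the Friday deadline the same way a consumer weighs the $6,000 cost of the loan.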

Continue reading Computing Technical Debt with NDepend

What Metrics Should the CIO See?

I’ve worked in the programming industry long enough to remember a less refined time.  During this time, the CIO (or CFO, since IT used to report to the CFO in many orgs) may have counted lines of code to measure the productivity of the development team.  Even then, they probably understood the folly of such an approach.  But, if they lacked better measures, they might use that one.

Today, you rarely, if ever, see that happen.  But don’t take that to mean reductionist measures have stopped.  Rather, they have just evolved.

Most commonly today, I see this crop up in the form of automated unit test coverage.  A CIO or high-level manager becomes aware of general quality and cadence problems with the software.  She may consult with someone or read a study and conclude that a robust, automated test suite will cure what ails her.  She then announces the initiative and rolls it out.  Then, she does the logical thing and instruments her team’s process so that she can track progress and improvement with the testing initiative.

The problem with this arises from what, specifically, the group measures and improves.  She wants to improve quality and predictability, so she implements a proxy solution.  She then measures people against that proxy.  And, often, they improve… against that proxy.
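
To see why, consider the laziest possible way to move the coverage needle.  The names below are hypothetical, but the pattern is real: a test with no assertions “covers” every line it touches while verifying nothing.

    using Xunit;

    public class Order { }

    public class OrderProcessor
    {
        public void ProcessOrder(Order order)
        {
            // ...domain logic here...
        }
    }

    public class OrderProcessorTests
    {
        // Executes ProcessOrder, so every line it runs counts as covered.
        // But with no assertion, this test can never fail, so it proves
        // nothing about correctness.
        [Fact]
        public void ProcessOrder_Runs()
        {
            new OrderProcessor().ProcessOrder(new Order());
        }
    }

The coverage number climbs, the dashboard goes green, and quality stays exactly where it was.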

Continue reading What Metrics Should the CIO See?

Recovering from a Mission Critical Whiff

A career in software produces a handful of truly iconic moments.  First, you beam with pride the first time something you wrote works in production.  Then, you recoil in horror the first time you bring your team’s project to a screeching halt with a broken build or some sort of obliteration of the day’s source history.  And so it goes at the individual level.

But so it also goes at the team or department level, with diluted individual responsibility and higher stakes.  Everyone enjoys that first major launch party.  And, on the flip side, everyone shudders to recall their first death march.  But perhaps no moment produces as many hangdog looks and feelings as the collective, mission critical whiff.

I bet you can picture it.  Your group starts charging at an aggressive deadline, convinced you’ll get there.  The program or company has its skeptics, and you fall behind schedule, but you resolve to prove them wrong.  External stakes run high, but somehow your collective pride trumps even that.  At various points during the project, stakeholders offer a reprieve in the form of extensions, but you assure them you’ll get there.

It requires a lot of nights and weekends, and even some all-nighters in the run-up to launch.  But somehow, you get there.  You ship your project with an exhausted feeling of pride.

And then all hell breaks loose.

Major bugs stream in.  The technical debt you knew you’d piled up comes due.  Customers get irate and laugh sardonically at the new shipment.  And, up and down the organizational ladder, people fume.  Uh oh.

How do you handle this?  What can you learn?

Continue reading Recovering from a Mission Critical Whiff

The Relationship Between Team Size and Code Quality

Over the last few years, I’ve had the occasion to observe lots of software teams.  These teams come in all shapes and sizes, as the saying goes.  And, not surprisingly, they produce output that covers the entire spectrum of software quality.

It would hardly make headline news to cite team members’ collective skill level and training as a prominent factor in determining quality level.  But what else affects it?  Does team size?  Recently, I found myself pondering this during a bit of downtime ahead of a meeting.

Continue reading The Relationship Between Team Size and Code Quality