NDepend

Improve your .NET code quality with NDepend

What DevOps Means for Static Analysis

For most of my career, software development has, in a very specific way, resembled mailing a letter.  You write the thing, and then you go through the standard mail piece rigmarole.  This involves putting it into an envelope, addressing the envelope, putting a stamp on it, and then walking it over to the mailbox and stuffing it in.

At this point, you might as well have dropped the thing into some kind of rip in space-time for all you understand what comes next.  Off it goes into the ether, and you hope that it arrives at its destination through some kind of logistical magic.  So it has generally gone with software.
Continue reading What DevOps Means for Static Analysis

Why Expert Developers Still Make Mistakes

When pressed, I bet you can think of an interesting dichotomy in the software world.  On the one hand, we programmers seem an extraordinarily helpful bunch.  You can generally picture us going to user groups, conferences, and hackathons to help one another.  We blog, record videos, and help people out on Twitter.

But then, we also seem to tear each other apart.  Have you ever hesitated before posting something on Stack Overflow?  Have you worried that you’ll miss some arcane piece of protocol, or else that you’ve asked a stupid question?  Or, spreading our field of vision a little wider, have you ever seen nasty comment sections and ferocious arguments?

We programmers love to help each other… and we also like to rip each other to shreds.  What gives?

Reconciling the Paradoxical

Of course, I need to start by pointing out that “the programming world” consists of many, many human beings.  These people have personalities and motivations as diverse as humanity in general.  So naturally, contradictory behavioral tendencies in the population group can exist.

But let’s set that aside for a moment.  Instead, let’s try to squish the programming community into a single (if way over-generalized) human being.  How can this person be so helpful, but also so… rude?

The answer lies in understanding the protocol of helping.  The person presenting the help is an expert.  Experts enjoy explaining, teaching, offering opinions, and generally helping.  But you’d also better listen up the first time, pay attention to the rules, and not waste their time.  Or they’ll let you hear about it.

In the programming community, we gravitate toward conceptual, meritocratic ladder ranking.  “Expert” thus becomes a hard-won, carefully guarded status in the community.  Show any sign of weakness, and you might worry that you’ll fall down a few rungs on the ladder.

But We Still Make Mistakes

And yet, however expert, we still make mistakes.  Of course, nobody would deny that.  Go up to literally anyone in the field, ask, “do you ever make mistakes,” and you’ll hear “of course” or at least a tepid, “every now and then.”  But a difference exists between making mistakes in the hypothetical and making a specific mistake in the moment.

As a result, many in the field try to exude an air of infallibility.  Most commonly, this manifests in the form of that guy that never, ever says “I don’t know.”  More generally, you can recognize it in the form of constant weighing in and holding forth on all sorts of topics.  In this field, we do tend to build up an impressive beachhead of knowledge — algorithm runtimes, design patterns, API calls, tips and tricks, etc.  Armed with that, we can take up residence in the expert’s chair.

But no matter how we try to disguise it, we inevitably make mistakes.  Perhaps we do something as simple as introducing a bug.  Or maybe we make a fundamentally bad call about some piece of architecture, costing lots of time and effort.  Big or small, though, it happens.  The interesting question is why.  If we log Malcolm Gladwell’s famous 10,000 hours of practice and have heavy incentives to show no weakness, why do we still make mistakes?

Lapses in Concentration

Perhaps most simply and obviously, lapses in concentration will lead to mistakes.  This applies no matter who you are, how much you practice, or what you know.  This can happen in immediately obvious ways.  For instance, your mind might wander while doing relatively repetitive programming tasks, like updating giant reams of XML configuration or something.  Generally speaking, monotonous work creates a breeding ground for mistakes (which speaks to why doing such work is a smell for programmers).

But it goes beyond the most obvious as well.  Feeling tired or distracted can lead to concentration lapse mistakes.  Interruptions and attempts to multi-task do the same.  I don’t care how much of a programming black belt you may be — trying to write code while half paying attention on a status call will lead to mistakes.

Imperfect or “Noisy” Information

Moving beyond simple mistakes, let’s look at a category that tends to lead to deeper errors.  I’m talking here about mistakes arising from flawed information.  To understand, consider an example near and dear to any programmer’s heart: bad or incomplete requirements.  If you take action based on erroneous information, mistakes happen.  Now you might argue, “that isn’t my mistake,” but I consider that hair splitting.  Other factors may contribute, but you still own that implementation if you created it.

But look beyond just bad information.  “Noisy” information creates problems as well.  If your business partners bury requirements or requests in the middle of lots of irrelevancies, this can distract as well.  For all of their best intentions, I see a lot of this happening in expansive requirements documents that try to cover every imaginable behavior of a not-yet-written system right up front.  You become lost in a sea of noise and you make mistakes.

These mistakes may come in simple forms, like missing buttons or incorrect behaviors.  But they can also prove fundamental.  If you learn at a later date that the system will actually only ever need one data store, you may have built a completely unnecessary data access layer abstraction.
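As a hypothetical C# sketch of that last scenario (all type names here are invented for illustration), the unnecessary data access abstraction often looks like an interface and factory wrapped around the only data store the system will ever have:

```csharp
// Hypothetical sketch: a data access abstraction built for a
// "swap the data store someday" requirement that never materializes.
public interface IDataStore
{
    void Save(Customer customer);
    Customer FindById(int id);
}

// The only implementation the system will ever have.
public class SqlDataStore : IDataStore
{
    public void Save(Customer customer)
    {
        // SQL-specific persistence would live here.
    }

    public Customer FindById(int id)
    {
        // SQL-specific lookup would live here.
        return new Customer { Id = id };
    }
}

// Every call site routes through the interface and this factory,
// paying an indirection tax for a swap that never happens.
public static class DataStoreFactory
{
    public static IDataStore Create() => new SqlDataStore();
}

public class Customer
{
    public int Id { get; set; }
}
```

None of this code is wrong, exactly.  It simply answers a question the requirements never actually asked, which is what makes the mistake so easy to miss at the time.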

Overconfidence or Not Enlisting Help

We’ve examined some causes that fall under “probably not your fault.”  Now let’s look at one that falls under, “probably your fault.”  I’m talking about unwarranted faith in your own decision-making.

As I mentioned earlier, in the giant ladder ranking of programmer meritocracy, “I don’t know” can knock you down a few rungs.  (I’ll leave it to the reader to evaluate whether this happens in actuality or only in our minds.)  This leads to a behavior wherein we may try to “wing it,” even in unfamiliar territory.

When we do this, we have no one but ourselves to blame for the inevitable mistakes.  On my own blog, DaedTech, I once gave a label to those who frequently posture and fail this way: expert beginners.  Of course, that label talks about someone of marginal competence, but even a bona fide expert can fall victim to too much self-assurance.  The field of programming presents such immense and complex surface area that you will always have blind spots.  Pretending you don’t leads to mistakes.

Inevitability

Let’s get a little more philosophical here.  I just mentioned that programming has too much ground for any one person to cover.  This forces a choice between admitting you need outside expertise and making mistakes.  But let’s expand on that even a little more.

Programming is knowledge work.  This means that we, as programmers, solve problems rather than perform any sort of repetitive labor.  Sure, you might write a handful of custom web apps that prove similar in nature.  But this is a far cry from the cookie-cutter nature of, say, assembly line work.  Even writing somewhat similar apps, all of our work involves solving problems that no one has yet solved.

And when you’re blazing a new trail, you will inevitably make mistakes.  It simply comes with the territory.

In Fact, You Should Make Mistakes

I’ll conclude by going even further than inevitability.  You should make mistakes.  In the first place, I think that a culture wherein we consider mistakes signs of weakness is counter-productive and asinine.  Having prominent experts say, “gosh, I really don’t know” encourages us all to do the same and it generally promotes a more productive culture.

But the benefit runs deeper.  I’ve heard it said that, if you’re not making mistakes, you’re probably not doing anything interesting.  And I firmly believe in that.  Pushing the envelope means making mistakes.  But, even beyond that, whether we make mistakes or not is less important than developing robust recovery mechanisms.  We should design software and systems not with an eye toward perfection, but with an eye toward minimizing the impact of mistakes.  After all, software is so fluid that today’s correctly functioning system becomes tomorrow’s ‘mistake’ when the requirements change.  So you might as well get good at recovering.

So, why do experts make mistakes?  Because we all do, and because our mistakes drive us forward when we learn from them.

Exploring the Technical Debt In Your Codebase

Recently, I posted about how the new version of NDepend lets you compute tech debt.  In that post, I learned that I had earned a “B” out of the box.  With 40 minutes of time investment, I could make that an “A.”  Not too shabby!

In that same post, I also talked about the various settings in and around “debt settings.”  With debt settings, you can change units of debt (time, money), thresholds, and assumptions of working capacity.  For folks at the intersection of tech and business, this provides an invaluable way to communicate with the business.

But I really just scratched the surface with that mention.  You’re probably wondering what this looks like in more detail.  How does this interact with the NDepend features you already know and love?  

Well, today, I’d like to take a look at just that.

To start, let’s look at the queries and rules explorer in some detail.

Introducing Quality Gates

Take a look at this screenshot, and you’ll notice some renamed entries, some new entries, and some familiar ones.

In the past, “Code Smells” and “Code Regressions” had the names “Code Quality” and “Code Quality Regression,” respectively.  With those renames covered, the true newcomers sit on top: Quality Gates and Hot Spots.  Let’s talk about quality gates.

Continue reading Exploring the Technical Debt In Your Codebase

The One Thing Every Company Can Do to Reduce Technical Debt

The idea of technical debt has become ubiquitous in our industry.  It started as a metaphor to help business stakeholders understand the compounding cost of shortcuts in the code.  Then, from there, it grew to define perhaps the foundation of trade-offs in the tech world.

You’d find yourself hard pressed, these days, to find a software shop that has never heard of tech debt.  It seems that just about everyone can talk in the abstract about dragons looming in their code, portending an eventual reckoning.  “We need to do something about our tech debt,” has become the rallying cry for “we’re running before we walk.”

As with its fiscal counterpart, all other factors being equal, having less tech debt is better than having more.  Technical debt creates drag on the pace of new feature delivery until someone ‘repays’ it.  And so shops constantly grapple with the question, “how can we reduce our tech debt?”

I could easily write a post where I listed the 3 or 5 or 13 or whatever ways to reduce tech debt.  First, I’d tell you to reduce problematic coupling.  Then, I’d tell you to stop it with the global variables.  You get the idea.

But today, I want to do something a bit different.  I want to talk about the one thing that every company can do to reduce tech debt.  I consider it to be sort of a step zero.

Continue reading The One Thing Every Company Can Do to Reduce Technical Debt

Computing Technical Debt with NDepend

For years, I have struggled to articulate technical debt to non-technical stakeholders.  This struggle says something, given that technical debt makes an excellent metaphor in and of itself.

The concept explains that you incur a price for taking quality shortcuts in the code to get done quickly.  But you don’t just pay for those shortcuts with more work later — you accrue interest.  Save yourself an hour today with some copy pasta, and you’ll eventually pay for that decision with many hours down the road.

So I say to interested, non-technical parties, “think of these shortcuts today as decisions upon which you pay interest down the line.”  They typically squint at me a little and say, “yeah, I get it.”  But I generally don’t think they get it.  At least, not fully.

Lack of Concreteness

I think the reason for this tends to come from a lack of actual units.  As a counterexample, think of explaining an auto loan to someone.  “I’m going to loan you $30,000 to buy a car.  With sales tax and interest factored in, you’ll pay me back over a 5 year period, and you’ll pay me about $36,000 in total.”  Explained this way to a consumer, they get it.  “Oh, I see.  It’ll cost me about $6,000 if I want you to come up with that much cash on my behalf.”  They can make an informed value decision.

But that falls flat for a project manager in a codebase.  “Oh man, you don’t want us to squeeze this in by Friday.  We’ll have to do terrible, unspeakable things in the code!  We’ll create so much tech debt.”

“Uh, okay.  That sounds ominous.  What’s the cost?”

“What do you mean?  There’s tech debt!  It’ll be worse later when we fix it than if we do it correctly the first time.”

“Right, but how much worse?  How much more time?”

“Well, you can’t exactly put a number to it, but much worse!”

And so on and so forth.  I imagine that anyone reading can recall similar conversations from one end or the other (or maybe even both).  Technical debt provides a phenomenal metaphor in the abstract.  But when it comes to specifics, it tends to fizzle a bit.

Continue reading Computing Technical Debt with NDepend

Learning Programming with Hands on Projects

If you want a surefire way to make money, look for enormous disparity between demand and supply.  As software developers, we understand this implicitly.  When we open our inboxes in the morning, we see vacuous missives from recruiters.  “Hey, dudebro, we need a JavaScript ninja-rockstar like you!”

You don’t tend to see vaguely patronizing, unflinchingly desperate requests like that unless you sit on some kind of goldmine.  They approach us the way one might approach a mischievous toddler holding a winning lottery ticket.  And, of course, anyone would expect that with wildly disproportionate supply and demand.

But, for us, this transcends just writing the code and oozes into learning about it.  Like baseball teams playing the long game, companies would rather grow their own talent than shell out for high-priced free agents.  And so learning about software might just prove more of a growth industry than writing it.

Continue reading Learning Programming with Hands on Projects

What Metrics Should the CIO See?

I’ve worked in the programming industry long enough to remember a less refined time.  During this time, the CIO (or CFO, since IT used to report to the CFO in many orgs) may have counted lines of code to measure the productivity of the development team.  Even then, they probably understood the folly of such an approach.  But, if they lacked better measures, they might use that one.

Today, you rarely, if ever, see that happen.  But don’t take that to mean reductionist measures have stopped.  Rather, they have just evolved.

Most commonly today, I see this crop up in the form of automated unit test coverage.  A CIO or high-level manager becomes aware of general quality and cadence problems with the software.  She may consult with someone or read a study and conclude that a robust, automated test suite will cure what ails her.  She then announces the initiative and rolls it out.  Then, she does the logical thing and instruments her team’s process so that she can track progress and improvement with the testing initiative.

The problem with this arises from what, specifically, the group measures and improves.  She wants to improve quality and predictability, so she implements a proxy solution.  She then measures people against that proxy.  And, often, they improve… against that proxy.
Continue reading What Metrics Should the CIO See?

Recovering from a Mission Critical Whiff

A career in software produces a handful of truly iconic moments.  First, you beam with pride the first time something you wrote works in production.  Then, you recoil in horror the first time you bring your team’s project to a screeching halt with a broken build or some sort of obliteration of the day’s source history.  And so it goes at the individual level.

But so it also goes at the team or department level, with diluted individual responsibility and higher stakes.  Everyone enjoys that first major launch party.  And, on the flip side, everyone shudders to recall their first death march.  But perhaps no moment produces as many hangdog looks and feelings as the collective, mission critical whiff.

I bet you can picture it.  Your group starts charging at an aggressive deadline, convinced you’ll get there.  The program or company has its skeptics, and you fall behind schedule, but you resolve to prove them wrong.  External stakes run high, but somehow your collective pride trumps even that.  At various points during the project, stakeholders offer a reprieve in the form of extensions, but you assure them you’ll get there.

It requires a lot of nights and weekends, and even some all-nighters in the run up to launch.  But somehow, you get there.  You ship your project with an exhausted feeling of pride.

And then all hell breaks loose.

Major bugs stream in.  The technical debt you knew you’d piled up comes due.  Customers get irate and laugh sardonically at the new shipment.  And, up and down the organizational ladder, people fume.  Uh oh.

How do you handle this?  What can you learn?
Continue reading Recovering from a Mission Critical Whiff

Adding Static Analysis to Your Team’s DNA

Stop me if this sounds familiar.  (Well, not literally.  I realize that asynchronous publication makes it hard for you to actually stop me as I type.  Indulge me the figure of speech.)  You work on a codebase for a long time, all the while having the foreboding sense of growing messiness.  One day, perhaps when you have a bit of extra time, you download a static analyzer to tell you “how bad.”

Then you have an experience like a holiday-time binge eater getting on a scale on January 1st.  As the tool crunches its results, you wince in anticipation.  Next, you get the results, get depressed, and then get busy correcting them.  Unlike shedding those holiday pounds, you can often fix the most egregious errors in your codebase in a matter of days.  So you make those fixes, pat yourself on the back, and forget all about the static analyzer, perhaps letting your trial expire or leaving it to sit on the shelf.

If you’re wondering how I got in your head, consider that I see this pattern in client shops frequently.  They regard static analysis as a one-time cleanup effort, to be implemented as a small project every now and then.  Then, they resolve to carry the learning forward to avoid making similar mistakes.  But, in a vacuum, they rarely do.
Continue reading Adding Static Analysis to Your Team’s DNA

New Year’s Resolutions for Code Quality

Perhaps more than any other holiday I can think of, New Year’s Day has specific traditions.  With other holidays, the traditions range all over the map.  While Christmas has trees, presents, rotund old men, and songs, New Year’s concerns itself primarily with fresh starts.

If you doubt this, look around during the first week of the year.  Armed with fresh resolutions, people swear off cigarettes and booze, flock to gyms, and find ways to spend less.  Since you don’t come to the NDepend blog for self help, I’ll forgo talking about that.  Instead, I’ll speak to some resolutions you should consider when it comes to code quality.  As you come to the office next week, fresh off of singing “Auld Lang Syne” and having champagne at midnight, think of changing your ways with regard to your code base.

Before we get into specifics though, let’s consider the context in which I talk about code quality.  Because I don’t drink from mason jars and have a 2 foot beard, I won’t counsel you to chase quality purely for the love of the craft.  That can easily result in diminishing returns on effort.  Instead, I refer to code quality in the business sense.  High quality code incurs a relatively low cost of change and generates few or no unexpected runtime behaviors.

So the question becomes, “what should I do in the new year to efficiently write predictable, maintainable code?”  Let’s take a look.

Continue reading New Year’s Resolutions for Code Quality