NDepend

Improve your .NET code quality with NDepend

Are Code Rules Meant to Be Broken?

If you’ve never seen the movie Footloose, I can’t honestly say I recommend it.  If your tastes run similarly to mine, you’ll find it somewhat over the top.

A boy from the big city moves to a quiet country town.  Once there, he finds that the town council, filled with local curmudgeons, has outlawed rock music and dancing.  So follows a predictable sequence of events as the boy tries to win his new town over and to convince them of the importance of free expression.  You can probably hear his voice saying, “come on, Mr. Uptighterton, rules are made to be broken!”

Today, I’d like to explore the theme of rules and breaking them a bit.  But I’ll move it from a boy teaching the people from American Gothic to dance into the software development shop and the rules around a codebase.

Perhaps you’ve experienced something similar, and comically oppressive, in your travels.  A power-mad architect with a crazy inheritance framework.  A team lead who lectures endlessly about the finer points of Hungarian notation.  Maybe you’ve wanted to grab your fellow team members by the shirt collars, shake them, and shout, “go on, leave the trailing underscore off the class field name!”

If so, then I sympathize and empathize.  Soul crushing shops do exist, seeking to break the spirits of all working there.  In such places, rule breaking might help if only to shake people out of learned helplessness and depression.  But I’m going to examine some relatively normal situations and explore the role of rules for a software team.

Continue reading Are Code Rules Meant to Be Broken?

Things Everyone Forgets Before Committing Code

Committing code involves, in a dramatic sense, two universes colliding.  Firstly, you have the universe of your own work and metaphorical workbench.  You’ve worked for some amount of time on your code, hopefully in a state of flow.  And secondly, you have the universe of the team’s communal work product.  And so when you commit, you force these universes together by foisting your recent work on the team.

In bygone years, this created far more heartburn for the average team than it does today.  Barbaric as it may seem, I can actually remember a time when some professional software developers didn’t use source control.  A “commit” thus involved literally overwriting a file on a shared drive, obliterating all trace of the previous version.  (Sometimes, you might create a backup copy of the folder).  Here, your universe actually kind of ate the team’s communal universe.

More Frequent Commits, Fewer Problems

But, even in the earliest days of my career, lack of source control represented sloppy process.  I remember instituting the practice in situations that lacked it.  But even with source control in place, people tended to go off and code in their own world for weeks or even months during feature development.  Only when release time neared did they start to have what the industry affectionately calls “merge parties,” wherein the team would spend days or weeks sorting out all of the instances where their changes trampled one another’s.

In the intervening years, the industry has learned the wisdom of continuous integration (CI).  CI builds on the premise, “if it hurts, do it more,” by encouraging frequent, lower-stakes commits.  These days, most teams commit on the order of hours, rather than weeks or months.  This significantly lowers the onerousness of universes colliding.

But it doesn’t eliminate the problem altogether, even in teams that live the CI dream.  No matter how frequently you do it and how sophisticated the workflows around modern source control have become, you still have the basic problem of putting your stuff into the team’s universe.  And this comes with the metaphorical risk of leaving your tools lying around where someone can trip over them.

So today, let’s take a look at some of the most common things everyone forgets before committing code.  And, for the purposes of the post, I’ll remain source control agnostic, with the parlance “commit” meaning generally to sync your files with the team’s.

Continue reading Things Everyone Forgets Before Committing Code

How to Evaluate Your Static Analysis Process

I often get inquiries from clients and prospects about setting up and operationalizing static analysis.  This makes sense.  After all, we live in a world short on time and with software developers in great demand.  These clients always seem to have more to do than bandwidth allows.  And static analysis effectively automates subtle but important considerations in software development.

Specifically, it automates peer review to a certain extent.  The static analyzer acts as a non-judging, mute reviewer of sorts.  It also stands in for a tiny bit of QA’s job, calling attention to possible issues before they leave the team’s environment.  And, finally, it helps you out by acting as architect.  Team members can learn from the tool’s guidance.

So, as I’ve said, receiving setup inquiries doesn’t surprise me.  And I applaud these clients for pursuing this path of improvement.

What does surprise me, however, is how few organizations seem to ask another, related question.  They rarely ask for feedback about the efficacy of their currently implemented process.  Many organizations seem to consider static analysis implementation a checkbox kind of activity.  Have you done it?  Check.  Good.

So today, I’ll talk about checking in on an existing static analysis implementation.  How should you evaluate your static analysis process?

Continue reading How to Evaluate Your Static Analysis Process

Pulling Your Team Through a Project Crunch

Society dictates, for the most part, that childhood serves as a dress rehearsal for adulthood.  Sure, we go to school and learn to read, write, and ‘rithmetic, but we also learn life lessons.  And these lessons come during a time when we can learn mostly consequence-free.

During these formative years, pretty much all of us learn about procrastination.  More specifically, we learn that procrastination feels great.  But then, perhaps a week later, we learn that procrastination actually feels awful. Our young brains learn a lesson about trade-offs.  Despair.com captures this with a delightfully cynical aphorism: “hard work often pays off after time, but laziness always pays off now.”

Continue reading Pulling Your Team Through a Project Crunch

What DevOps Means for Static Analysis

For most of my career, software development has, in a very specific way, resembled mailing a letter.  You write the thing, and then you go through the standard mail piece rigmarole.  This involves putting it into an envelope, addressing the envelope, putting a stamp on it, and then walking it over to the mailbox.  From there, you stuff it in.

At this point, you might as well have dropped the thing into some kind of rip in space-time for all you understand what comes next.  Off it goes into the ether, and you hope that it arrives at its destination through some kind of logistical magic.  So it has generally gone with software.

Continue reading What DevOps Means for Static Analysis

Why Expert Developers Still Make Mistakes

When pressed, I bet you can think of an interesting dichotomy in the software world.  On the one hand, we programmers seem an extraordinarily helpful bunch.  You can generally picture us going to user groups, conferences, and hackathons to help one another.  We blog, record videos, and help people out on Twitter.

But then, we also seem to tear each other apart.  Have you ever hesitated before posting something on Stack Overflow?  Have you worried that you’ll miss some arcane piece of protocol, or else that you’ve asked a stupid question?  Or, spreading our field of vision a little wider, have you ever seen nasty comment sections and ferocious arguments?

We programmers love to help each other… and we also like to rip each other to shreds.  What gives?

Reconciling the Paradoxical

Of course, I need to start by pointing out that “the programming world” consists of many, many human beings.  These people have personalities and motivations as diverse as humanity in general.  So naturally, contradictory behavioral tendencies in the population group can exist.

But let’s set that aside for a moment.  Instead, let’s try to squish the programming community into a single (if way over-generalized) human being.  How can this person be so helpful, but also so… rude?

The answer lies in understanding the protocol of helping.  The person presenting the help is an expert.  Experts enjoy explaining, teaching, offering opinions, and generally helping.  But you’d also better listen up the first time, pay attention to the rules, and not waste their time.  Or they’ll let you hear about it.

In the programming community, we gravitate toward conceptual, meritocratic ladder ranking.  Expert thus becomes hard-won, carefully guarded status in the community.  Show any sign of weakness, and you might worry that you’ll fall down a few rungs on the ladder.

But We Still Make Mistakes

And yet, however expert, we still make mistakes.  Of course, nobody would deny that.  Go up to literally anyone in the field, ask, “do you ever make mistakes,” and you’ll hear “of course” or at least a tepid, “every now and then.”  But a difference exists between making mistakes in the hypothetical and making a specific mistake in the moment.

As a result, many in the field try to exude an air of infallibility.  Most commonly, this manifests in the form of that guy that never, ever says “I don’t know.”  More generally, you can recognize it in the form of constant weighing in and holding forth on all sorts of topics.  In this field, we do tend to build up an impressive beachhead of knowledge — algorithm runtimes, design patterns, API calls, tips and tricks, etc.  Armed with that, we can take up residence in the expert’s chair.

But no matter how we try to disguise it, we inevitably make mistakes.  Perhaps we do something as simple as introducing a bug.  Or maybe we make a fundamentally bad call about some piece of architecture, costing lots of time and effort.  Big or small, though, it happens.  The interesting question is why?  If we log Malcolm Gladwell’s famous 10,000 hours of practice, and have heavy incentives to show no weakness, why do we still make mistakes?

Lapses in Concentration

Perhaps the simplest and most obvious cause, lapses in concentration will lead to mistakes.  This applies no matter who you are, how much you practice, or what you know.  This can happen in immediately obvious ways.  For instance, your mind might wander while doing relatively repetitive programming tasks, like updating giant reams of XML configuration or something.  Generally speaking, monotonous work creates a breeding ground for mistakes (which speaks to why doing such work is a smell for programmers).

But it goes beyond the most obvious as well.  Feeling tired or distracted can lead to concentration-lapse mistakes.  Interruptions and attempts to multi-task do the same.  I don’t care how much of a programming black belt you may be — trying to write code while half paying attention to a status call will lead to mistakes.

Imperfect or “Noisy” Information

Moving beyond simple mistakes, let’s look at a category that tends to lead to deeper errors.  I’m talking here about mistakes arising from flawed information.  To understand, consider an example near and dear to any programmer’s heart: bad or incomplete requirements.  If you take action based on erroneous information, mistakes happen.  Now you might argue, “that isn’t my mistake,” but I consider that hair splitting.  Other factors may contribute, but you still own that implementation if you created it.

But look beyond just bad information.  “Noisy” information creates problems as well.  If your business partners bury requirements or requests in the middle of lots of irrelevancies, this can distract as well.  For all of their best intentions, I see a lot of this happening in expansive requirements documents that try to cover every imaginable behavior of a not-yet-written system right up front.  You become lost in a sea of noise and you make mistakes.

These mistakes may come in simple forms, like missing buttons or incorrect behaviors.  But they can also prove fundamental.  If you learn at a later date that the system will actually only ever need one data store, you may have built a completely unnecessary data access layer abstraction.

Overconfidence or Not Enlisting Help

We’ve examined some causes that fall under “probably not your fault.”  Now let’s look at one that falls under, “probably your fault.”  I’m talking about unwarranted faith in your own decision-making.

As I mentioned earlier, in the giant ladder ranking of programmer meritocracy, “I don’t know” can knock you down a few rungs.  (I’ll leave it to the reader to evaluate whether this happens in actuality or only in our minds.)  This leads to a behavior wherein we may try to “wing it,” even in unfamiliar territory.

When we do this, we have no one but ourselves to blame for the inevitable mistakes.  On my own blog, DaedTech, I once gave a label to those who frequently posture and fail this way: expert beginners.  Of course, that label talks about someone of marginal competence, but even a bona fide expert can fall victim to too much self-assurance.  The field of programming presents such immense and complex surface area that you will always have blind spots.  Pretending you don’t leads to mistakes.

Inevitability

Let’s get a little more philosophical here.  I just mentioned that programming has too much ground for any one person to cover.  This forces a choice between admitting you need outside expertise and making mistakes.  But let’s expand on that even a little more.

Programming is knowledge work.  This means that we, as programmers, solve problems rather than perform any sort of repetitive labor.  Sure, you might write a handful of custom web apps that prove similar in nature.  But this is a far cry from the cookie-cutter nature of, say, assembly line work.  Even writing somewhat similar apps, all of our work involves solving problems that no one has yet solved.

And when you’re blazing a new trail, you will inevitably make mistakes.  It simply comes with the territory.

In Fact, You Should Make Mistakes

I’ll conclude by going even further than inevitability.  You should make mistakes.  In the first place, I think that a culture wherein we consider mistakes signs of weakness is counter-productive and asinine.  Having prominent experts say, “gosh, I really don’t know” encourages us all to do the same and it generally promotes a more productive culture.

But the benefit runs deeper.  I’ve heard it said that, if you’re not making mistakes, you’re probably not doing anything interesting.  And I firmly believe in that.  Pushing the envelope means making mistakes.  But, even beyond that, whether we make mistakes or not is less important than developing robust recovery mechanisms.  We should design software and systems not with an eye toward perfection, but with an eye toward minimizing the impact of mistakes.  After all, software is so fluid that today’s correctly functioning system becomes tomorrow’s ‘mistake’ when the requirements change.  So you might as well get good at recovering.

So, why do experts make mistakes?  Because we all do, and because our mistakes drive us forward when we learn from them.

How to Analyze a Static Analyzer

First things first.  I really wanted to call this post, “who will analyze the analyzer,” because I fancy myself clever.  This title would have mirrored the relatively famous Latin question from Satires, “who will guard the guards themselves?”  But I suspect that the confusion I’d cause with that title would outweigh any appreciation of my cleverness.

So, without any literary references whatsoever, I’ll talk about static analyzers.  More specifically, I’ll talk about how you should analyze them to determine fitness for your purpose.

Before I dive into that, however, let’s do a quick refresher on the definition of static analyzer.  This Stack Overflow question nails it pretty well, right at the beginning of the accepted answer.

Analyzing code without executing it. Generally used to find bugs or ensure conformance to coding guidelines.

Succinctly put, Aaron, and just so.  Most of what we do with code tends to be dynamic analysis.  Whether through automated tests or manual running of the program, we fire it up and see what happens.  Static analyzers, on the other hand, look at the code and use it to make deductions.  These include both deductions about runtime behavior and about the codebase itself.
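To make those deductions concrete, here is a minimal sketch of the kind of rule a static analyzer evaluates, written in the CQLinq style that NDepend uses.  Treat the threshold and the exact property names as illustrative assumptions rather than a canned rule.

    // Sketch of a CQLinq-style rule: flag overly complex methods.
    // The threshold of 20 is an illustrative assumption, not a recommendation.
    warnif count > 0
    from m in Application.Methods
    where m.CyclomaticComplexity > 20
    select new { m, m.CyclomaticComplexity }

Nothing executes here; the analyzer computes the metric from the code itself and reports each match as a warning.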

Continue reading How to Analyze a Static Analyzer

Static Analysis Issue Management Gets a Boost

Years ago, I led a team of software developers.  We owned an eclectic portfolio of software real estate.  It included some Winforms, Webforms, MVC, and even a bit of WPF sprinkled into the mix.  And, as with any eclectic neighborhood, the properties came in a variety of ages and states of repair.

Some of this code depended on a SQL Server database that had a, let’s just say, casual relationship with normalization.  Predictably, this caused maintenance struggles.  But, beyond that, it caused a credibility gap when we spoke to non-technical stakeholders.  “What do you mean you can’t give a definitive answer to how many sales we made last year?”  “Well,” I’d try to explain, “I can’t say for sure because the database doesn’t explicitly define the concept of a sale.”

Flummoxed by the mutual frustration, I tried something a bit different.  Since I couldn’t easily explain the casual, implied relationships in the database, I decided to do a show and tell.  First, I went out and found a static analyzer for database schema.  Then, I brought in some representative stakeholders and said, “watch this.”  With a flourish (okay, not really), I turned the analyzer loose on the schema.

While they didn’t grok my analogies, the tens of thousands of warnings and errors made an impression.  In fact, it sort of terrified them.  But this did bridge the credibility gap and show them that we all had some work to do.  Mission accomplished.

Static Analyzer Issues

I engaged in something of a relationship hack with my little ploy.  You see, I knew how this static analyzer would behave because I know how all of them tend to behave.  They earn their keep by carpet bombing your codebase with violations and warnings.  Out of the box, they overwhelm, and then they leave it to you to dial it back.  Truly, you can take this behavior to the bank.

So I knew that this creaky database would trigger thousands upon thousands of violations.  And then I just sat back waiting for the “magic” to happen.

I mention all of this to paint a picture of how static analyzers typically regard the concept of “issue.”  All categories of severity and priority generally roll up into this catch-all term, and it then refers to the itemized list of everything.  Your codebase has issues and it has lots of them.  This is how the tool earns its mindshare and keep — by proving how much it can surface, and then doing so.

Thus you might define the concept simply as “all that stuff the static analyzer finds.”
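In practice, you rarely want “everything” at once.  As a rough sketch in the same CQLinq style, slicing that catch-all list by severity might look like the query below; the Issues domain and Severity property are recalled from NDepend’s issue model, so treat the exact names as assumptions.

    // Sketch: narrow "all the stuff the analyzer finds" to the critical subset.
    // Domain and property names recalled from memory; treat as approximate.
    from i in Issues
    where i.Severity == Severity.Critical
    select new { i, i.Severity }

The point is less the exact query than the shift in mindset: from a monolithic pile of findings to a list you can filter, prioritize, and manage.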

Continue reading Static Analysis Issue Management Gets a Boost


How to Use NDepend When You Only Have One License

I remember my first exposure to NDepend.  Back then, I worked for a company that allocated software developers a budget for personal improvement.  Predictably, most people spent theirs on books, courses, and the like.  But not me.

You see, as soon as I discovered NDepend, I saw immense potential for my own career.  A static analyzer that helped with visualizations of the codebase?  This wouldn’t just help with code reviews.  It would actually make me better at software development.  I took that argument to my manager, and he agreed.  Next thing I knew, I had an officially licensed copy of NDepend.

While NDepend did, in fact, improve my chops, I don’t intend to create an entire post about that here.  Instead, I want to respond to an interesting question I heard recently.  In essence, “how can we get the most out of NDepend with only one license for the team?”  Having used my training budget to buy NDepend, I found myself in the position of having the sole license and wanting to spread the value.

Continue reading How to Use NDepend When You Only Have One License

Quality Gates with NDepend to Help You Fail Fast

I had this car once.  I loved the thing, but, before the end of its life, my wife and I had developed sort of a running joke about it.  Specifically, if you wanted to see the “check engine” light come on, take the thing on a road trip.  About 100 miles in, that light would come on.

The fog of memory has probably colored this tale somewhat.  I can’t imagine that this happened before literally every driving trip we took.  But it sure seems like it did.  I can vividly recall the feeling of “something’s wrong” when we’d come too far to reasonably turn back but still had most of the trip in front of us.

Against this backdrop, the wisdom of the software aphorism “fail fast” hits home.  Had the light come on as we sat in the driveway, about to leave, we’d have had options.  Take my wife’s car.  Go to the dealership on the way out of town to make sure we could safely drive.  Something.  But, 100 miles into the trip, those options narrowed to “just keep going and hope for the best.”

If you must fail, better to do so early.
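In static analysis terms, a quality gate plays the role of that check engine light going off in the driveway: a query whose result can fail the build before the trip starts.  Here is a rough sketch, again in CQLinq style, where the gate metadata comment and the failif syntax are recalled from NDepend’s documentation and should be treated as approximate.

    // <QualityGate Name="Critical Issues" Unit="issues" />
    // Sketch: fail the build outright if any critical issue exists.
    // Gate metadata and failif syntax recalled from docs; approximate.
    failif count > 0
    from i in Issues
    where i.Severity == Severity.Critical
    select i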
Continue reading Quality Gates with NDepend to Help You Fail Fast