NDepend

Improve your .NET code quality with NDepend

The Best Christmas Present to Give Your Developers

When Christmas time arrives, it comes with the need to buy gifts, eat too much food, and attend various gatherings.

All of that comes awkwardly together each year in the form of the company Christmas party.  Everyone heads to some local steakhouse for high-end food, Bob from accounting has one too many, and some kind of gift exchange takes place.  If you’re a dev manager and yours is the sort of organization where managers present their direct reports with Christmas presents, you probably wonder what to get them.  Since you know they like techie things, should you get them a Raspberry Pi or something?  What if they don’t like Linux?  A drone, maybe?  One of those Alexa things?

Personally, I’d advise you to do something a little different this year.  Instead of a tchotchke or a gift card, give them the gift of trust.

Now, I know what you’re thinking.  Not only did I just propose something insanely hokey, but even if you wanted to do it, you can’t exactly put “trust” in a holiday print box and hand it out between speeches and dessert.

Obviously, I don’t mean to suggest that you should just say, “Merry Christmas, I trust you, and isn’t that really the greatest gift of all?”  Rather, you should give them a gift that demonstrates you trust them.  I’ll explore that a bit further in this post.

Continue reading The Best Christmas Present to Give Your Developers

New Year’s Resolutions for Code Quality

Perhaps more than any other holiday I can think of, New Year’s Day has specific traditions.  With other holidays, they range all over the map.  While Christmas has trees, presents, rotund old men, and songs, New Year’s concerns itself primarily with fresh starts.

If you doubt this, look around during the first week of the year.  Armed with fresh resolutions, people swear off cigarettes and booze, flock to gyms, and find ways to spend less.  Since you don’t come to the NDepend blog for self-help, I’ll forgo talking about that.  Instead, I’ll speak to some resolutions you should consider when it comes to code quality.  As you come to the office next week, fresh off of singing “Auld Lang Syne” and having champagne at midnight, think of changing your ways with regard to your codebase.

Before we get into specifics, though, let’s consider the context in which I talk about code quality.  Because I don’t drink from mason jars and have a 2-foot beard, I won’t counsel you to chase quality purely for the love of the craft.  That can easily result in diminishing returns on effort.  Instead, I refer to code quality in the business sense.  High-quality code incurs a relatively low cost of change and generates few or no unexpected runtime behaviors.

So the question becomes, “what should I do in the new year to efficiently write predictable, maintainable code?”  Let’s take a look.

Continue reading New Year’s Resolutions for Code Quality

Detecting Performance Bottlenecks with NDepend

In the past, I’ve talked about the nature of static code analysis.  Specifically, static analysis involves analyzing programs’ source code without actually executing them.  Contrast this with runtime analysis, which observes runtime behavior via introspection or other means.  This creates an interesting dynamic when it comes to detecting performance bottlenecks with static analysis, because performance is inherently a runtime concern.  Static analysis does its best, most direct work with source code considerations; it has to take a more indirect route to predict runtime issues.

For example, consider something simple.
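
Picture, say, a little method like the following.  (This is an illustrative sketch of the kind of code in question; the ICustomerService type, the theService field, and the NotifyAll call are hypothetical names, not anything from a real codebase.)

public interface ICustomerService
{
    void NotifyAll();
}

public class OrderProcessor
{
    private readonly ICustomerService theService;

    // No guard clause here, so a caller could pass null.
    public OrderProcessor(ICustomerService service)
    {
        theService = service;
    }

    public void NotifyCustomers()
    {
        // A static analyzer can flag this line: theService gets dereferenced
        // without a null check.  Whether it ever actually throws depends
        // entirely on what callers pass to the constructor.
        theService.NotifyAll();
    }
}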

With a static analyzer, we can easily look at this method and say, “you’re dereferencing ‘theService’ without a null check.”  However, it gets a lot harder to talk definitively about runtime behavior.  Will this method ever generate an exception?  We can’t know that with only the information present.  Maybe the only call to this in the entire codebase happens right after instantiating a service.  Maybe no one ever calls it.

Today, I’d like to talk about using NDepend to sniff out possible performance issues.  But my use of “possible” carries significant weight, because “definitive” gets difficult.  You can use NDepend to inform reasoning about your code’s performance, but you should do so with an eye to probabilities.

That said, how can you use NDepend to identify possible performance woes in your code?  Let’s take a look at some ideas.
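
To give a flavor of what that reasoning looks like, here is a minimal sketch of a CQLinq query that surfaces candidates for profiling.  It assumes NDepend’s CyclomaticComplexity and ILNestingDepth method metrics, and the thresholds are arbitrary values I picked for illustration, not a definitive rule:

// <Name>Possible hot spots worth profiling</Name>
warnif count > 0
from m in JustMyCode.Methods
where m.CyclomaticComplexity > 20 && m.ILNestingDepth > 4
orderby m.CyclomaticComplexity descending
select new { m, m.CyclomaticComplexity, m.ILNestingDepth }

Nothing in that result set proves a bottleneck.  It just tells you where a profiler run would most likely pay off.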

Continue reading Detecting Performance Bottlenecks with NDepend


How Much Code Should My Developers Be Responsible For?

As I work with more and more organizations, my compiled list of interesting questions grows.  Seriously – I have quite the backlog.  And I don’t mean interesting in the pejorative sense.  You know – the way you say, “oh, that’s… interesting” after some drunken family member rants about their political views.

Rather, these questions interest me at a philosophical level.  They make me wonder about things I never might have pondered.  Today, I’ll pull one out and dust it off.  A client asked me this once, a while back.  They were wondering, “how much code should my developers be responsible for?”

Why ask about this?  Well, they had a laudable enough goal.  They had a fairly hefty legacy codebase and didn’t want to overtax the folks working on it.  “We know our codebase has X lines of code, so how many developers comprise an ideally staffed team?”

In a data-driven way, they asked a great question.  And yet, the reasoning falls apart on closer inspection.  I’ll speak today about why that happens.  Here are some problems with this thinking.

Continue reading How Much Code Should My Developers Be Responsible For?


How to Scale Your Static Analysis Tooling

If you wander the halls of a large company with a large software development organization, you will find plenty of examples of practice and process at scale.  When you see this sort of thing, it has generally come about in one of two ways.  First, the company piloted a new practice with a team or two and then scaled it from there.  Or, second, the development organization started the practice when it was small and grew it as the department grew.

But what about “rolled it out all at once?”  Nah, (mercifully) not so much.  “Let’s take this thing we’ve never tried before, deploy it in an expensive rollout, and assume all will go well.”  Does that sound like the kind of plan executives with career concerns sign off on?  Would you sign off on it?  Even the pointiest-haired of managers would feel gun-shy.

When it comes to scaling a static analysis practice, you will find no exception.  Invariably, organizations grow the practice as they grow, or they pilot it and then scale it up.  And that raises the question of “how?” when it comes to scaling static analysis.

Two main areas of concern come to mind: technical and human.  You probably think I’ll spend most of the post talking technical, don’t you?  Nope.  First of all, too many tools, setups, and variations exist for me to do more than scratch the surface.  But secondly, and more importantly, a key person that I’ll mention below will take the lead for you on this.

Instead, I’ll focus on the human element.  Or, more specifically, I will focus on the process for scaling your static analysis — a process involving humans.

Continue reading How to Scale Your Static Analysis Tooling

Alternatives to Lines of Code (LOC)

It amazes me that in 2016, I still hear the occasional story of some software team manager measuring developer productivity by committed lines of code (LOC) per day.  In fact, this reminds me of hearing about measles outbreaks.  That this still takes place shocks me and creates an intense sense of anachronism.

I don’t have an original source, but Bill Gates is reputed to have offered pithy insight on this topic.  “Measuring programming progress by lines of code is like measuring aircraft building progress by weight.”  This cuts right to the point that “more and faster” does not equal “fit for purpose.”  You can write an awful lot of code without any of it proving useful.

Before heading too far down the management criticism rabbit hole, let’s pull back a bit.  Let’s take a look at why LOC represents such an attractive nuisance for management.

For many managers, years have passed since their days of slinging code (if those days ever existed in the first place).  So this puts them in the unenviable position of managing something relatively opaque to them.  And opacity runs afoul of the standard management playbook, wherein they take responsibility for evaluating performance, forecasting, and establishing metric-based incentives.

The Attraction of Lines of Code

Let’s consider a study in contrasts.  Imagine that you took a job managing a team of ditch diggers.  Each day you could stand there with your clipboard, evaluating visible progress and performance.  The diggers that moved the most dirt per hour would represent your superstars and the ones that tired easily and took many breaks would represent the laggards.  You could forecast milestones by observing yards dug per day and then extrapolating that over the course of days, weeks, and months.  Your reports up to your superiors practically write themselves.

But now let’s change the game a bit.  Imagine that all ditches were dug purely underground and that you had to remain on the surface at all times.  Suddenly, accounts of progress, metrics, and performance all come indirectly.  You need to rely on anecdotes from your team about one another to understand performance.  And you only know whether or not you’ve hit a milestone on the day that water either starts draining or stays where it is.

If you found yourself in this position suddenly, wouldn’t you cling to any semblance of measurability as if it were a life preserver?  Even if you knew it was reductionist, wouldn’t you cling?  Even if you knew it might mislead you?  Such is the plight of the dev manager.

In their world of opacity, lines of code represents something concrete and tangible.  It offers the promise of making their job substantially more approachable.  And so in order to take it away, we need to offer them something else instead.

Continue reading Alternatives to Lines of Code (LOC)

The Fastest Way to Get to Know NDepend

I confess to a certain level of avoidance when it comes to tackling something new.  If pressed for introspection, I think I do this because I can’t envision a direct path to success.  Instead, I see where I am now, the eventual goal, and a big uncertain cloud of stuff in the middle.  So I procrastinate by finding other things that need doing.

Sooner or later, however, I need to put this aside and get down to business.  For me, this usually means breaking the problem into smaller problems, identifying manageable next actions, and tackling those.  Once things become concrete, I can move methodically.  (As an aside, this is one of many reasons that I love test-driven development — it forces this behavior.)

When dealing with a new product or utility that I have acquired, this generally means carving out a path toward some objective and then executing.  For instance, “learn Ruby” as a goal would leave me floundering.  But “use Ruby to build a service that extracts data via API X” would result in a series of smaller goals and actions.  And I would learn via those goals.

For NDepend, I have a recommendation along these lines.  Let’s use the tool to help you visualize the reality of your codebase better than anyone around you.  In doing this, you will get to know NDepend quickly and without feeling overwhelmed.

Continue reading The Fastest Way to Get to Know NDepend


Concreteness: Entering the Zone of Pain

Years ago, when I first downloaded a trial of NDepend, I chuckled when I saw the “Abstractness vs. Instability” graph.  The concept itself does not amuse, obviously.  Rather, the labels for the corners of the graph provide the levity: “zone of uselessness” and “zone of pain.”

When you run NDepend analysis and reporting on your codebase, it generates this graph.  You can then see whether or not each of your assemblies falls within one of these two dubious zones.  No doubt people with NDepend experience can recall seeing a particularly hairy assembly depicted in the zone of pain and thinking, “I knew it!”

But whether you have experienced this or not, you should stop to consider what it means to enter the zone of pain.  The term amuses, but it also informs.  Yes, these assemblies will tend to annoy developers.  But they also create expensive, risky churn inside of your applications and raise the cost of ownership of the codebase.

Because this presents a real problem, let’s take a look at what, exactly, lands you in the zone of pain and how to recover.
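
As a quick refresher on where the graph comes from: NDepend computes Robert C. Martin’s package metrics per assembly, and the formulas below are just the textbook definitions.

Instability: I = Ce / (Ca + Ce), where Ce counts the types this assembly depends on and Ca counts the types that depend on it.
Abstractness: A = abstract types (including interfaces) / total types.
Distance from the main sequence: D = |A + I - 1|.

The zone of pain sits near A = 0 and I = 0: an assembly full of concrete implementation detail that plenty of other code depends on, which means every change to it ripples outward.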

Continue reading Concreteness: Entering the Zone of Pain


How to Prioritize Bugs on Your To-Do List

People frequently ask me questions about code quality.  People also frequently ask me questions about efficiency and productivity.  But it seems we rarely wind up talking about the two together.  How can you most efficiently improve quality via the fixing of bugs?  Or, more specifically, how should you prioritize bugs?

Let me be clear about something up front.  I’m not going to offer you some kind of grand unified scheme of bug prioritization.  If I tried, the attempt would come off as utterly quixotic.  Because software shops, roles, and offerings vary so widely, I cannot address every possible situation.

Instead, I will offer a few different philosophies of prioritization, leaving the execution mechanics up to you.  These should cover most common scenarios that software developers and project managers will encounter.

Continue reading How to Prioritize Bugs on Your To-Do List

Making Devs, Architects, and Managers Happy with the Static Analysis Tool

Organizations come to possess a static analysis tool in a variety of ways.  In some cases, management decides on some kind of quality initiative and buys the tool to make it so.  Or, perhaps some enterprising developers sold management on the tool and now, here it is.  Maybe it came out from the architecture group’s budget.

But however the tool arrives, it will surprise people.  In fact, it will probably surprise most people.  And surprises in the corporate context can easily go over like a lead balloon.  So the question becomes, “how do we make sure everyone feels happy with the new tool?”

A Question of Motivation

Before we can discuss what makes people happy, we need to look first at what motivates them.  Not all folks in the org will share the same motivation — motivations generally vary by role.  The specific case of static analysis presents no exception to this general rule.

When it comes to static analysis, management has the simplest motivation.  Managers lack the development team’s insights into the codebase.  Instead, they perceive it only in terms of qualitative outcomes and lagging indicators.  Static analysis offers them a unique opportunity to see leading indicators and plan for issues around code quality.

Architects tend to view static analysis tools as empowering for them, once they get over any initial discomfort. Frequently, they define patterns and practices for teams, and static analysis makes these much easier to enforce.  Of course, the tool might also threaten the architects, should its guidance be at odds with their historical proclamations about developer practice.

For developers, complexity around motivation grows.  They may share the architects’ feelings of threat, should the tool disagree with historical practice.  But they may also feel empowered or vindicated, should it give them a voice for a minority opinion.  The tool may also make them feel micromanaged if used to judge them, or empowered if it affords them the opportunity to learn.

Continue reading Making Devs, Architects, and Managers Happy with the Static Analysis Tool