NDepend

Improve your .NET code quality with NDepend

Learning Programming with Hands on Projects

If you want a surefire way to make money, look for enormous disparity between demand and supply.  As software developers, we understand this implicitly.  When we open our inboxes in the morning, we see vacuous missives from recruiters.  “Hey, dudebro, we need a JavaScript ninja-rockstar like you!”

You don’t tend to see vaguely patronizing, unflinchingly desperate requests like that unless you sit on some kind of goldmine.  They approach us the way one might approach a mischievous toddler holding a winning lottery ticket.  And, of course, anyone would expect that with wildly disproportionate supply and demand.

But, for us, this transcends just writing the code and oozes into learning about it.  Like baseball teams playing the long game, companies would rather grow their own talent than shell out for high-priced free agents.  And so learning about software might just prove more of a growth industry than writing it.

Continue reading Learning Programming with Hands on Projects

What Metrics Should the CIO See?

I’ve worked in the programming industry long enough to remember a less refined time.  During this time, the CIO (or CFO, since IT used to report to the CFO in many orgs) may have counted lines of code to measure the productivity of the development team.  Even then, they probably understood the folly of such an approach.  But, if they lacked better measures, they might use that one.

Today, you rarely, if ever, see that happen any longer.  But don’t take that to mean reductionist measures have stopped.  Rather, they have just evolved.

Most commonly today, I see this crop up in the form of automated unit test coverage.  A CIO or high-level manager becomes aware of general quality and cadence problems with the software.  She may consult with someone or read a study and conclude that a robust, automated test suite will cure what ails her.  She then announces the initiative and rolls it out.  Then, she does the logical thing and instruments her team’s process so that she can track progress and improvement with the testing initiative.

The problem with this arises from what, specifically, the group measures and improves.  She wants to improve quality and predictability, so she implements a proxy solution.  She then measures people against that proxy.  And, often, they improve… against that proxy.
Continue reading What Metrics Should the CIO See?

Recovering from a Mission Critical Whiff

A career in software produces a handful of truly iconic moments.  First, you beam with pride the first time something you wrote works in production.  Then, you recoil in horror the first time you bring your team’s project to a screeching halt with a broken build or some sort of obliteration of the day’s source history.  And so it goes at the individual level.

But so it also goes at the team or department level, with diluted individual responsibility and higher stakes.  Everyone enjoys that first major launch party.  And, on the flip side, everyone shudders to recall their first death march.  But perhaps no moment produces as many hangdog looks and feelings as the collective, mission critical whiff.

I bet you can picture it.  Your group starts charging at an aggressive deadline, convinced you’ll get there.  The program or company has its skeptics, and you fall behind schedule, but you resolve to prove them wrong.  External stakes run high, but somehow your collective pride trumps even that.  At various points during the project, stakeholders offer a reprieve in the form of extensions, but you assure them you’ll get there.

It requires a lot of nights and weekends, and even some all-nighters in the run-up to launch.  But somehow, you get there.  You ship your project with an exhausted feeling of pride.

And then all hell breaks loose.

Major bugs stream in.  The technical debt you knew you’d piled up comes due.  Customers get irate and laugh sardonically at the new shipment.  And, up and down the organizational ladder, people fume.  Uh oh.

How do you handle this?  What can you learn?
Continue reading Recovering from a Mission Critical Whiff

The Relationship Between Team Size and Code Quality

Over the last few years, I’ve had the occasion to observe lots of software teams.  These teams come in all shapes and sizes, as the saying goes.  And, not surprisingly, they produce output that covers the entire spectrum of software quality.

It would hardly make headline news to cite team members’ collective skill level and training as a prominent factor in determining quality level.  But what else affects it?  Does team size?  Recently, I found myself pondering this during a bit of downtime ahead of a meeting.

Continue reading The Relationship Between Team Size and Code Quality

Adding Static Analysis to Your Team’s DNA

Stop me if this sounds familiar.  (Well, not literally.  I realize that asynchronous publication makes it hard for you to actually stop me as I type.  Indulge me the figure of speech.)  You work on a codebase for a long time, all the while having the foreboding sense of growing messiness.  One day, perhaps when you have a bit of extra time, you download a static analyzer to tell you “how bad.”

Then you have an experience like a holiday-time binge eater getting on a scale on January 1st.  As the tool crunches its results, you wince in anticipation.  Next, you get the results, get depressed, and then get busy correcting them.  Unlike shedding those holiday pounds, though, fixing the most egregious errors in your codebase often takes only a matter of days.  So you make those fixes, pat yourself on the back, and forget all about the static analyzer, perhaps letting your trial expire or leaving it to sit on the shelf.

If you’re wondering how I got in your head, consider that I see this pattern in client shops frequently.  They regard static analysis as a one-time cleanup effort, to be implemented as a small project every now and then.  Then, they resolve to carry the learning forward to avoid making similar mistakes.  But, in a vacuum, they rarely do.
Continue reading Adding Static Analysis to Your Team’s DNA

The Best Christmas Present to Give Your Developers

When Christmas time arrives, it comes with the need to buy gifts, eat too much food, and attend various gatherings.

All of that comes awkwardly together each year in the form of the company Christmas party.  Everyone heads to some local steakhouse for high-end food, Bob from accounting having one too many, and some kind of gift exchange.  If you’re a dev manager and yours is the sort of organization where managers present their direct reports with Christmas presents, you probably wonder what to get them.  Since you know they like techie things, should you get them a Raspberry Pi or something?  What if they don’t like Linux?  A drone, maybe?  One of those Alexa things?

Personally, I’d advise you to do something a little different this year.  Instead of a tchotchke or a gift card, give them the gift of trust.

Now, I know what you’re thinking.  Not only did I just propose something insanely hokey, but even if you wanted to do it, you can’t exactly put “trust” in a holiday print box and hand it out between speeches and dessert.

Obviously, I don’t mean to suggest that you should just say, “Merry Christmas, I trust you, and isn’t that really the greatest gift of all?”  Rather, you should give them a gift that demonstrates you trust them.  I’ll explore that a bit further in this post.

Continue reading The Best Christmas Present to Give Your Developers

New Year’s Resolutions for Code Quality

Perhaps more than any other holiday I can think of, New Year’s Day has specific traditions.  With other holidays, traditions range all over the map.  While Christmas has trees, presents, rotund old men, and songs, New Year’s concerns itself primarily with fresh starts.

If you doubt this, look around during the first week of the year.  Armed with fresh resolutions, people swear off cigarettes and booze, flock to gyms, and find ways to spend less.  Since you don’t come to the NDepend blog for self-help, I’ll forgo talking about that.  Instead, I’ll speak to some resolutions you should consider when it comes to code quality.  As you come to the office next week, fresh off of singing “Auld Lang Syne” and having champagne at midnight, think of changing your ways with regard to your code base.

Before we get into specifics though, let’s consider the context in which I talk about code quality.  Because I don’t drink from mason jars and have a two-foot beard, I won’t counsel you to chase quality purely for the love of the craft.  That can easily result in diminishing returns on effort.  Instead, I refer to code quality in the business sense.  High-quality code incurs a relatively low cost of change and generates few or no unexpected runtime behaviors.

So the question becomes, “what should I do in the new year to efficiently write predictable, maintainable code?”  Let’s take a look.

Continue reading New Year’s Resolutions for Code Quality

Detecting Performance Bottlenecks with NDepend

In the past, I’ve talked about the nature of static code analysis.  Specifically, static analysis involves analyzing programs’ source code without actually executing them.  Contrast this with runtime analysis, which offers observations of runtime behavior, via introspection or other means.  This creates an interesting dynamic when it comes to detecting performance bottlenecks with static analysis, because performance is inherently a runtime concern.  Static analysis tends to do its best, most direct work with source code considerations.  It requires a more indirect route to predict runtime issues.

For example, consider something simple.
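
Picture something like the following method.  Consider it a minimal sketch, in which the “theService” field and the surrounding types (IOrderService, Order) exist purely for illustration:

public class Order { }

public interface IOrderService
{
    void Submit(Order order);
}

public class OrderProcessor
{
    private IOrderService theService;

    public void Process(Order order)
    {
        // theService gets dereferenced here with no preceding null check,
        // which a static analyzer can spot without ever running the code.
        theService.Submit(order);
    }
}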

With a static analyzer, we can easily look at this method and say, “you’re dereferencing ‘theService’ without a null check.”  However, it gets a lot harder to talk definitively about runtime behavior.  Will this method ever generate an exception?  We can’t know that with only the information present.  Maybe the only call to this in the entire codebase happens right after instantiating a service.  Maybe no one ever calls it.

Today, I’d like to talk about using NDepend to sniff out possible performance issues.  But my use of “possible” carries significant weight, because “definitive” gets difficult.  You can use NDepend to inform reasoning about your code’s performance, but you should do so with an eye to probabilities.

That said, how can you use NDepend to identify possible performance woes in your code?  Let’s take a look at some ideas.
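
For a flavor of those ideas, here is the kind of CQLinq query I have in mind: one that flags long, deeply nested methods as candidates for closer profiling.  Treat it as a rough sketch; the thresholds are arbitrary choices of mine, not NDepend defaults.

// Candidate hot spots: long methods with deep nesting (often loops within loops).
warnif count > 0
from m in Application.Methods
where m.NbLinesOfCode > 30 && m.ILNestingDepth >= 3
select new { m, m.NbLinesOfCode, m.ILNestingDepth }

A match proves nothing about runtime cost, but it does tell you where pointing a profiler will most likely pay off.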

Continue reading Detecting Performance Bottlenecks with NDepend

How Much Code Should My Developers Be Responsible For?

As I work with more and more organizations, my compiled list of interesting questions grows.  Seriously – I have quite the backlog.  And I don’t mean interesting in the pejorative sense.  You know – the way you say, “oh, that’s… interesting” after some drunken family member rants about their political views.

Rather, these questions interest me at a philosophical level.  They make me wonder about things I never might have pondered.  Today, I’ll pull one out and dust it off.  A client asked me this once, a while back.  They were wondering, “how much code should my developers be responsible for?”

Why ask about this?  Well, they had a laudable enough goal.  They had a fairly hefty legacy codebase and didn’t want to overtax the folks working on it.  “We know our codebase has X lines of code, so how many developers comprise an ideally staffed team?”

In a data-driven way, they asked a great question.  And yet, the reasoning falls apart on closer inspection.  I’ll speak today about why that happens.  Here are some problems with this thinking.

Continue reading How Much Code Should My Developers Be Responsible For?

How to Scale Your Static Analysis Tooling

If you wander the halls of a large company with a large software development organization, you will find plenty of examples of practice and process at scale.  When you see this sort of thing, it has generally come about in one of two ways.  First, the company piloted a new practice with a team or two and then scaled it from there.  Or, second, the development organization started the practice when it was small and grew it as the department grew.

But what about “rolled it out all at once?”  Nah, (mercifully) not so much.  “Let’s take this thing we’ve never tried before, deploy it in an expensive rollout, and assume all will go well.”  Does that sound like the kind of plan executives with career concerns sign off on?  Would you sign off on it?  Even the pointiest-haired of managers would feel gun-shy.

When it comes to scaling a static analysis practice, you will find no exception.  Invariably, organizations grow the practice as they grow, or they pilot it and then scale it up.  And that begs the question: how?

Two main areas of concern come to mind: technical and human.  You probably think I’ll spend most of the post talking technical, don’t you?  Nope.  First of all, too many tools, setups, and variations exist for me to scratch the surface.  But secondly, and more importantly, a key person that I’ll mention below will take the lead for you on this.

Instead, I’ll focus on the human element.  Or, more specifically, I will focus on the process for scaling your static analysis — a process involving humans.

Continue reading How to Scale Your Static Analysis Tooling