NDepend

Improve your .NET code quality with NDepend


Exploring the Technical Debt In Your Codebase

Recently, I posted about how the new version of NDepend lets you compute tech debt.  In that post, I learned that I had earned a “B” out of the box.  With 40 minutes of time investment, I could make that an “A.”  Not too shabby!

In that same post, I also talked about the various settings in and around “debt settings.”  With debt settings, you can change units of debt (time, money), thresholds, and assumptions of working capacity.  For folks at the intersection of tech and business, this provides an invaluable way to communicate with the business.
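
To make that concrete, here is what the underlying arithmetic looks like when you express debt in money instead of time.  This is a minimal sketch, and every figure in it is an assumption of the sort you would supply in debt settings, not a default from the tool:

    using System;

    // Assumed working capacity and rate -- the kind of values debt settings
    // ask you for.  Nothing here comes from NDepend itself.
    const double hoursPerManDay = 8.0;   // assumed working capacity per day
    const double costPerHour = 75.0;     // assumed fully loaded hourly rate

    double debtInManDays = 12.5;         // hypothetical figure for a codebase

    double debtInDollars = debtInManDays * hoursPerManDay * costPerHour;
    Console.WriteLine($"{debtInManDays} man-days of debt = {debtInDollars:C0}");

Twelve and a half man-days becomes $7,500, and suddenly the business side has a number it can weigh against other priorities.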

But I really just scratched the surface with that mention.  You’re probably wondering what this looks like in more detail.  How does this interact with the NDepend features you already know and love?  

Well, today, I’d like to take a look at just that.

To start, let’s look at the queries and rules explorer in some detail.

Introducing Quality Gates

Take a look at this screenshot, and you’ll notice some renamed entries, some new entries, and some familiar ones.

In the past, “Code Smells” and “Code Regressions” had the names “Code Quality” and “Code Quality Regression,” respectively.  With that resolved, the true newcomers sit on top: Quality Gates and Hot Spots.  Let’s talk about quality gates.
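
Before we look at them in the explorer, it helps to know the shape of the thing.  A quality gate boils down to a CQLinq query that can pass or fail the analysis as a whole.  Here is an illustrative sketch; treat the metadata comment and property names as approximate, since the exact syntax can vary across NDepend versions:

    // <QualityGate Name="No Blocker Issues" Unit="issues" />
    failif count > 0
    from i in Issues
    where i.Severity == Severity.Blocker
    select i

The failif prefix is what separates a gate from an ordinary warnif rule: instead of merely flagging offenders, it can fail the analysis outright, which makes gates a natural fit for build pipelines.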

Continue reading Exploring the Technical Debt In Your Codebase

The One Thing Every Company Can Do to Reduce Technical Debt

The idea of technical debt has become ubiquitous in our industry.  It started as a metaphor to help business stakeholders understand the compounding cost of shortcuts in the code.  Then, from there, it grew to define perhaps the foundation of trade-offs in the tech world.

You’d find yourself hard pressed, these days, to find a software shop that has never heard of tech debt.  It seems that just about everyone can talk in the abstract about dragons looming in their code, portending an eventual reckoning.  “We need to do something about our tech debt,” has become the rallying cry for “we’re running before we walk.”

As with its fiscal counterpart, all other factors being equal, having less tech debt is better than having more.  Technical debt creates drag on the pace of new feature delivery until someone ‘repays’ it.  And so shops constantly grapple with the question, “how can we reduce our tech debt?”

I could easily write a post where I listed the 3 or 5 or 13 or whatever ways to reduce tech debt.  First, I’d tell you to reduce problematic coupling.  Then, I’d tell you to stop it with the global variables.  You get the idea.

But today, I want to do something a bit different.  I want to talk about the one thing that every company can do to reduce tech debt.  I consider it to be sort of a step zero.

Continue reading The One Thing Every Company Can Do to Reduce Technical Debt

Computing Technical Debt with NDepend

For years, I have struggled to articulate technical debt to non-technical stakeholders.  This struggle says something, given that technical debt makes an excellent metaphor in and of itself.

The concept explains that you incur a price for taking quality shortcuts in the code to get done quickly.  But you don’t just pay for those shortcuts with more work later; you accrue interest.  Save yourself an hour today with some copy pasta, and you’ll eventually pay for that decision with many hours down the road.
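
To put toy numbers to that claim (all of them assumptions for the sake of illustration):

    using System;

    // A toy model of technical debt interest.  Every figure is assumed.
    double hoursSavedToday = 1.0;      // the copy-paste shortcut
    double extraHoursPerChange = 0.5;  // cost of keeping both copies in sync
    int changesOverLifetime = 20;      // times that logic gets touched again

    double hoursRepaid = extraHoursPerChange * changesOverLifetime;
    Console.WriteLine($"Saved {hoursSavedToday}h today; repaid {hoursRepaid}h later.");

One hour saved, ten hours repaid.  The principal was cheap; the interest was not.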

So I say to interested, non-technical parties, “think of these shortcuts today as decisions upon which you pay interest down the line.”  They typically squint at me a little and say, “yeah, I get it.”  But I generally don’t think they get it.  At least, not fully.

Lack of Concreteness

I think the reason for this tends to come from a lack of actual units.  As a counterexample, think of explaining an auto loan to someone.  “I’m going to loan you $30,000 to buy a car.  With sales tax and interest factored in, you’ll pay me back over a 5-year period, and you’ll pay me about $36,000 in total.”  Explained this way, a consumer gets it.  “Oh, I see.  It’ll cost me about $6,000 if I want you to come up with that much cash on my behalf.”  They can make an informed value decision.

But that falls flat for a project manager in a codebase.  “Oh man, you don’t want us to squeeze this in by Friday.  We’ll have to do terrible, unspeakable things in the code!  We’ll create so much tech debt.”

“Uh, okay.  That sounds ominous.  What’s the cost?”

“What do you mean?  There’s tech debt!  It’ll be worse later when we fix it than if we do it correctly the first time.”

“Right, but how much worse?  How much more time?”

“Well, you can’t exactly put a number to it, but much worse!”

And so on and so forth.  I imagine that anyone reading can recall similar conversations from one end or the other (or maybe even both).  Technical debt provides a phenomenal metaphor in the abstract.  But when it comes to specifics, it tends to fizzle a bit.

Continue reading Computing Technical Debt with NDepend

Learning Programming with Hands on Projects

If you want a surefire way to make money, look for enormous disparity between demand and supply.  As software developers, we understand this implicitly.  When we open our inboxes in the morning, we see vacuous missives from recruiters.  “Hey, dudebro, we need a JavaScript ninja-rockstar like you!”

You don’t tend to see vaguely patronizing, unflinchingly desperate requests like that unless you sit on some kind of goldmine.  They approach us the way one might approach a mischievous toddler holding a winning lottery ticket.  And, of course, anyone would expect that with wildly disproportionate supply and demand.

But, for us, this transcends just writing the code and oozes into learning about it.  Like baseball teams playing the long game, companies would rather grow their own talent than shell out for high-priced free agents.  And so learning about software might just prove more of a growth industry than writing it.

Continue reading Learning Programming with Hands on Projects

What Metrics Should the CIO See?

I’ve worked in the programming industry long enough to remember a less refined time.  During this time, the CIO (or CFO, since IT used to report to the CFO in many orgs) may have counted lines of code to measure the productivity of the development team.  Even then, they probably understood the folly of such an approach.  But, if they lacked better measures, they might use that one.

Today, you rarely, if ever, see that happen.  But don’t take that to mean reductionist measures have stopped.  Rather, they have just evolved.

Most commonly today, I see this crop up in the form of automated unit test coverage.  A CIO or high-level manager becomes aware of general quality and cadence problems with the software.  She may consult with someone or read a study and conclude that a robust, automated test suite will cure what ails her.  She then announces the initiative and rolls it out.  Then, she does the logical thing and instruments her team’s process so that she can track progress and improvement with the testing initiative.

The problem with this arises from what, specifically, the group measures and improves.  She wants to improve quality and predictability, so she implements a proxy solution.  She then measures people against that proxy.  And, often, they improve… against that proxy.
Continue reading What Metrics Should the CIO See?

Recovering from a Mission Critical Whiff

A career in software produces a handful of truly iconic moments.  First, you beam with pride the first time something you wrote works in production.  Then, you recoil in horror the first time you bring your team’s project to a screeching halt with a broken build or some sort of obliteration of the day’s source history.  And so it goes at the individual level.

But so it also goes at the team or department level, with diluted individual responsibility and higher stakes.  Everyone enjoys that first major launch party.  And, on the flip side, everyone shudders to recall their first death march.  But perhaps no moment produces as many hangdog looks and feelings as the collective, mission critical whiff.

I bet you can picture it.  Your group starts charging at an aggressive deadline, convinced you’ll get there.  The program or company has its skeptics, and you fall behind schedule, but you resolve to prove them wrong.  External stakes run high, but somehow your collective pride trumps even that.  At various points during the project, stakeholders offer a reprieve in the form of extensions, but you assure them you’ll get there.

It requires a lot of nights and weekends, and even some all-nighters in the run-up to launch.  But somehow, you get there.  You ship your project with an exhausted feeling of pride.

And then all hell breaks loose.

Major bugs stream in.  The technical debt you knew you’d piled up comes due.  Customers get irate and laugh sardonically at the new shipment.  And, up and down the organizational ladder, people fume.  Uh oh.

How do you handle this?  What can you learn?
Continue reading Recovering from a Mission Critical Whiff

Adding Static Analysis to Your Team’s DNA

Stop me if this sounds familiar.  (Well, not literally.  I realize that asynchronous publication makes it hard for you to actually stop me as I type.  Indulge me the figure of speech.)  You work on a codebase for a long time, all the while having the foreboding sense of growing messiness.  One day, perhaps when you have a bit of extra time, you download a static analyzer to tell you “how bad.”

Then you have an experience like a holiday-time binge eater getting on a scale on January 1st.  As the tool crunches its results, you wince in anticipation.  Next, you get the results, get depressed, and then get busy correcting them.  Unlike shedding those holiday pounds, you can often fix the most egregious errors in your codebase in a matter of days.  So you make those fixes, pat yourself on the back, and forget all about the static analyzer, perhaps letting your trial expire or leaving it to sit on the shelf.

If you’re wondering how I got in your head, consider that I see this pattern in client shops frequently.  They regard static analysis as a one-time cleanup effort, to be implemented as a small project every now and then.  Then, they resolve to carry the learning forward to avoid making similar mistakes.  But, in a vacuum, they rarely do.
Continue reading Adding Static Analysis to Your Team’s DNA

New Year’s Resolutions for Code Quality

Perhaps more than any other holiday I can think of, New Year’s Day has specific traditions.  With other holidays, traditions range all over the map.  While Christmas has trees, presents, rotund old men, and songs, New Year’s concerns itself primarily with fresh starts.

If you doubt this, look around during the first week of the year.  Armed with fresh resolutions, people swear off cigarettes and booze, flock to gyms, and find ways to spend less.  Since you don’t come to the NDepend blog for self help, I’ll forgo talking about that.  Instead, I’ll speak to some resolutions you should consider when it comes to code quality.  As you come to the office next week, fresh off of singing “Auld Lang Syne” and having champagne at midnight, think of changing your ways with regard to your code base.

Before we get into specifics though, let’s consider the context in which I talk about code quality.  Because I don’t drink from mason jars and have a 2-foot beard, I won’t counsel you to chase quality purely for the love of the craft.  That can easily result in diminishing returns on effort.  Instead, I refer to code quality in the business sense.  High quality code incurs a relatively low cost of change and generates few or no unexpected runtime behaviors.

So the question becomes, “what should I do in the new year to efficiently write predictable, maintainable code?”  Let’s take a look.

Continue reading New Year’s Resolutions for Code Quality

Detecting Performance Bottlenecks with NDepend

In the past, I’ve talked about the nature of static code analysis.  Specifically, static analysis involves analyzing programs’ source code without actually executing them.  Contrast this with runtime analysis, which offers observations of runtime behavior, via introspection or other means.  This creates an interesting dynamic around detecting performance bottlenecks with static analysis, because performance is inherently a runtime concern.  Static analysis tends to do its best, most direct work with source code considerations.  It requires a more indirect route to predict runtime issues.

For example, consider something simple.
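
Something like this, say.  The snippet is a reconstruction on my part, and every name in it besides theService is assumed:

    public interface IGreetingService
    {
        string GetGreeting();
    }

    public class Greeter
    {
        private readonly IGreetingService theService;

        public Greeter(IGreetingService service) => theService = service;

        public string Greet()
        {
            // theService gets dereferenced here with no null check.
            return theService.GetGreeting();
        }
    }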

With a static analyzer, we can easily look at this method and say, “you’re dereferencing ‘theService’ without a null check.”  However, it gets a lot harder to talk definitively about runtime behavior.  Will this method ever generate an exception?  We can’t know that with only the information present.  Maybe the only call to this in the entire codebase happens right after instantiating a service.  Maybe no one ever calls it.

Today, I’d like to talk about using NDepend to sniff out possible performance issues.  But my use of ‘possible’ carries significant weight, because ‘definitive’ gets difficult.  You can use NDepend to inform reasoning about your code’s performance, but you should do so with an eye to probabilities.

That said, how can you use NDepend to identify possible performance woes in your code?  Let’s take a look at some ideas.
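
As one starting point, you might write a CQLinq query that surfaces plausible profiling candidates.  This is a heuristic sketch with arbitrary thresholds, not a definitive bottleneck detector:

    // Heuristic only: large, complex methods are candidates for profiling,
    // not proven bottlenecks.
    warnif count > 0
    from m in JustMyCode.Methods
    where m.NbLinesOfCode > 50 && m.CyclomaticComplexity > 15
    orderby m.CyclomaticComplexity descending
    select new { m, m.NbLinesOfCode, m.CyclomaticComplexity }

A method that trips this query deserves a look under a profiler; a method that passes it can still hide a quadratic loop.  Probabilities, not proof.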

Continue reading Detecting Performance Bottlenecks with NDepend


How Much Code Should My Developers Be Responsible For?

As I work with more and more organizations, my compiled list of interesting questions grows.  Seriously – I have quite the backlog.  And I don’t mean interesting in the pejorative sense.  You know – the way you say, “oh, that’s… interesting” after some drunken family member rants about their political views.

Rather, these questions interest me at a philosophical level.  They make me wonder about things I never might have pondered.  Today, I’ll pull one out and dust it off.  A client asked me this once, a while back.  They were wondering, “how much code should my developers be responsible for?”

Why ask about this?  Well, they had a laudable enough goal.  They had a fairly hefty legacy codebase and didn’t want to overtax the folks working on it.  “We know our codebase has X lines of code, so how many developers comprise an ideally staffed team?”

In a data-driven way, they asked a great question.  And yet, the reasoning falls apart on closer inspection.  I’ll speak today about why that happens.  Here are some problems with this thinking.

Continue reading How Much Code Should My Developers Be Responsible For?