Stop me if this sounds familiar. (Well, not literally. I realize that asynchronous publication makes it hard for you to actually stop me as I type. Indulge me the figure of speech.) You work on a codebase for a long time, all the while having the foreboding sense of growing messiness. One day, perhaps when you have a bit of extra time, you download a static analyzer to tell you “how bad.”
Then you have an experience like a holiday-time binge eater getting on a scale on January 1st. As the tool crunches its results, you wince in anticipation. Next, you get the results, get depressed, and then get busy correcting the issues it found. Unlike shedding those holiday pounds, you can often fix the most egregious errors in your codebase in a matter of days. So you make those fixes, pat yourself on the back, and forget all about the static analyzer, perhaps letting your trial expire or leaving it to sit on the shelf.
If you’re wondering how I got in your head, consider that I see this pattern in client shops frequently. They regard static analysis as a one-time cleanup effort, to be implemented as a small project every now and then. Then, they resolve to carry the learning forward to avoid making similar mistakes. But, in a vacuum, they rarely do.
Continue reading Adding Static Analysis to Your Team’s DNA
Perhaps more than any other holiday I can think of, New Year’s Day has specific traditions. Other holidays’ customs range all over the map; Christmas alone has trees, presents, rotund old men, and songs. New Year’s, by contrast, concerns itself primarily with fresh starts.
If you doubt this, look around during the first week of the year. Armed with fresh resolutions, people swear off cigarettes and booze, flock to gyms, and find ways to spend less. Since you don’t come to the NDepend blog for self-help, I’ll forgo talking about that. Instead, I’ll speak to some resolutions you should consider when it comes to code quality. As you come to the office next week, fresh off of singing “Auld Lang Syne” and having champagne at midnight, think of changing your ways with regard to your codebase.
Before we get into specifics, though, let’s consider the context in which I talk about code quality. Because I don’t drink from mason jars or have a two-foot beard, I won’t counsel you to chase quality purely for the love of the craft. That can easily result in diminishing returns on effort. Instead, I refer to code quality in the business sense: high-quality code incurs a relatively low cost of change and generates few or no unexpected runtime behaviors.
So the question becomes, “what should I do in the new year to efficiently write predictable, maintainable code?” Let’s take a look.
Continue reading New Year’s Resolutions for Code Quality
In the past, I’ve talked about the nature of static code analysis. Specifically, static analysis involves analyzing programs’ source code without actually executing them. Contrast this with runtime analysis, which offers observations of runtime behavior via introspection or other means. This creates an interesting dynamic regarding the idea of detecting performance bottlenecks with static analysis, because performance is inherently a runtime concern. Static analysis does its best, most direct work with source code considerations and requires a more indirect route to predict runtime issues.
For example, consider something simple.
public void DoSomething(SomeService theService)
{
    // Hypothetical body for illustration; note the unguarded dereference.
    theService.PerformWork();
}
With a static analyzer, we can easily look at this method and say, “you’re dereferencing ‘theService’ without a null check.” However, it gets a lot harder to talk definitively about runtime behavior. Will this method ever generate an exception? We can’t know that with only the information present. Maybe the only call to this in the entire codebase happens right after instantiating a service. Maybe no one ever calls it.
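To see why, consider a minimal sketch, assuming an invented `PerformWork` method and a `Program` harness, in which the flagged dereference can never actually throw:

```csharp
using System;

public class SomeService
{
    public void PerformWork() => Console.WriteLine("working");
}

public class Client
{
    // The analyzer flags this: 'theService' is dereferenced with no null check.
    public void DoSomething(SomeService theService) => theService.PerformWork();
}

public static class Program
{
    public static void Main()
    {
        // If this is the only call site in the entire codebase, the argument
        // can never be null, and the flagged issue never manifests at runtime.
        new Client().DoSomething(new SomeService());
    }
}
```

Static analysis sees the method in isolation; whether the warning represents a real defect depends on call sites it may never examine.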
Today, I’d like to talk about using NDepend to sniff out possible performance issues. But my use of “possible” carries significant weight, because being definitive gets difficult. You can use NDepend to inform reasoning about your code’s performance, but you should do so with an eye to probabilities.
That said, how can you use NDepend to identify possible performance woes in your code? Let’s take a look at some ideas.
Continue reading Detecting Performance Bottlenecks with NDepend
As I work with more and more organizations, my compiled list of interesting questions grows. Seriously – I have quite the backlog. And I don’t mean interesting in the pejorative sense. You know – the way you say, “oh, that’s… interesting” after some drunken family member rants about their political views.
Rather, these questions interest me at a philosophical level. They make me wonder about things I never might have pondered. Today, I’ll pull one out and dust it off. A client asked me this once, a while back. They were wondering, “how much code should my developers be responsible for?”
Why ask about this? Well, they had a laudable enough goal. They had a fairly hefty legacy codebase and didn’t want to overtax the folks working on it. “We know our codebase has X lines of code, so how many developers comprise an ideally staffed team?”
In a data-driven way, they asked a great question. And yet, the reasoning falls apart on closer inspection. I’ll speak today about why that happens. Here are some problems with this thinking.
Continue reading How Much Code Should My Developers Be Responsible For?
Years ago, when I first downloaded a trial of NDepend, I chuckled when I saw the “Abstractness vs. Instability” graph. The concept itself does not amuse, obviously. Rather, the labels for the corners of the graph provide the levity: “zone of uselessness” and “zone of pain.”
When you run NDepend analysis and reporting on your codebase, it generates this graph. You can then see whether or not each of your assemblies falls within one of these two dubious zones. No doubt people with NDepend experience can recall seeing a particularly hairy assembly depicted in the zone of pain and thinking, “I knew it!”
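As a refresher on the graph’s axes, which NDepend computes per assembly, the definitions come from Robert C. Martin’s package metrics (here Ca and Ce denote afferent and efferent coupling, and Na and Nc denote abstract and total type counts):

```
Instability   I = Ce / (Ca + Ce)
Abstractness  A = Na / Nc
Distance      D = |A + I - 1|    (distance from the "main sequence")
```

The zone of pain sits near A = 0, I = 0: concrete assemblies that many others depend on. The zone of uselessness occupies the opposite corner, near A = 1, I = 1: abstractions that nothing depends on.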
But whether you have experienced this or not, you should stop to consider what it means to enter the zone of pain. The term amuses, but it also informs. Yes, these assemblies will tend to annoy developers. But they also create expensive, risky churn inside of your applications and raise the cost of ownership of the codebase.
Because this presents a real problem, let’s take a look at what, exactly, lands you in the zone of pain and how to recover.
Continue reading Concreteness: Entering the Zone of Pain
People frequently ask me questions about code quality. People also frequently ask me questions about efficiency and productivity. But it seems we rarely wind up talking about the two together. How can you most efficiently improve quality via the fixing of bugs? Or, more specifically, how should you prioritize bugs?
Let me be clear about something up front. I’m not going to offer you some kind of grand unified scheme of bug prioritization. If I tried, the attempt would come off as utterly quixotic. Because software shops, roles, and offerings vary so widely, I cannot address every possible situation.
Instead, I will offer a few different philosophies of prioritization, leaving the execution mechanics up to you. These should cover most common scenarios that software developers and project managers will encounter.
Continue reading How to Prioritize Bugs on Your To-Do List
I remember my earliest experiences with static analysis. Probably a decade ago, I started to read about it during grad school and poke around with it at work. Immediately, I knew I had discovered a powerful advantage for programmers. These tools automated knowledge.
While I felt happy to share the knowledge with coworkers, their lack of interest didn’t disappoint me. After all, it felt as though I had some sort of trade secret. If those around me chose not to take advantage, I would shine by comparison. (I have since, I’d like to think, matured a bit.) Static analysis became my private competitive advantage — Sabermetrics for programmers.
So as you can imagine, running it on the build machine would not have occurred to me. And that assumes a sophisticated enough setup that doing so made sense (not really the case back then). Static analysis was my ace in the hole for writing good code — a personal choice and technique.
Fast forward a decade. I have now grown up, worked with many more teams, and played many more roles. And, of course, the technological landscape has changed. All of that combined to cause a complete reversal of my opinion. Static analysis and its advantages matter far too much not to use it on the build machine. Today, I’d like to expand on that a bit.
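To make the idea concrete, a build-machine integration can be as simple as one extra pipeline step. The sketch below is illustrative only; the solution name, project file, and runner invocation are assumptions, so consult your analyzer’s documentation for the real command line:

```yaml
# Hypothetical CI pipeline fragment (names and paths are illustrative).
steps:
  - script: dotnet build MySolution.sln -c Release
    displayName: "Compile"
  - script: NDepend.Console.exe MySolution.ndproj
    displayName: "Static analysis quality gate"
    # A non-zero exit code fails the build, so rule violations
    # surface on every check-in instead of during occasional cleanups.
```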
Continue reading Static Analysis for the Build Machine?
As an adult, I have learned that I have an introverted personality. I do alright socially, don’t mind public speaking, and do not (I don’t think) present as an awkward person. So learning about this characterization surprised me somewhat, but only until I fully understood it.
I won’t delve into the finer points of human psychology here, but suffice it to say that introverts prefer to process and grok questions before responding. This describes me to a tee. However, working as a consultant and giving frequent advice clashes with this and has forced me to develop somewhat of a knack for answering extemporaneously. Still, you might ask me just the right question to cause me to cock my head, blink at you, and frown.
I received just such a question the other day. The question, more or less, was, “if we have continuous testing, do we really need static analysis?” And, just like that, I was stumped. This didn’t square, and I wanted time to think on it. Luckily, I’ve had a bit of time. (This is why I love blogging.)
Continue reading The Relationship between Static Analysis and Continuous Testing
I’ve heard people say (paraphrased) that teams succeed uniformly, but fail each in its own unique way. While I might dispute the veracity of this statement, it evokes an interesting image. Many roads, lined with many decisions, lead to many different sorts of failures. Team code reviews are no exception. Teams can fail at code review in myriad, unique ways. And, on top of that, many paths to broader failure can involve poor code reviews (doubtless among other things).
How can I assign such importance to the code review? After all, many would consider this an ancillary team activity and one with only upside. Done poorly, code review catches no defects. Done well, it catches some defects. Right?
Continue reading How to Perform Effective Team Code Reviews
I’ve trod this path before in various incarnations and I’ll do it again today. After all, I can think of few topics in software development that draw as much debate as this one. “We’ve got this app, and we want to know if we should refactor it or rewrite it.”
For what it’s worth, I answer this question for a living. And I don’t mean that in the general sense that anyone in software must ponder the question. I mean that CIOs, dev managers, and boards of directors literally pay me to help them figure out whether to rewrite, retire, refactor, or rework an application. I go in, gather evidence, mine the data, and state my case about the recommended fate for the app.
Because of this vocation and because of my writing, people often ask my opinion on this topic. Today, I yet again answer such a question. “How do I know when to rewrite an app instead of just refactoring it?” I’ll answer. Sort of. But, before I do, let’s briefly revisit some of my past opinions.
Continue reading Rewrite or Refactor?