Perhaps more than any other holiday I can think of, New Year’s Day has a specific tradition. Other holidays’ traditions range all over the map: Christmas has trees, presents, rotund old men, and songs. New Year’s, by contrast, concerns itself primarily with fresh starts.
If you doubt this, look around during the first week of the year. Armed with fresh resolutions, people swear off cigarettes and booze, flock to gyms, and find ways to spend less. Since you don’t come to the NDepend blog for self-help, I’ll forgo talking about that. Instead, I’ll speak to some resolutions you should consider when it comes to code quality. As you come to the office next week, fresh off of singing “Auld Lang Syne” and having champagne at midnight, think of changing your ways with regard to your code base.
Before we get into specifics, though, let’s consider the context in which I talk about code quality. Because I don’t drink from mason jars or have a two-foot beard, I won’t counsel you to chase quality purely for the love of the craft. That can easily result in diminishing returns on effort. Instead, I refer to code quality in the business sense: high-quality code incurs a relatively low cost of change and generates few or no unexpected runtime behaviors.
So the question becomes, “What should I do in the new year to efficiently write predictable, maintainable code?” Let’s take a look.
Continue reading New Year’s Resolutions for Code Quality
In the past, I’ve talked about the nature of static code analysis. Specifically, static analysis involves analyzing programs’ source code without actually executing them. Contrast this with runtime analysis, which offers observations of runtime behavior, via introspection or other means. This creates an interesting dynamic regarding the idea of detecting performance bottlenecks with static analysis. This is because performance is inherently a runtime concern. Static analysis tends to do its best, most direct work with source code considerations. It requires a more indirect route to predict runtime issues.
For example, consider something simple.
public void DoSomething(SomeService theService)
{
    // Body reconstructed for illustration (the method name below is
    // hypothetical); the point is the unguarded dereference of "theService".
    theService.PerformOperation();
}
With a static analyzer, we can easily look at this method and say, “you’re dereferencing ‘theService’ without a null check.” However, it gets a lot harder to talk definitively about runtime behavior. Will this method ever generate an exception? We can’t know that with only the information present. Maybe the only call to this in the entire codebase happens right after instantiating a service. Maybe no one ever calls it.
Today, I’d like to talk about using NDepend to sniff out possible performance issues. But my use of “possible” carries significant weight here, because “definitive” gets difficult. You can use NDepend to inform reasoning about your code’s performance, but you should do so with an eye to probabilities.
That said, how can you use NDepend to identify possible performance woes in your code? Let’s take a look at some ideas.
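To make the probabilistic framing concrete, here is a sketch of the kind of CQLinq query you might write in NDepend to surface candidates for closer inspection. The thresholds are arbitrary assumptions chosen for illustration, not recommendations; a match suggests a method worth profiling, never a proven bottleneck.

```csharp
// CQLinq sketch: flag big, branch-heavy methods as *candidates*
// for performance investigation. Static analysis can only suggest
// where to point a profiler, not prove a runtime problem.
warnif count > 0
from m in JustMyCode.Methods
where m.CyclomaticComplexity > 15   // illustrative threshold: lots of branching
   && m.NbLinesOfCode > 50          // illustrative threshold: lots of code
orderby m.CyclomaticComplexity descending
select new { m, m.CyclomaticComplexity, m.NbLinesOfCode }
```

Any method this query returns still needs runtime measurement before you conclude anything; the query merely narrows where you look first.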
Continue reading Detecting Performance Bottlenecks with NDepend
As I work with more and more organizations, my compiled list of interesting questions grows. Seriously, I have quite the backlog. And I don’t mean interesting in the pejorative sense: the way you say, “oh, that’s… interesting” after some drunken family member rants about their political views.
Rather, these questions interest me at a philosophical level. They make me wonder about things I might never have pondered otherwise. Today, I’ll pull one out and dust it off. A client asked me this once, a while back. They were wondering, “How much code should my developers be responsible for?”
Why ask about this? Well, they had a laudable enough goal. They had a fairly hefty legacy codebase and didn’t want to overtax the folks working on it. “We know our codebase has X lines of code, so how many developers should an ideally staffed team have?”
In a data-driven way, they asked a great question. And yet, the reasoning falls apart on closer inspection. I’ll speak today about why that happens. Here are some problems with this thinking.
Continue reading How Much Code Should My Developers Be Responsible For?
If you wander the halls of a large company with a large software development organization, you will find plenty of examples of practice and process at scale. When you see this sort of thing, it has generally come about in one of two ways. First, the company piloted a new practice with a team or two and then scaled it from there. Or, second, the development organization started the practice when it was small and grew it as the department grew.
But what about “rolled it out all at once”? Nah, (mercifully) not so much. “Let’s take this thing we’ve never tried before, deploy it in an expensive rollout, and assume all will go well.” Does that sound like the kind of plan executives with career concerns sign off on? Would you sign off on it? Even the pointiest-haired of managers would feel gun-shy.
When it comes to scaling a static analysis practice, you will find no exception. Invariably, organizations grow the practice as they grow, or they pilot it and then scale it up. And that raises the question of “how?” when it comes to scaling static analysis.
Two main areas of concern come to mind: technical and human. You probably think I’ll spend most of the post talking technical, don’t you? Nope. First of all, too many tools, setups, and variations exist for me even to scratch the surface. But secondly, and more importantly, a key person I’ll mention below will take the lead for you on this.
Instead, I’ll focus on the human element. Or, more specifically, I will focus on the process for scaling your static analysis — a process involving humans.
Continue reading How to Scale Your Static Analysis Tooling