As I work with more and more organizations, my compiled list of interesting questions grows. Seriously – I have quite the backlog. And I don’t mean interesting in the pejorative sense. You know – the way you say, “oh, that’s… interesting” after some drunken family member rants about their political views.
Rather, these questions interest me at a philosophical level. They make me wonder about things I might never have pondered otherwise. Today, I’ll pull one out and dust it off. A client asked me this once, a while back. They were wondering, “how much code should my developers be responsible for?”
Why ask about this? Well, they had a laudable enough goal. They had a fairly hefty legacy codebase and didn’t want to overtax the folks working on it. “We know our codebase has X lines of code, so how many developers comprise an ideally staffed team?”
In a data-driven way, they asked a great question. And yet, the reasoning falls apart on closer inspection. I’ll speak today about why that happens. Here are some problems with this thinking.
Collective Code Ownership
First, I balked at the question in deference to the idea of collective code ownership. Historically, managers of application development tended to map software work to physical labor for some purposes.
Got a bunch of holes that need digging? Task each team member with digging his holes. Got a bunch of code that needs writing? Task each team member with writing her piece.
The trouble comes when you realize that, unlike holes, chunks of code are not commodities. That is, you cannot interchange them. So when you divide code up this way and the person who wrote module X is on a two-week vacation, you just kind of put things on hold until that person comes back.
Individual code ownership carries other problems as well, but that one tends to be the most glaring to the business. Some call it “bus factor.” But whatever you call it, when you start talking about team members having “responsibility for” pockets of code, you encourage it. The team should have responsibility for the codebase. That’s it.
What Does Code Volume Mean, Anyway?
For you grizzled .NET developers out there, you can probably divide your history into “BL” and “AL.” By this I mean, “before Linq” and “after Linq.”
Before Linq, you wrote reams of imperative code, walking through nested loops until you found exactly the object you wanted. Then, along came Linq to turn all of that into declarative code. But it didn’t do this in a one-to-one volume-wise mapping. No, it took your verbose imperative code and, like a collapsing neutron star, it smashed it down to a fraction of its former size.
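Since Linq belongs to C#, here’s an analogous (and purely illustrative) sketch in Python showing the same collapse: the imperative version spells out the loop and the accumulation, while the declarative version says what you want in one expression. The data and field names are made up for the example.

```python
# Hypothetical data: find the names of active users over 30.
users = [
    {"name": "Ada", "age": 36, "active": True},
    {"name": "Grace", "age": 28, "active": True},
    {"name": "Alan", "age": 41, "active": False},
]

# "Before": imperative style -- explicit loop, manual filtering and accumulation.
result_imperative = []
for user in users:
    if user["active"] and user["age"] > 30:
        result_imperative.append(user["name"])

# "After": declarative style -- one expression describing what, not how.
result_declarative = [u["name"] for u in users if u["active"] and u["age"] > 30]

assert result_imperative == result_declarative == ["Ada"]
```

Both versions produce identical results; only the volume and shape of the code differ, which is exactly why counting lines tells you so little.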
From this, we can learn that code volume can vary not just from language to language, but from developer to developer and language version to language version. Thus, “how much” as a numerical construct becomes so fluid as to have little measurement value.
From a business perspective, code acts like inventory. It provides business enablement, but, sitting there in your organization, it creates a liability for you. You want developers writing as little of it as possible to get the job done. Thus, ironically, the developers that tend to write the most code per feature should arguably have responsibility for the least code overall. More is not better.
Code Volatility
The third and final objection point that I presented had to do with the idea of code volatility. By this, I meant the change frequency of a given file or bit of code.
To understand where this fits into the puzzle, consider two extremes. First, consider a stable, well-factored module consisting of a million lines of code. Every 3-4 years, some new government regulation comes out, requiring a slight tweak here and there, but apart from that, it hums along like a dream in production.
On the other hand, consider a mere 10,000 line application. But this application has all sorts of runtime problems. And, to make matters worse, stakeholders keep changing their minds about how the software should behave, resulting in a great deal of churn.
It is entirely possible, given these two codebases, that you would staff the million-line codebase with one developer. And it’s also entirely possible that you’d staff the second codebase with an entire team. This makes sense even though the latter codebase is 1% the size of the former.
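You can actually measure volatility directly from version control. As a minimal sketch (with a made-up commit history standing in for real `git log --name-only` output), counting how many commits touched each file reveals the hot spots where maintenance effort concentrates, independent of file size:

```python
from collections import Counter

# Hypothetical commit history: each entry lists the files one commit touched.
commits = [
    ["billing/invoice.py"],
    ["billing/invoice.py", "billing/tax.py"],
    ["ui/dashboard.py"],
    ["billing/invoice.py"],
    ["billing/invoice.py", "ui/dashboard.py"],
]

# Volatility = number of commits that touched each file.
volatility = Counter(f for commit in commits for f in commit)

# The most-changed files are where the maintenance labor actually lives,
# regardless of how many lines of code they contain.
for path, changes in volatility.most_common():
    print(path, changes)
```

In a real codebase you’d feed this from your version control history; the point is that change frequency, not line count, flags where people will spend their time.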
I offer this example to demonstrate an important point. Lines of code make a flawed proxy for maintenance labor, so using them alone to project your maintenance footprint will fall woefully short.
Sizing a Team Against a Codebase
So how then do you size a team against a codebase? If not, “how much code should a developer have responsibility for,” then “how much what should a developer have responsibility for?”
I’ll offer an answer both simple and hard to measure. “How much change should a developer have responsibility for?” This line of thinking normalizes code verbosity and code volatility.
In the agile world, this line of thinking gives rise to story mapping and planning activities. Slice the goals for the software into features in a backlog and then have the team work on those features. If the team goes too slowly for the business, you need more team. If people sit around idle, you have too much team.
In the up-front planning/waterfall world, a similar dynamic emerges, but you only figure it out when you finish ahead of schedule (yeah, right) or you start getting behind schedule. Ahead of schedule, you have too much team. Behind schedule, not enough.
Truly, you measure the scope of developer responsibility by whether or not software capabilities are realized at the pace the business needs.
Good piece. Asking “How much code do I need to solve this problem?” is like asking “How many words do I need to write my novel?” We should measure progress in terms of the thing we are trying to produce.
Thanks — glad you liked the post! And “how many words do I need to write my novel” is a great analogy.