If you want to stir up a pretty serious amount of discussion-churn, wander over to where the software developers sit and ask for a consensus definition of “clean code.” This probably won’t start a religious war — it’s not like asking for consensus on the best programming language or development tool. You’ll probably find a lot of enthusiastic agreement with different flavors of the same basic concept. This is true among luminaries of the field, as quoted here on DZone, and it’s most likely to be true in any given shop.
There will be agreement on the broad concepts and inevitable debate as the points grow finer-grained. Developers can all agree that code should be “maintainable” and “easy to read,” but you might get a bit of fragmentation around subjects like variable naming or the relative compactness and “density” of code. Have the developers look at a bit of code, ask them if it could be “cleaner,” and you’ll probably get an array of responses, including potential disagreement and thrash. This becomes especially true if they get hung up on cosmetic particulars like indentation, bracket placement, and casing.
So where does that leave us, exactly, when asked the deceptively simple question, “is this clean code?” Programmers can arrive at a broad consensus on how to answer that question, but not necessarily on the answer itself. They’ll all say, “well, it’s clean if it’s readable,” but some might give a particular bit of code a thumbs up while others give it a thumbs down. If you’re a developer, this can be fun or it can be frustrating. If you’re a non-technical stakeholder, such as a director, project manager, tester, or business analyst, it can be confusing and maddening. “So is this code good or not!?”
Enter the Coding Standards Document
When enough people with “manager” in their title are maddened, things get done. Most commonly, this takes the form of the iconic “coding standard.” The coding standard makes things blissfully cut and dried.
- Thou shalt use camelCasing.
- Thou shalt use tabs instead of spaces.
- Thou shalt use verbs when naming methods and nouns when naming types.
- Thou shalt not omit curly brackets for any control flow statements.
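Commandments like these lend themselves to mechanical enforcement, which is part of their appeal. As a hedged sketch of what that looks like (the function names and rule set here are my own invention, not any particular linter’s API), a couple of the rules above reduce to simple pattern tests:

```python
import re

# Illustrative checks for two of the commandments above; in practice
# a real linter or formatter would enforce these, not hand-rolled code.

CAMEL_CASE = re.compile(r"^[a-z][a-zA-Z0-9]*$")

def is_camel_case(identifier: str) -> bool:
    """True if the identifier starts lowercase and has no underscores."""
    return bool(CAMEL_CASE.match(identifier))

def find_indentation_violations(source_lines):
    """Return (line_number, message) pairs for lines indented with spaces,
    per the 'tabs instead of spaces' commandment."""
    violations = []
    for n, line in enumerate(source_lines, start=1):
        indent = line[: len(line) - len(line.lstrip())]
        if line.strip() and " " in indent:
            violations.append((n, "indent with tabs, not spaces"))
    return violations
```

The point is not that this code is useful as-is, but that rules of this kind are trivially checkable, which is exactly why standards documents gravitate toward them.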
It’ll probably go on like this, perhaps without the Biblical language, in a lot more detail over the course of some number of pages — probably anywhere from 2 to 40 in Word-document form when all is said and done. Done well, it will make all developers equally unhappy, but at least there will be forced consensus, and the code across the codebase will be relatively uniform-looking, without weird, identifying dialects that make you say things like, “oh, that’s totally Steve’s code — I can tell.”
This also makes the non-technical stakeholders, and especially bosses, quite happy. After all, the answer to “is this code clean?” is no longer mushy. Anyone can put on a pair of reading glasses, hold the spec up next to the code, and say, “yep, this meets our standard and is therefore clean.” In fact, you can even automate this with a whole host of tools, potentially even failing the team’s build if the code doesn’t conform.
So great, right? Problem solved, clean code defined and automated? Time to call it a day and move on to the next challenge in the field of software?
Cosmetic Consistency != Clean Code
Well, maybe not. This is a slippery thing, a sort of Heisenberg Uncertainty Principle for code: the more concrete and automated you make the definition of clean code, the more reductionist and cosmetic you make it. One might argue, then, that specificity and effectiveness are inversely proportional in a standard. The 25-page document detailing where each and every comma, semicolon, and ampersand should go may not actually guarantee clean code, but a general edict to “make your code clean” is entirely ineffectual.
If you want to have objective(-ish) measures of the cleanliness of a codebase without making them trivial and cosmetic, there’s up-front work required and the answers aren’t cut and dried. In my travels, here’s how I’ve approached this in a way that allows developers to get somewhat specific and non-technical stakeholders to have objective things to look at.
- Have a standard for the cosmetic stuff, but automate it completely.
- Axiomatically define thresholds beyond which code probably isn’t clean (e.g., if your class is 300+ lines of code, things are starting to smell).
- Understand that ‘violations’ of the thresholds are discussion-starters and not sins against the code.
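To make the second of those bullets concrete, here is a minimal sketch, assuming a metrics feed that maps type names to their line counts. The input shape, function name, and the 300-line cutoff (taken from the example above) are all illustrative; a real static analysis tool computes these metrics for you:

```python
# Threshold beyond which a class "starts to smell" -- the 300-line
# figure is the illustrative number from the bullet above, not a law.
CLASS_LOC_THRESHOLD = 300

def discussion_starters(class_metrics):
    """Given {class_name: lines_of_code}, return the classes whose size
    crosses the threshold -- flagged for conversation, not punishment."""
    return sorted(
        name
        for name, loc in class_metrics.items()
        if loc >= CLASS_LOC_THRESHOLD
    )
```

Note the function name: the output is a list of things to talk about, which matters for the third bullet.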
In other words, automating or semi-automating a meaningful definition of “clean code” means that you need to stay away from hard absolutes and be ready for discussions. For example, I could implement a cut and dried rule saying that having more than 3 parameters for a method means that your code isn’t clean, but I can’t predict every conceivable scenario or weird bit of legacy code you might need to wrap. So what I’d be inclined to do instead is to measure the percentage of methods in a codebase with more than 3 parameters and declare that if it’s more than, oh, say, 1%, then a conversation about approach is probably in order. This removes the wrist-slapping paradigm of your average approach to coding standards and replaces it with architectural discussions.
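That percentage-based rule takes only a few lines to express. The 3-parameter limit and the 1% threshold come straight from the example above; the input shape, a flat list of per-method parameter counts, is an assumption for illustration:

```python
MAX_PARAMS = 3        # the cut-and-dried rule from the example
ALLOWED_FRACTION = 0.01  # more than 1% of offenders triggers a conversation

def needs_conversation(param_counts):
    """Given the parameter count of every method in a codebase, decide
    whether the share of methods with too many parameters is high enough
    to warrant an architectural discussion (rather than a wrist-slap)."""
    if not param_counts:
        return False
    offenders = sum(1 for count in param_counts if count > MAX_PARAMS)
    return offenders / len(param_counts) > ALLOWED_FRACTION
```

A single 5-parameter method wrapping some weird legacy API doesn’t trip the rule; a codebase where such methods are routine does, which is exactly the distinction the percentage approach is after.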
I love writing posts for the NDepend blog because I love static analysis and I love the tool. It is only using NDepend (and JArchitect for Java codebases) that I’ve been able, in my career, to set these sorts of non-trivial, non-cosmetic thresholds for architectural discussions. This goes back to the “code as data” concept. And, even better, these tools ship with some opinions baked in for thresholds that are based on industry standards/conventions.
If you’re equipped with these tools, their thresholds and their standards, then you’re way ahead of the curve. Every software development group in the world has abstract opinions on what clean code is, and nearly every development group has some kind of cosmetic-y coding standard. Your developers know, intuitively, how to merge clean code with the cosmetic standards.
But NDepend has a little more experience than they do, and it knows just a little bit better.