One of the more puzzling misconceptions that I hear pertains to the topic of refactoring. I consult on a lot of legacy rescue and refactoring efforts, and people in and around those efforts tend to think of “refactor” as “massive cleanup effort.” I suspect this is one of those conflations that happens subconsciously. If you actually asked some of these folks whether “refactor” and “massive cleanup effort” were synonyms, they would say no, but they never conceive of the terms in any other way during their day-to-day activities.
Code refactoring is the process of restructuring existing computer code – changing the factoring – without changing its external behavior.
Significantly, this definition mentions nothing about the scope of the effort. Refactoring is changing the code without changing the application’s behavior. This means the following would be examples of refactoring, provided they changed nothing about the way the system interacted with external forces.
Renaming variables in a single method.
Adding whitespace to a class for readability.
Eliminating dead code.
Deleting code that has been commented out.
Breaking a large method apart into a few smaller ones.
I deliberately picked the examples above because they should be semi-understandable, even by non-technical folks, and because they’re all scalable down to the tiny. Some of these activities could be done by a developer in under a minute. These are simple, low-effort refactorings.
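To make the smallest of these concrete, here is a hypothetical sketch (in Python for brevity, though the same idea applies in any language) of two tiny, behavior-preserving refactorings: renaming variables and extracting a method. All names are illustrative.

```python
def total_price_before(qty, price):
    # Original version: cryptic names, inline discount logic.
    t = qty * price
    if t > 100:
        t = t * 0.9
    return t

def _apply_bulk_discount(total):
    # Extracted method with an intention-revealing name.
    return total * 0.9 if total > 100 else total

def total_price_after(quantity, unit_price):
    # Refactored version: renamed variables, extracted method.
    total = quantity * unit_price
    return _apply_bulk_discount(total)
```

Both versions return the same result for every input; only the internal factoring differs, which is exactly what qualifies the change as a refactoring.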
“Refactoring is a controlled technique for improving the design of an existing code base. Its essence is applying a series of small behavior-preserving transformations, each of which is ‘too small to be worth doing.’ However, the cumulative effect of each of these transformations is quite significant.”
I took the Wikipedia definition and used it to suggest that refactorings could be small and low-effort. Fowler takes it a step further and suggests that they should be small and low-effort. In fact, he suggests that they should be “too small to be worth doing.” That’s fascinating.
Developing applications for use in the health care industry is stressful because the margin of error is almost non-existent. Whether your tool is for treatment, research, or analysis, it needs to be dependable and accurate. The more complex the application is, the higher the chance for errors and delays in development. Dependable companies abide by rigorous methodologies to develop their code before deploying it to clients. In this NDepend case study, we learn why a company in this sector chose NDepend, and why it became an integral part of their development process.
Stago works in the medical lab industry, producing lab analysis tools that focus on haemostasis and coagulation. Working hard for over 60 years and valuing long term investments, they have created a name for themselves in the industry. A few years ago, they wanted to make their software development process more efficient. In addition, they wanted to easily enforce their own best practices and code quality standards across their teams. The goal was to be able to catch issues earlier in the development cycle to cut costs and time spent on quality assurance post-development.
“We selected NDepend after reviewing all the other options on the market and it quickly became the backbone of our development effort.” – Fabien Prestavoine, Software Architect at Stago
We are very grateful to Stago for sharing their success story with us. Stories such as these are among the main driving forces behind creating one of the most comprehensive and powerful .NET analysis tools on the market. Since implementing NDepend, Stago has:
Easily met all delivery deadlines
Cut both cost and time spent on quality assurance
Delivered a consistently dependable product
Improved communication between their developers and architects
Over the last two installments of this Let’s Build a Metric series, I’ve talked about different ways to count lines of code and about ways to count different paths through your code. So far, I’ve offered up the hypotheses that more statements/lines in a method mean more time to comprehend, and that more paths through the code mean more time to comprehend. I’ll further offer the hypothesis that comprehension time varies more strongly with complexity than it does with lines of code.
I do have results in for the first hypothesis, but will hold off for one more installment before posting those. Everyone on the mailing list will soon receive the second experiment, around complexity, so I’ll post the results there in an installment or two, when I circle back to modifying the composite metric. If you haven’t yet signed up for the experiment, please do so here.
More Parameters Means Harder to Read?
In this post, I’d like to address another consideration that I hypothesize will directly correlate with time to comprehend a method: parameter count. Now, unlike these last two posts, parameter count offers no hidden surprises. Unlike lines of code, I don’t know of several different ways that one might approach counting method parameters, and unlike cyclomatic complexity, there’s no slick term for this that involves exponential growth vectors. This is just a matter of tabulating the number of arguments to your methods.
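Tabulating really is that simple; here is a minimal sketch in Python using the standard library’s `inspect` module (the function being counted is made up for illustration):

```python
import inspect

def parameter_count(func):
    """Return the number of declared parameters on a function."""
    return len(inspect.signature(func).parameters)

# A made-up function to count against.
def apply_discount(price, rate, minimum=0):
    return max(price * (1 - rate), minimum)

# parameter_count(apply_discount) == 3
```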
Instead of offering some cool new fact for geek water-cooler trivia, I’ll offer a relatively strong opinion about method parameters. Don’t have a lot of them. In fact, don’t have more than 3, and even 3 is pushing it. Do I have your attention? Good.
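When a signature does creep past three parameters, one common remedy is to introduce a parameter object that groups related arguments. A hedged sketch, with invented names:

```python
from dataclasses import dataclass

def ship_order(customer, street, city, postal_code, express):
    # Five parameters: easy to call with arguments in the wrong order.
    mode = 'express' if express else 'standard'
    return f"{customer}: {street}, {city} {postal_code} ({mode})"

@dataclass
class Address:
    street: str
    city: str
    postal_code: str

def ship_order_v2(customer, address, express):
    # Three parameters: the address travels as one cohesive concept.
    mode = 'express' if express else 'standard'
    return f"{customer}: {address.street}, {address.city} {address.postal_code} ({mode})"
```

The second signature is easier to read at the call site, and the `Address` type gives the compiler (or, here, the reader) a fighting chance of catching a transposed argument.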
Last week, a post I wrote, “The Myth of the Software Rewrite,” became pretty popular. This generated a lot of comments and discussion, so I decided to write a follow-up post to address the discussion, rather than typing a blog post’s worth of thoughts distributed over 20 or 30 comments. This is that post.
First of all, I want to be clear about what I’m talking about. I’m specifically talking about a situation where the prime, determining factor in whether or not to rewrite the software is that the development group has made a mess and is clamoring to rewrite it. In essence, they’re declaring bankruptcy — “we’re in over our heads and need outside assistance to wipe the slate clean so we can have a fresh start.” They’re telling the business and their stakeholders that the only path to joy is letting them start over.
Here are some situations that the article was not meant to address:
The business decides it wants a rewrite (which makes me skeptical, but I’m not addressing business decisions).
Piecemeal rewrite, a chunk at a time (because this is, in fact, what I would advocate).
A rewrite because the original made design assumptions that have become completely obsolete (e.g. designed around disk space being extremely expensive).
Rewriting the software to significantly expand or alter the offering (e.g. “we need to move from web to mobile devices and offer some new features, so let’s start fresh.”)
A Lesson From Joseph Heller
Joseph Heller is the author of one of my all-time favorite works of fiction, Catch-22. Even if you’ve never read the book, you’re probably familiar with the term from conversational reference. A catch-22 is a paradoxical, no-win situation. Consider an example from the book.
John Yossarian, the ‘protagonist,’ is an anti-heroic bombardier in World War II. Among other character foibles, one is an intense desire not to participate in the war by flying missions. He’d prefer to stay on the ground, where it’s safe. To advance this interest, he attempts to convince an army doctor that he’s insane and thus not able to fly missions. The army doctor responds with the eponymous catch-22: “anyone who wants to get out of combat duty isn’t really crazy.”
If you take this to its logical conclusion, the only way that Yossarian could be too crazy to fly missions is if he actually wanted to fly missions. And if he wanted to fly them, he wouldn’t be noteworthy and he wouldn’t be trying to get out of flying them in the first place.
I mention this vis-à-vis software rewrites for a simple reason. The only team I would trust with a rewrite is a team that didn’t view rewriting the software as necessary or even preferable.
It’s the people who know how to manufacture small wins and who can inch back incrementally from the brink that I trust to start a codebase clean and keep it clean. People who view a periodic bankruptcy as “just the way it goes” are the people who are going to lead you to bankruptcy.
I earn my living, or part of it, anyway, doing something very awkward. I get called in to assess and analyze codebases for health and maintainability. As you can no doubt imagine, this tends not to make me particularly popular with the folks who have constructed and who maintain this code. “Who is this guy, and what does he know, anyway?” is a question that they ask, particularly when confronted with the parts of the assessment that paint the code in a less than flattering light. And, frankly, they’re right to ask it.
But in reality, it’s not so much about who I am and what I know as it is about properties of code bases. Are code elements, like types and methods, larger and more complex than those of the average code base? Is there a high degree of coupling and a low degree of cohesion? Are the portions of the code with the highest fan-in exercised by automated tests or are they extremely risky to change? Are there volatile parts of the code base that are touched with every commit? And, for all of these considerations and more, are they trending better or worse?
It’s this last question that is, perhaps, most important. And it helps me answer the question, “who are you and what do you know?” I’m a guy who has run these types of analyses on a lot of codebases and I can see how yours stacks up and where it’s going. And where it’s going isn’t awesome — it’s rotting.
In the last installment of this series, I talked a good bit about lines of code. As it turns out, the question, “what is a line of code?” is actually more complex than it first appears. Of the three different ways I mentioned to regard a line of code, I settled on “logical lines of code” as the one to use as part of assessing time to comprehend.
As promised, I sent code to use as part of the experiment, and got some responses. So, thanks to everyone who participated. If you’d like to sign up for the experiment, but have yet to do so, please feel free to click below.
Here is the code that went out for consideration. I’m not posting the results yet so that people can still submit without any anchoring effect and also because I’m not, in this installment, going to be updating the composite metric just yet.
The reason that I’m discussing this code is to show how simple it was. I mean, really, look at this code and think of all that’s missing.
There are no control flow statements.
There are no field accesses.
There is no interaction with collaborators.
There is no interaction with global state.
There is no internal scoping of any kind.
These are purely functional methods that take an integer as input, do things to it using local declarations, and then return it as output. And via this approach, we’ve fired the first tracer bullet at isolating logical lines of code in a method. So let’s set that aside for now and fire another one at an orthogonal concern.
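The original experiment code was C#, but a Python sketch of the shape is easy to imagine: a purely functional method with nothing but local declarations between the integer in and the integer out. The specific arithmetic is invented for illustration.

```python
def transform(x):
    # No branching, no fields, no collaborators, no global state --
    # just a straight line of local declarations from input to output.
    doubled = x * 2
    shifted = doubled + 3
    scaled = shifted * 4
    return scaled
```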
Before, I talked about the meaning of a line of code. Now I’d like to talk about the meaning of complexity in your methods. Specifically here, I’m referring to what’s called “cyclomatic complexity.” Cyclomatic complexity is a measure of the number of paths through a piece of source code. Let’s see a few examples to get the hang of it.
Consider a method whose body is nothing but a guard clause. Its cyclomatic complexity is 2 because there are two paths through it.
The if condition evaluates to true and the method throws an exception.
The if condition evaluates to false and the method finishes executing.
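The series works in C#, but a Python sketch of such a guard-clause method shows exactly those two paths:

```python
def set_age(age):
    # Path 1: the condition is true and the method throws.
    if age < 0:
        raise ValueError("age must be non-negative")
    # Path 2: the condition is false and the method finishes.
    return age
```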
Be mindful that “if” isn’t the only way to create multiple paths through the code. For instance, a method whose body is a single ternary expression also has a cyclomatic complexity of 2, because the ternary operator creates two different execution paths.
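In Python, the ternary’s equivalent is the conditional expression; a one-line sketch that still counts two paths:

```python
def absolute_value(x):
    # One conditional expression, two execution paths:
    # cyclomatic complexity 2, despite the single statement.
    return x if x >= 0 else -x
```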
Cyclomatic complexity can increase quite rapidly, particularly when nested conditionals enter the equation. A method with just a few nested conditionals reaches a cyclomatic complexity of 4, and it already starts to get hard to figure out exactly why.
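A hypothetical Python method in that spirit: the method entry counts 1, and each of the three branch points adds 1, for a cyclomatic complexity of 4.

```python
def categorize(value, strict):
    if value > 0:          # branch point 1
        if strict:         # branch point 2 (nested)
            return "strictly positive"
        return "positive"
    elif value == 0:       # branch point 3
        return "zero"
    return "negative"
```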
Imagine what it starts to look like as methods have things like large degrees of nesting, switch statements, and conditional after conditional. The cyclomatic complexity can soar to the point where it’s unlikely that every path through the code has even ever been executed, let alone tested.
So it stands to reason that something pretty simple to articulate, like complexity, can have a nuanced effect on the time to comprehend a code base. In the upcoming installment of our experiments, I’d like to focus on cyclomatic complexity and its effect on method time to comprehend.
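To see why the measure is simple to articulate, here is a deliberately crude keyword-counting approximation in Python. Real analyzers (NDepend included) work from parsed code rather than regexes; this sketch only illustrates the counting idea: start at 1 and add 1 per branching construct.

```python
import re

# Branching constructs that each add one path (crude approximation).
BRANCH_KEYWORDS = r"\b(if|elif|for|while|and|or|except|case)\b"

def rough_complexity(source: str) -> int:
    """Approximate cyclomatic complexity: 1 plus one per branch keyword."""
    return 1 + len(re.findall(BRANCH_KEYWORDS, source))

guard_clause = '''
def set_age(age):
    if age < 0:
        raise ValueError("negative")
    return age
'''
# rough_complexity(guard_clause) == 2
```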
But I’ll close out this post by offering up a video showing you one of the ways that NDepend allows you to browse around your code by cyclomatic complexity.
One of the things I remember most vividly from my CIO days was the RFP process for handling spikes in demands on my group’s time. In case you haven’t participated in this on either side, the dance involves writing up a brief project description, sending it over to a handful of software consulting firms, and getting back tentative project proposals. I’d pick one, and work would begin.
There were a lot more documents and legalese involved, but the gist, writ small, might be something like, “we need an application that will run in our data center, take information about customers out of our proprietary system, and put it back into our CRM as notes, depending on a series of business rules.” The response, in proposal form, would essentially be, “we’ll do that, and we think it’ll cost you about $30,000.”
This is what most people think of as the cost of a software project. Perhaps it’s not a little, 5-figure line of business utility. Perhaps it’s a $300,000 web or mobile application. Perhaps it’s even a $30,000,000 enterprise workflow overhaul. Whatever the case may be, there’s a tendency to think of software in terms of the cost of the labor necessary to write it. The consulting firms would always base their proposals and estimates on a “blended rate” of the hourly cost of the labor. Firms with in-house development staffs tend to reason about the cost of software projects as a function of the salaries of the folks tasked with the work.
Of course, if you’re a veteran at managing software projects and teams, you realize that there’s more to the total spend than just the cost of the labor. No doubt you factor in the up-front and licensing cost of the tools you use to develop the software as well as the cost of the hardware on which it will eventually run. You probably even factor in the cost of training users and operations folks, and paying maintenance programmers to keep the lights on.
But there are other, more subtle, hidden costs that I’d like to discuss — costs related to your approach to software development. These are variable costs that depend on the nature of the code that your team is writing.
The last episode of this series was heavy on theory, so let’s balance it out a bit with some applied science. The lofty goal of this series of posts is to construct a way to predict the time to comprehend a method. But, regardless of how that turns out and how close we get, we’re going to take a detailed look at NDepend, static analysis, and code metrics along the way.
One of the simplest hypotheses from the last post was, “the more statements there are in a method, the longer it will take to comprehend.” Intuitively, this makes sense. The more stuff there is in a method, the longer it will take to grok. But you’ll notice that I said “statement” rather than “line of code.” What did I mean by that? Are these things interchangeable?
If you want to stir up a pretty serious amount of discussion-churn, wander over to where the software developers sit and ask for a consensus definition of “clean code.” This probably won’t start a religious war — it’s not like asking for consensus on the best programming language or development tool. You’ll probably find a lot of enthusiastic agreement with different flavors of the same basic concept. This is true among luminaries of the field, as quoted here on DZone, and it’s most likely to be true in any given shop.
There will be agreement on the broad concepts and inevitable debate as the points become finer-grained. Developers can all agree that code should be “maintainable” and “easy to read,” but you might get a bit of fragmentation around subjects like variable naming or the relative compactness and ‘density’ of code. Have the developers look at a bit of code and ask them if it could be “cleaner” and you’ll probably get an array of responses, including potential disagreement and thrash. This will become especially true if they get hung up on cosmetic particulars like indentation, bracket placement, and casing.
So where does that leave us, exactly, when asked the deceptively simple question, “is this clean code?” Programmers can arrive at a broad consensus on how to answer that question, but not necessarily on the answer itself. They’ll all say, “well, it’s clean if it’s readable,” but some might give a particular bit of code a thumbs up while others give it a thumbs down. If you’re a developer, this can be fun or it can be frustrating. If you’re a non-technical stakeholder, such as a director, project manager, tester or business analyst, it can be confusing and maddening. “So is this code good or not!?”
“We can’t go on like this. We need to rewrite this thing from scratch.”
The Writing is on the Wall
These words infuriate CIOs and terrify managers and directors of software engineering. They’re uttered haltingly, reluctantly, by architects and team leads. The developers working on the projects on a day-to-day basis, however, often make these statements emphatically and heatedly.
All of these positions are understandable. The CIO views a standing code base as an asset with sunk cost, much the way that you’d view a car that you’ve paid off. It’s not pretty, but it gets the job done. So you don’t want to hear a mechanic telling you that it’s totaled and that you need to spend a lot of money on a new one. Managers reporting to these CIOs are afraid of being that mechanic and delivering the bad news.
Those are folks whose lives are meetings, PowerPoint decks, and spreadsheets, though. If you’re a developer, you live the day-to-day reality of your code base. And, to soldier on with the metaphor a bit, it’s pretty awful if your day-to-day reality is driving around a clunker that leaves car parts on the road after every pothole. You don’t just start to daydream about how nice it would be to ride around in a reliable, new car. You become justifiably convinced that doing anything less is a hazard to your well-being.
And so it comes to pass that hordes of developers storm the castle with torches and pitchforks, demanding a rewrite. What do we want? A rewrite! When do we want it? Now!
At first, management tries to ignore them, but after a while that’s not possible. The next step is usually bribery — bringing in ping pong tables or having a bunch of morale-building company lunches. If the carrot doesn’t work, sometimes the stick is trotted out and developers are ordered to stop complaining about the code. But, sooner or later, as milestones slip further and further and the defect count starts to mount, management gives in. If the problem doesn’t go away on its own, and neither carrots nor sticks seem to work, there’s no choice, right? And, after all, aren’t you just trusting the experts and shouldn’t you, maybe, have been doing that all along?