I’ve worked in the programming industry long enough to remember a less refined time. Back then, the CIO (or the CFO, since IT used to report to the CFO in many orgs) might have counted lines of code to measure the productivity of the development team. Even then, they probably understood the folly of such an approach. But, lacking better measures, they used that one anyway.
Today, you rarely, if ever, see that happen. But don’t take that to mean reductionist measures have stopped. Rather, they have just evolved.
Most commonly today, I see this crop up in the form of automated unit test coverage. A CIO or high-level manager becomes aware of general quality and cadence problems with the software. She may consult with someone or read a study and conclude that a robust, automated test suite will cure what ails her. She announces the initiative and rolls it out. Then, she does the logical thing and instruments her team’s process so that she can track progress and improvement with the testing initiative.
The problem with this arises from what, specifically, the group measures and improves. She wants to improve quality and predictability, so she implements a proxy solution. She then measures people against that proxy. And, often, they improve… against that proxy.
If you measure your organization’s test coverage and hold them accountable, you can rest assured that they will improve test coverage. Improved quality, however, remains largely an orthogonal concern.
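To make that gap concrete, here’s a contrived Python sketch (the function and its bug are invented for illustration) of a test that maxes out coverage while verifying nothing:

```python
def apply_discount(price, percent):
    # Production bug: percent is never validated or clamped.
    return price - price * (percent / 100)

def test_apply_discount():
    # This executes every line of apply_discount, so a coverage tool
    # reports 100% for it... but the test asserts nothing, so a 150%
    # "discount" (a negative price) sails through unnoticed.
    apply_discount(100, 150)
```

A coverage report gives this suite full marks; the defect tracker tells a different story.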
The CIO’s Leaky Abstraction
The issue here stems from what I might call a leaky organizational abstraction. If the CIO came from a software development background, this gets even more thorny.
Consider that a CIO or high-level manager generally concerns himself with organizational strategy. He approves and monitors budgets, signs off on major initiatives, decides the fate of applications in the application portfolio, etc. The CIO, in other words, makes business decisions that have a technical flavor. He deals in profits, losses, revenues, expenses, and organizational politics.
Through that lens, he might look at quality problems across the board as hits to the company’s reputation or drags on the bottom line. “We’re losing subscribers due to these bugs that happen at each rollout. We estimate that this costs us an additional $10,000 in revenue each month.” He would then pull the trigger on business solutions: hiring consultants to fix the problem, realigning his org chart, putting off milestones to focus on quality, etc.
But if he dives into the weeds, he trades a business person’s hat for a techie’s. “Move over, architects,” he says. “I know how you can fix this at the line level. I call it ‘automated test coverage,’ and I order you to start doing it.” In a traditionally organized corporate structure, the CIO begins doing the job of folks in his organization at his peril.
What The CIO Needs
Test coverage presents a highly technical solution to a business problem. Ideally, the people running the business should weigh in on its existential threats and opportunities. They should then delegate the solutions to people in more of an expert position.
For the CIO, this means deciding to spend money to stem the mounting quality problems. And then, it means hiring or positioning a quality expert to roll the initiative out and monitor the results. This person translates from shop talk back to business talk and converses with the CIO in that language. The CIO keeps track of the same measures that alerted him to the problem in the first place, while the implementer tracks test coverage (or whatever).
Let’s consider some examples.
High Coupling Correlates with Feature Slowdown
Some of the calls I receive from consulting clients involve complaints about application inflexibility. Organizations want to make seemingly minor changes and the development group says it can’t do them quickly. Or, perhaps, the sorts of changes that used to take a week now take a month.
Almost invariably, I find high coupling indicators in these codebases. I’ll see extensive use of global variables, rats’ nests of dependencies among namespaces and assemblies, etc. Even at the individual class level, you see strange couplings and radiating difficulty of change.
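To illustrate the shape of the problem (all names here are hypothetical, and the sketch is deliberately compressed), it often looks something like this:

```python
# A global that code all over the system reads and mutates.
CURRENT_USER = {"name": "pat", "roles": ["reports"]}

class BillingEngine:
    def calculate_invoice(self, user):
        return 42.0  # stand-in for real billing logic

class AuthService:
    def has_permission(self, user, action):
        return action in user["roles"]

class ReportGenerator:
    def generate(self):
        # This class news up its collaborators directly and reaches
        # into the global, so a change to billing, to auth, or to the
        # shape of CURRENT_USER ripples into reporting as well.
        if AuthService().has_permission(CURRENT_USER, "reports"):
            return BillingEngine().calculate_invoice(CURRENT_USER)
```

Nothing here looks alarming in isolation, which explains how these couplings accumulate until every change touches everything.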
In this scenario, the technical expert understands, can measure, and can speak about codebase coupling. The CIO understands the ramifications of continuing forward with mounting lethargy in response to changes.
Lack of Cohesion Correlates with Embarrassing Defects
You can tie a lack of cohesion in an application’s types directly to weird, seemingly inexplicable defects. If you create a God class that handles everything in the application, you shouldn’t feel surprised when changing the way the database stores customer entries somehow creates an issue with the login page.
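Here’s a toy Python sketch, with invented names, of how that kind of defect travels:

```python
class AppManager:
    """A God class: storage, login, and everything else in one place."""

    def __init__(self):
        # One packed string format serves both persistence and login.
        self.customers = {}  # email -> "name|password"

    def save_customer(self, email, name, password):
        # "Improving" how customer entries get stored here...
        self.customers[email] = f"{name}|{password}"

    def login(self, email, password):
        # ...quietly breaks login, which parses that very same string.
        record = self.customers.get(email, "")
        return record.split("|")[-1] == password
```

Because the class lacks cohesion, the storage format and the login logic share hidden state; change one without the other and you get the inexplicable login-page bug.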
To a consultant or architect, metrics around cohesion speak to this likelihood. To a CIO, this translates into “egg on face” factor during rollouts and such.
Complex Methods and Types Correlate with Future Inflexibility
“We’re not touching the business rules engine!” When you start to hear statements like that from the development organization, you know you have a problem. Specifically, they fear parts of the codebase and will actively resist modifying them. “If you want that, you need a total rewrite!”
I’ve seen enough codebases to know that you most commonly find this behavior (and inflexibility) in code with daunting levels of cyclomatic complexity. Methods and types like this seem to take on their own gravitational fields of complexity, becoming black holes. No one wants to venture near the event horizon.
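For reference, cyclomatic complexity roughly counts the independent paths through a method: one, plus one per branch point. A contrived Python example (names and rates invented):

```python
def calculate_shipping(order):
    if order["express"]:                      # +1
        rate = 20
    else:
        rate = 5
    if order["weight"] > 50:                  # +1
        rate += 15
    if order["international"]:                # +1
        rate *= 3
        if order["country"] in ("CA", "MX"):  # +1
            rate *= 0.8
    return rate
```

This little function already has a cyclomatic complexity of 5; the “don’t touch it” methods I encounter often measure in the dozens.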
But the CIO doesn’t need to understand complexity. That concerns technical folks. The CIO needs to understand, “what can’t I touch in this app and what’s at risk of becoming untouchable?”
The CIO Dashboard
Through some examples, I’ve covered a few metrics well known to architects and what they can mean for the business (and thus the CIO). You could reasonably envision this as a CIO dashboard designed to trigger conversations and decisions.
The CIO should not concern herself with member coupling, cohesion, and cyclomatic complexity. Key technical people should worry about those and translate them into meaningful business terms. As a simple example, imagine a dashboard featuring each assembly’s (or application’s) name alongside indicators labeled “slowdown likelihood,” “embarrassing defect likelihood,” and “inflexibility likelihood.” These could simply shift among green, yellow, and red as the underlying metrics moved.
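A minimal sketch of that translation layer, in Python, might look like the following. The thresholds and metric names are invented; the point is that raw metrics stay on the technical side of the function while only a traffic-light status reaches the CIO:

```python
def inflexibility_status(avg_cyclomatic_complexity: float) -> str:
    # Hypothetical thresholds; real ones would come from the team's
    # tooling and history.
    if avg_cyclomatic_complexity <= 3:
        return "green"
    if avg_cyclomatic_complexity <= 5:
        return "yellow"
    return "red"

dashboard = {
    "Billing": {"inflexibility likelihood": inflexibility_status(7.2)},
    "Reports": {"inflexibility likelihood": inflexibility_status(2.1)},
}
# Billing shows "red" and Reports shows "green"; the conversation starts there.
```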
You then have appropriate, non-leaky organizational abstractions. The CIO does not say, “architect, why is our method cyclomatic complexity up over 5 — get it down to 3!” Instead, the CIO asks, “architect, why did the inflexibility light on your application flip to yellow? Should this concern me?” In other words, it starts a conversation.
If you introduce a metric for your team to hit, they will hit it. So if you introduce a metric you don’t fully understand, you might get an outcome you don’t expect. Spare yourself that predictable pain. Use a composite metric dashboard, maintained by people who understand the details, to help guide your strategy.