If you hang around agile circles long enough, you’re quite likely to hear the terms “big, visible chart” and “information radiator.” I think both of these loosely originate from the general management concept that, to demonstrably improve something, you must first measure and track it. A “big, visible chart” is information that an individual or team displays in, well, big and visible fashion. An information radiator is more or less the same concept (I’m sure it’s possible for someone who is an 8th degree agile black belt to sharp-shoot this, but I think you’d be hard pressed to argue that this isn’t the gist).
Big, Visible Information Radiators
As perhaps the most ubiquitous example imaginable, consider the factory sign that proudly displays, “____ days since the last accident,” where, hopefully, the blank contains a large number. A sign like this is far from a feel-good vanity metric; it actually alters behavior. Imagine a factory where accidents happen frequently. Line managers can call meetings and harp on the importance of safety, but probably to limited effect. After all, the prospect of a floor incident is abstract, particularly for someone to whom it has never happened.
But if you start putting a number on it, the concept becomes less abstract. “Currently we have an incident every day, but we want to make it happen only once per month, and we’re going to keep track.” Now, each incident means that the entire factory fails at its stated goal, and it does so visibly. Incidents go from “someone else’s problem that you hear about anecdotally from time to time” to “the thing that’s making us fail visibly.” And then you’ll find that doing nothing but making the number very visible will actually alter behavior: people will be more careful so as not to be responsible for tanking the team’s metric.
In the world of agile, the earliest and most common bit of information to see was the team’s card wall: which features were in progress, which were being tested, which were complete, and who was working on what. This served double duty of creating public visibility/accountability and providing an answer to the project manager’s “whatcha doin?” without interruptions or mind-numbing status meetings. But times and technologies progressed, resulting in other information being visible to the team at all times.
These days, it’s common to see a big television or monitor located near a team, displaying the status of the team’s code on the build machine. Jenkins is a tool very commonly used for this; it shows projects in red for failing and green for passing. If you want to get creative, you can use home automation tech to turn red or green lamps on and off. For the team, this exposes broken builds as a deficiency and incentivizes team members to keep the build in a consistently passing state.
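To make this concrete, here is a minimal sketch of how a team might poll Jenkins for that red/green signal, using Jenkins’s JSON remote API (which encodes job health in a `color` field). The server URL and job name shown are hypothetical placeholders; a script like this could feed a wall display or toggle a smart lamp.

```python
import json
import urllib.request


def classify(color: str) -> str:
    """Map a Jenkins 'color' value to a human-readable status.

    Jenkins encodes job health as a color: 'blue' means passing,
    'red' means failing, and an '_anime' suffix means a build is
    currently in progress.
    """
    if color.endswith("_anime"):
        return "building"
    return "passing" if color == "blue" else "failing"


def build_status(base_url: str, job: str) -> str:
    """Fetch the current status of one job from a Jenkins server."""
    url = f"{base_url}/job/{job}/api/json?tree=color"
    with urllib.request.urlopen(url) as response:
        return classify(json.load(response)["color"])


# Example usage (hypothetical server and job name):
# build_status("http://jenkins.example.com", "my-project")
```

From here, wiring the result to a lamp or monitor is just a matter of acting on the returned string on a timer.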
Getting Further Upstream
In my travels, I’ve observed these practices being extremely important for teams, but I think that teams can take the concept a lot further than they often do, provided they have the right tools and the right things to measure. Specifically, I’ve often found it curious that teams tend to capture and expose process metrics (like story points) or compiling and runtime metrics (like the aforementioned build status, or something like unit test coverage) to the exclusion of source code metrics. After all, compiling and running are downstream activities — why not start at the source (no pun intended)?
It is this musing that brings me to the NDepend dashboard. Below is a quick screenshot of what it looks like for my ChessTDD project.
It’s a visually appealing layout that conveys a lot of information without leaving you feeling overwhelmed. (Though, if you disagree, it’s also quite customizable.) At a glance I can see a snapshot of the basic stats and rule violations in the codebase. I can also see a rich history of the code, telling me what the trends look like. Looking at these graphs, I can see that the number of rules I’ve violated remains flat, but that the instances of those violations are slowly increasing. (I haven’t configured NDepend in this project to cull out violations I don’t want to track; if I had, these numbers wouldn’t make me look quite so lazy.)
On top of that, there are a number of other graphs that I can keep track of. And, if I want to see other things, I can choose from a rich set of customizations that allow me to build my own graphs. I can track an amazing number of properties of my codebase as it exists in source control, rather than as it exists while being built or run.
The actual “radiator” component of the NDepend dashboard could take the form of appearing on the big TV in the team room, or it could take the form of being the Visual Studio start screen for each developer on the team. As long as everyone sees it and is made conscious of the trends, it will serve its purpose.