Business Complexity vs. Implementation Complexity

It is good software design practice to make sure that a method fits entirely in the code editor, which typically shows 30 to 45 lines at a time. The root of this principle is easy to grasp: scrolling up and down through an overly long method impedes code readability.

Refactoring a too-large method or class means creating several code elements that are smaller in terms of textual lines, while ultimately the code behavior doesn't change. In other words, the business complexity remains the same, but the refactoring session reduces the implementation complexity (at least we hope so).

Software complexity is a subjective measure relative to human cognitive capabilities: something is complex when it requires effort to be understood by a human. Software complexity is a two-dimensional measure. To understand a piece of code one must understand both:

  • What the piece of code does at runtime, the behavior of the code: this is the business complexity.
  • How the actual implementation solves the business needs at runtime: this is the implementation complexity.

The whole point of software design, SOLID principles, code maintainability, patterns… is to avoid adding implementation complexity to the business complexity. But where does implementation complexity come from? There are five main sources of implementation complexity:

Too Large Code

We already mentioned this one. It is easy to limit implementation complexity by abiding by simple code rules that check thresholds on code metrics, such as the number of lines of code per method, method complexity, or the number of methods per class. The Single Responsibility Principle (SRP) also contributes to smaller implementations: fewer responsibilities for a class necessarily means less code.
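For illustration, here is a sketch of what such a threshold rule can look like in CQLinq, NDepend's C#-LINQ-based rule language; the thresholds shown are illustrative, not prescriptions:

```
// <Name>Avoid too large and too complex methods (illustrative)</Name>
warnif count > 0
from m in Application.Methods
where m.NbLinesOfCode > 40            // method no longer fits in the editor
   || m.CyclomaticComplexity > 15     // too many independent execution paths
select new { m, m.NbLinesOfCode, m.CyclomaticComplexity }
```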

Lack of Abstractions

An abstraction, such as an abstract method, an interface or even a delegate, is the minimum knowledge required to invoke an implementation at runtime without knowing it at design time. In other words, an abstraction hides implementation details. Abstraction is a great weapon for reducing implementation complexity because polymorphism can replace swaths of if/else/switch/case logic. Concretely, code like that:
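(A minimal sketch with hypothetical Shape, Circle and Rectangle types.)

```csharp
public enum ShapeKind { Circle, Rectangle }

// Conditional dispatch: every new shape kind forces another case here,
// and every consumer of Shape must repeat this kind of switch.
public static double ComputeArea(Shape shape)
{
    switch (shape.Kind)
    {
        case ShapeKind.Circle:
            var circle = (Circle)shape;
            return Math.PI * circle.Radius * circle.Radius;
        case ShapeKind.Rectangle:
            var rectangle = (Rectangle)shape;
            return rectangle.Width * rectangle.Height;
        default:
            throw new NotSupportedException("Unknown kind: " + shape.Kind);
    }
}
```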

Can be replaced with code like that:
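(The same hypothetical types, with the dispatch now handled by polymorphism.)

```csharp
public abstract class Shape
{
    // The abstraction: callers invoke Area() without knowing the implementation.
    public abstract double Area();
}

public sealed class Circle : Shape
{
    public double Radius { get; }
    public Circle(double radius) { Radius = radius; }
    public override double Area() => Math.PI * Radius * Radius;
}

public sealed class Rectangle : Shape
{
    public double Width { get; }
    public double Height { get; }
    public Rectangle(double width, double height) { Width = width; Height = height; }
    public override double Area() => Width * Height;
}
```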

Abstraction also reduces implementation complexity because it frees the developer's mind from implementation details.

The S in SOLID is the SRP (mentioned above) and is not related to abstraction. But the OLID principles are all about using abstractions wisely:

  • Open-Closed Principle (OCP): code should be open to extension through abstractions but closed to modification.
  • Liskov Substitution Principle (LSP): an implementation must honor the contract of the abstraction it implements.
  • Interface Segregation Principle (ISP): prefer several small, focused abstractions to one large one.
  • Dependency Inversion Principle (DIP): high-level code should depend on abstractions, not on implementation details.

How do we check for good usage of abstractions? There is no magic wand here, like the thresholds that limit too-large code elements. However, the Abstractness vs. Instability graph and metrics, and the Level metric, are a good start for identifying code areas that need more abstractions. Both are described in this post about the Dependency Inversion Principle.

State Mutability at Runtime

A common source of implementation complexity is mutable state. The human brain is not properly wired to anticipate what really happens at runtime in a program. While reviewing a class, it is hard to imagine how many instances will simultaneously exist at runtime and how the state of each of these instances will evolve over time. This is actually THE major source of problems when dealing with a multi-threaded program.

If a class is immutable, that is, if the state of its instances doesn't change once the constructor has run, its runtime behavior immediately becomes much easier to grasp, and as a bonus there is no need to synchronize access to immutable objects. See here a post explaining immutable classes in depth.

A method is a pure function if it produces its outputs from its inputs without modifying any state. Like immutable classes, pure functions are a good means of reducing implementation complexity.
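To make both notions concrete, here is a minimal sketch with a hypothetical Money type:

```csharp
using System;

// Immutable class: state is fixed once the constructor returns, so
// instances can be freely shared, including across threads.
public sealed class Money
{
    public decimal Amount { get; }
    public string Currency { get; }

    public Money(decimal amount, string currency)
    {
        Amount = amount;
        Currency = currency;
    }

    // Pure function: the output depends only on the inputs and no state
    // is modified; "mutation" is expressed by returning a new instance.
    public Money Add(Money other)
    {
        if (other.Currency != Currency)
            throw new ArgumentException("Currency mismatch", nameof(other));
        return new Money(Amount + other.Amount, Currency);
    }
}
```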

Some code rules exist to enforce immutability, like Structure should be Immutable or Fields should be marked as ReadOnly when possible. Being immutable for a class, or pure for a method, is such an essential property that dedicated C# keywords would be welcome, in the same way readonly exists for fields. Here is a proposal for C# support of an immutable keyword. For now, an ImmutableAttribute or a PureAttribute can be used to tag such elements, and code rules can check Types tagged with ImmutableAttribute must be immutable or Methods tagged with PureAttribute must be pure.
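A sketch of that tagging approach, assuming attributes declared in your own code base (names are illustrative):

```csharp
using System;

// Declaration-only attributes: they carry no behavior, they just make the
// design intent explicit so that a code rule can verify it.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Struct)]
public sealed class ImmutableAttribute : Attribute { }

[AttributeUsage(AttributeTargets.Method)]
public sealed class PureAttribute : Attribute { }

[Immutable]
public sealed class ContactName
{
    public string First { get; }
    public string Last { get; }

    public ContactName(string first, string last)
    {
        First = first;
        Last = last;
    }

    [Pure]
    public string FullName() => First + " " + Last;
}
```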

Over Coupling

When trying to re-engineer/understand/refactor a large piece of code (I mean something made of dozens of classes, like a big namespace or an assembly), the difficulty is directly proportional to the amount of coupling between the code elements considered. Both dependency graphs below show dependencies between 36 classes, but the left one contains 64 edges and the right one contains 133 edges: which one would you prefer to deal with?

One key strategy to control coupling is to layer components and make sure that there are no dependency cycles. This can be checked on namespaces, for example, with the code rule Avoid namespaces dependency cycles. Using abstractions is also a good way to limit over-coupling: if an interface has N implementations, then relying on the interface alone gives access to the N underlying behaviors at the coupling cost of a single type.

Lack of Unit Tests

Software testing is a large topic I won't cover here. But one key benefit of writing tests (apart from enforcing business rules and detecting regressions early) is to ensure that the code is testable. Testable code means less coupling, more abstractions and overall simpler code. Ultimately, if the code is easily testable, we can safely assume that its implementation complexity is kept low. Here also, code rules like Code should be tested can help enforce high testability.

One Measure for All

There are more sources of implementation complexity, but these five are certainly the biggest culprits. To reduce this unnecessary complexity one must be able to measure it. But how can too-large code, bad design, poorly tested code and more be unified into a single metric?

As we saw, most of these aspects can be enforced with code rules. A code rule produces issues, and for each issue the rule can estimate both the cost to fix it and the annual cost of leaving it unfixed. A famous analogy with the financial field says that:

  • The estimated cost to fix code smells is the Technical Debt: a measure of the implementation complexity.
  • The estimated annual cost of leaving code smells unfixed is the Annual Interest of the Debt: a measure of the business operation cost induced by poorly written and poorly tested code.

These estimations can be expressed in developer time, and ultimately in money, which management can use to make the right decisions. For example, an issue estimated at 30 minutes to fix but costing two hours per year in maintenance friction pays back its fixing cost within a few months.

Answers to arguments against 100% coverage

I've been enthusiastic about 100% coverage for more than a decade. The large NDepend code base we are working on will soon reach 90% overall coverage, and most tested classes are 100% covered.

In the heatmap below, small rectangles are methods, and clusters of rectangles are classes, namespaces and assemblies. By importing code coverage into this heatmap we can see at a glance that most classes are green: they are 100% covered. The heatmap also shows, in red and orange, areas with room for improvement. Areas in gray represent untestable code tagged with System.Diagnostics.CodeAnalysis.ExcludeFromCodeCoverageAttribute.

Not everybody agrees with 100% coverage, and the same counterarguments pop up again and again, so let me address them one by one.

I don’t waste my time writing tests to cover trivial code

Often the example of trivial code proposed is property getters and setters. Indeed, writing tests just to cover such get/set methods doesn't bring any value. But such trivial code is meant to be used by real code, isn't it?

For example, suppose we have a Contact class with a property like Contact.FirstName. This property is meant to be used: the module that builds an invoice and prints the contact name on it certainly uses it. And if the invoice-building module is well tested, Contact.FirstName necessarily gets called implicitly at test time.
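A sketch of this implicit coverage, with hypothetical InvoiceBuilder and Invoice names (NUnit syntax):

```csharp
using NUnit.Framework;

[TestFixture]
public class InvoiceBuildingTests
{
    // Contact.FirstName is never tested directly, yet this test covers it
    // because the invoice logic under test reads it.
    [Test]
    public void InvoiceHeader_Contains_Contact_Name()
    {
        var contact = new Contact { FirstName = "Ada", LastName = "Lovelace" };

        var invoice = new InvoiceBuilder().WithContact(contact).Build();

        Assert.That(invoice.Header, Does.Contain("Ada Lovelace"));
    }
}
```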

So if some trivial code is not covered by tests, the idea is not to write dumb tests to cover it; the idea is to ask yourself why the trivial code is not already implicitly covered by tests that exercise real code. You will certainly find room for improvement in the test suite in charge of the real code that depends on the trivial uncovered code. A future post will go through the benefits of 100% coverage, but as this point suggests, one key benefit is that a hole in 100% covered code is always an indicator of something interesting to improve.

90% coverage is enough

Often teams get satisfied with 90% or 95% coverage. At the scale of a large code base such a score is outstanding. But at the scale of a class or a component, 100% must be the goal.

The key is testability. We have all experienced code that is easy to test and code that is hard to test. Maintainability is hard to measure, but testability is something concrete, and good testability leads to good maintainability. In other words, if a component can be fully tested seamlessly, it will be easy to maintain.

On the other hand, in the real world we often end up with a component that contains a small portion that is hard to test, and we get satisfied with 90% or 95% coverage. But this is sweeping dust under the carpet: this small portion of code is hard to test because it is not well designed, and as a consequence it is bug-prone. Hence we end up not testing the most sensitive part of the code, the one that likely concentrates most of the problems!

If, after writing a test suite, you end up with a few if/else blocks that are hard to test, do yourself a favor: refactor until you reach seamless, full testability.

What matters is what gets asserted in tests, not the amount of coverage

Suppose you have a 1,000-line linear method: it only contains a linear sequence of processing, with no if/then/switch/case… Suppose you have a test with zero assertions invoking this method. You end up with 1,000 lines of code 100% covered but not tested at all.

This fictitious situation, often proposed as a counterargument to 100% coverage, doesn't reflect reality. In the real world, if one works hard to get a complex class 100% covered, I cannot imagine that the tests don't contain a healthy amount of assertions. No matter whether a TDD test-first approach is used or tests get written at the same time as the code under test: writing tests leads to thinking more and thinking better. One doesn't write a test without having a few points in mind to assert.

But there is more. Why should only tests contain assertions? The code itself is well suited to contain assertions. For more than a decade we have stuffed the NDepend code base with assertions: everything that can be asserted gets asserted, and we have ended up with more than 26K assertions. A good half of those come from non-nullable references, and C# 8 nullable reference types will relieve us of those. But all the other assertions will remain: non-empty strings, special string formats, non-zero counters, IDictionary.ContainsKey(), the non-zero denominator of a division, !object.IsDisposed, loop invariants… All those assertions are checked at test time and fail tests when violated. Ultimately, most tests end up with a 1/10 ratio between the number of assertions checked in the test itself and the number of assertions checked in the code under test.

We still rely on the good old System.Diagnostics.Debug.Assert(), but this is an implementation detail. Any assertion library can be used from the code itself (including a custom one, or the Code Contracts library, which would have deserved more love). The only thing that matters is to tune assertions to fail a test upon violation, typically by throwing an exception. A related topic is whether those assertions must also be checked at production time; this is another interesting debate, with pros and cons.
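One possible way to do that tuning with Debug.Assert(), sketched under the assumption that the test runner reports unhandled exceptions as failures:

```csharp
using System;
using System.Diagnostics;

// Replaces the default listener (which would pop a dialog) with one that
// throws, so any violated Debug.Assert() in the code under test fails the test.
sealed class ThrowingTraceListener : TraceListener
{
    public override void Fail(string message, string detailMessage)
        => throw new InvalidOperationException(
               $"Assertion failed: {message} {detailMessage}");

    public override void Write(string message) { }
    public override void WriteLine(string message) { }
}

// In the test suite's one-time setup:
//   Trace.Listeners.Clear();
//   Trace.Listeners.Add(new ThrowingTraceListener());
```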

Some code just cannot be tested

Indeed, there are some API calls that just cannot be tested, like calls to System.Windows.MessageBox. Such calls need to be mocked. This way the code is cleanly split between 100% covered code and code tagged with System.Diagnostics.CodeAnalysis.ExcludeFromCodeCoverageAttribute. Dependency Injection can be used to inject the untestable implementations into the tested code.
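A sketch of such a seam, with hypothetical IMessageBoxService and ImportController names:

```csharp
using System.Diagnostics.CodeAnalysis;
using System.IO;

public interface IMessageBoxService
{
    void ShowError(string message);
}

// Untestable by nature (pops real UI), so it is excluded from coverage.
[ExcludeFromCodeCoverage]
public sealed class MessageBoxService : IMessageBoxService
{
    public void ShowError(string message)
        => System.Windows.MessageBox.Show(message, "Error");
}

public sealed class ImportController
{
    private readonly IMessageBoxService _messageBox;

    public ImportController(IMessageBoxService messageBox)
        => _messageBox = messageBox;

    // Fully coverable: tests inject a fake IMessageBoxService.
    public void Import(string path)
    {
        if (!File.Exists(path))
            _messageBox.ShowError($"File not found: {path}");
        // ... actual import logic
    }
}
```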

Again, testability is a central characteristic of code. Partitioning code according to its testability makes sense.

However, there are situations where such untestable APIs tend to proliferate. Then your choice is either to mock everything or to accept living with untested code. Being in this situation is a sign that the API is immature, because it is not designed for testability, and something radical must be done about it: get rid of it, contribute to it (if OSS), fork it…

UI Code is untestable

Since testing code became a popular practice, UI code has always been treated as an awkward case: there are frameworks dedicated to UI testing, but the practice remains tedious and increases test maintenance. This is not satisfying.

As for any code, UI code must be written with testability in mind. UI code can contain UI logic but should not contain business logic. For example, a UI class shouldn't contain the implementation of an IsContactReadOnly() method (business logic), but it can call such a method, declared in a non-UI, testable class, to decide whether a contact's First Name textbox should be read-only or editable (UI logic). Ideally, UI code looks as much as possible like an empty shell.
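A sketch of that split, with hypothetical ContactRules and ContactForm names (WinForms-style):

```csharp
// Business logic lives in a plain, fully testable class...
public sealed class ContactRules
{
    public bool IsContactReadOnly(Contact contact)
        => contact.IsArchived || !contact.CurrentUserCanEdit;
}

// ...while the UI class stays an empty shell that only binds the result.
public partial class ContactForm : System.Windows.Forms.Form
{
    private readonly ContactRules _rules = new ContactRules();

    private void RefreshFirstNameTextBox(Contact contact)
    {
        // UI logic only: translate a business decision into a widget state.
        firstNameTextBox.ReadOnly = _rules.IsContactReadOnly(contact);
    }
}
```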

Nowadays, web development largely dominates desktop development. The backend emits HTML, CSS and JavaScript: textual content that, seen as data, is easily testable material. However, when possible it is preferable to test the IsContactReadOnly() business logic directly from the backend code than to test it indirectly, for example by checking whether an HTML textbox is read-only or editable. Fortunately, modern web platforms like ASP.NET Core propose testing strategies.

Desktop UI is a different beast to test, and there is no magic. To test our UI-specific code we have invested in an architecture where the UI can be piloted, both from the main-form code and from tests. This architecture relies massively on mediator classes: any UI portion can pilot any other UI portion through these mediators. The mediators are used at production runtime, and tests supersede them to pilot the UI at test time. Tests dedicated to UI testing have few assertions, but they are not useless: they execute all the assertions stuffed into our UI code. At the end of the day some of our UI classes remain not fully covered, but close to it. The small portions left untested don't contain error-prone code, only untestable UI peculiarities, like complex designer code or some DPI-related code that can only be tested manually. Experience has proved that this approach saved us from regression bugs many times.
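A minimal sketch of the mediator idea (all names are hypothetical, not NDepend's actual classes):

```csharp
using NUnit.Framework;

// The mediator exposes a narrow surface through which both production
// code and tests pilot a UI portion, without referencing the form directly.
public interface IContactPanelMediator
{
    void ShowContact(int contactId);
    bool IsFirstNameEditable { get; }
}

[TestFixture]
public class ContactPanelUITests
{
    // TestUIHost is a hypothetical helper that boots the UI for testing.
    // The test has few explicit assertions: the many assertions embedded
    // in the UI code itself also get exercised by this call.
    [Test]
    public void ShowContact_Makes_FirstName_Editable()
    {
        IContactPanelMediator mediator = TestUIHost.GetContactPanelMediator();
        mediator.ShowContact(42);
        Assert.That(mediator.IsFirstNameEditable, Is.True);
    }
}
```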


100% coverage is a sensitive topic. In the NDepend team we are 100% coverage driven. In a future post we'll detail the benefits of being 100% coverage driven, but I wanted first to address the non-trivial reasons that make 100% coverage worth pursuing in most situations.