NDepend

Improve your .NET code quality with NDepend

5 Tips to Help You Visualize Code

Source code doesn’t have any physical weight — at least not until you print it out on paper.  But it carries a lot of cognitive weight.  It starts off simply enough. But before long, you have files upon files, folders upon folders, and more lines of code than you can ever keep straight.  This is where the quest to visualize code comes in.

The solution file and namespace organization make for pretty unhelpful visualization aids.  But that’s nothing against those tools.  It’s just not what they’re for.  Nevertheless, if the only way you attempt to visualize code involves staring at hierarchical folders, you’re gonna have a bad time.

How do most people handle this?  Well, they turn to whiteboards, formal design documents, architecture diagrams, and the like.  These represent much more powerful visual aids, and they tend to serve as table stakes for meaningful software development.

But it’s a siren song.  It’s a trap.

Why?  Well, as I’ve discussed previously, those visualization aids just represent someone’s cartoon of what they think the code will look like when complete.  You draw up a nice layer-cake architecture, and you wind up with something that looks more like six tumbleweeds glued to a barbed wire fence.  Those visual aids are great…for visualizing what everyone wishes your code looked like.

What I want to talk about today are strategies to visualize code — your actual code, as it exists.

Continue reading 5 Tips to Help You Visualize Code

A problem with extension methods

We like extension methods. When named appropriately, they can both make the calling code clearer and isolate static methods from the classes on which they operate.

But when using extension methods, a breaking change can happen.  This risk is very concrete: it just happened to us.

Since 2012, NDepend.API has offered a generic Append() extension method:
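Roughly, that extension has the following shape (a minimal sketch; the containing class name is an assumption, not the actual NDepend.API declaration):

```csharp
using System.Collections.Generic;

namespace NDepend.Helpers {
   // Hypothetical container class name, for illustration only.
   public static class ExtensionMethodsEnumerable {
      // Returns the source sequence followed by one extra element.
      public static IEnumerable<T> Append<T>(this IEnumerable<T> source, T element) {
         foreach (var item in source) {
            yield return item;
         }
         yield return element;
      }
   }
}
```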

Two default rules use this extension method: Avoid namespaces dependency cycles and Avoid types initialization cycles.

Last month, on Oct 17th 2017, Microsoft released .NET Framework v4.7.1, which implements .NET Standard 2.0. Around 200 .NET Standard 2.0 APIs were missing from .NET Framework v4.6.1, and one of those missing APIs is System.Linq.Enumerable.Append().

Within NDepend, rules, quality gates, trend metrics … basically everything is a C# LINQ query, stored as text and compiled and executed on the fly. Since the compilation environment uses both namespaces NDepend.Helpers and System.Linq, when running NDepend on top of the .NET Framework v4.7.1, both Append() extension methods are visible. As a consequence, for each query calling the Append() method, the compilation fails with an ambiguous-call error.

Fortunately, a user notified us of this problem, which we hadn’t yet caught ourselves, and we just released NDepend v2017.3.2 to fix it.  Only one clean fix is possible to make it compatible with all .NET Framework versions: refactor all calls to the Append() extension method into a classic static method invocation, with an explanatory comment:
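In sketch form, the change looks like this (identifiers, including the ExtensionMethodsEnumerable class name carried over from the sketch above, are assumptions):

```csharp
using System.Collections.Generic;
using System.Linq;
using NDepend.Helpers;

class QueryExample {
   static IEnumerable<string> BuildPath(IEnumerable<string> path, string last) {
      // Before: extension-method syntax.  On .NET Framework v4.7.1 this fails to
      // compile, because Append() is ambiguous between NDepend.Helpers and System.Linq.
      //    return path.Append(last);

      // After: a classic static method invocation, with an explanatory comment.
      // Explicit call: Append() became ambiguous when .NET Fx v4.7.1 added
      // System.Linq.Enumerable.Append().
      return ExtensionMethodsEnumerable.Append(path, last);
   }
}
```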

We expect more support requests on this in the coming weeks and months, as more and more users run the .NET Fx v4.7.1 without changing their rule sets.  There is no lesson learned here: this situation can happen, but it happens rarely, and it shouldn’t prevent you from declaring and calling extension methods.  The more mature the frameworks you rely on, the less likely it is to happen.

CRAP Metric Is a Thing And It Tells You About Risk in Your Code

I won’t lie.  As I thought about writing this post, I took special glee in contemplating the title.  How should I talk about the CRAP metric?  *Snicker*

I guess that just goes to show you that some people, like me, never truly grow up.  Of course, I’m in good company with this, since the original authors of the metric had the same thought.  They wanted to put some quantification behind the common, subjective declaration, “This code is crap!”

To understand that quantification, let’s first consider that CRAP is actually an acronym: C.R.A.P.  It stands for change risk anti-patterns, so it addresses the risk of changing a bit of code.  In other words, methods with high CRAP scores are risky to change.  So the CRAP metric is all about assessing risk.

When you get a firm grasp on this metric, you get a nice way to assess risk in your codebase.

The CRAP Metric: Getting Specific

Okay, so how does one quantify risk of change?  After all, there are a lot of ways that one could do this.  Well, let’s take a look at the formula first.  The CRAP score is a function of methods, so we’ll call it CRAP(m), mathematically speaking.  (And yes, typing CRAP(m) made me snicker all over again.)

Let CC(m) = the cyclomatic complexity of a method and U(m) = the fraction of that method’s code not covered by unit tests (from 0 to 1).

CRAP(m) = CC(m)^2 * U(m)^3 + CC(m).

Alright, let’s unpack this a bit.  To arrive at a CRAP score, we need a method’s cyclomatic complexity and its code coverage (or really lack thereof).  With those figures, we multiply the square of a method’s complexity by the cube of its rate of uncovered code.  We then add its cyclomatic complexity to that.  I’ll discuss the why of that a little later, but first let’s look at some examples.

First, consider the simplest, least CRAP-y method imaginable: a method completely covered by tests and with no control flow logic.  That method has a cyclomatic complexity of 1 and an uncovered fraction of 0.  That means that CRAP(m) = 1^2 * 0^3 + 1 = 1.  So the minimum CRAP metric score is 1.  What about a gnarly method with no test coverage and cyclomatic complexity of 6?  CRAP(m) = 6^2 * 1^3 + 6 = 42.
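If you want to sanity-check those numbers yourself, the formula is a one-liner in C# (a throwaway helper, not part of any tool):

```csharp
using System;

static class CrapMetric {
   // CRAP(m) = CC(m)^2 * U(m)^3 + CC(m), where U is the uncovered fraction (0 to 1).
   public static double Crap(int cc, double uncovered) =>
      Math.Pow(cc, 2) * Math.Pow(uncovered, 3) + cc;
}

// CrapMetric.Crap(1, 0.0) returns 1:  fully covered, single path.
// CrapMetric.Crap(6, 1.0) returns 42: six paths, zero coverage.
```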

The authors of this metric define a CRAP-y method as one with a CRAP score greater than 30, so that last method would qualify.

Continue reading CRAP Metric Is a Thing And It Tells You About Risk in Your Code

Code Reuse is Not a Good Goal

Wait, wait, wait.  Put down the pitchforks and listen for a minute.  You’re probably thinking that I’m about to tout the “virtues” of copy/paste programming or something.  But I assure you I’m not going to do that.  Instead, I’m going to speak to a similar but subtly different topic: code reuse as a first-class goal.

If you’ve read The Pragmatic Programmer, you know about the DRY principle.  You’ll also know that the underlying evil comes from duplication of knowledge in your system.  This creates inconsistencies and maintenance headaches.  So, since you duplicate code at your peril, isn’t code reuse a good thing?  Isn’t it the opposite of code duplication?

No, not exactly.  You can write “hello world” without any duplication.  But you can also write it without reusing it or anything in it.

Code Reuse as a First-Class Goal

So what, then, do I mean when I talk about code reuse as a first-class goal?  I’m talking about a philosophy that I see frequently in my consulting, especially in the enterprise.

The idea seems to come from a deep fear of rework and waste.  If we think of the history of the corporation, the Industrial Revolution saw an explosion in manufacturing driven by a global race for efficiency.  The world scrambled to make things cheaper and faster, and waste or rework in the process impeded that.

Today and in our line of work, we don’t manufacture widgets.  Instead, we produce software.  But we still seem to have this atavistic terror that someone, somewhere, might already have written a data access layer component that you’re about to write.  And you writing it would be a waste.

In response to this fear, organizations come up with plans that can get fairly elaborate.  They create “centers of excellence” that monitor teams for code reuse opportunities, looking to stamp out waste.  Or they create sophisticated code sharing platforms and internal app stores.  Whatever the details, they devote significant organizational resources to avoiding this waste.

And that’s actually great, in some respects.  I mean, you don’t want to waste time reinventing wheels by writing your own source control tools and logging frameworks.  But things go sideways when the goal becomes not one of avoiding reinvented wheels but instead one of seeing how much of your own code you can reuse.

Let’s take a look at some of the problems that you see when organizations really get on the “code reuse” horse.

Continue reading Code Reuse is Not a Good Goal

The Singleton Design Pattern: Impact Quantified

This post has been about a month in the offing.  Back in August, I wrote about what the singleton pattern costs you.  This prompted a good bit of discussion, most of which was (as it always is) anecdotal.  So a month ago, I conceived of an experiment that I called the singleton challenge.  Well, the results are in.  I’m going to quantify the impact of the singleton design pattern on codebases.

I would like to offer an up-front caveat.  I’ve been listening lately to a fascinating audiobook called “How to Measure Anything,” and it has some wisdom for this situation.  Measurement is primarily about reducing uncertainty.  And one of the driving lessons of the book is that you can measure things — reduce uncertainty — without getting published in a scientific journal.

I mention that because it’s what I’ve done here.  I’ll get into my methodology momentarily, but I’ll start by conceding the fact that I didn’t (and couldn’t) control for all variables.  I looked for correlation as a starting point because going for causation might prove prohibitive.  But I think I took a much bigger bite out of trying to quantify this than anyone has so far.  If anyone has, I’ve never seen it.

A Quick Overview of the Methodology

As I’ve mentioned in the past on this blog, I earn a decent chunk of my consulting income doing application portfolio assessments.  I live and breathe static code analysis.  So over the years, I’ve developed an arsenal of techniques and intellectual property.

This IP includes an extensive codebase assessor that makes use of the NDepend API to analyze codebases en masse, store the results, and report on them.  So I took this thing and pointed it at GitHub.  I then stored information about a lot of codebases.
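For a sense of what that automation looks like, here’s a rough sketch of driving NDepend.API to analyze one codebase’s assemblies, based on my recollection of the API’s getting-started sample.  Treat the exact type and member names as assumptions:

```csharp
using System.Collections.Generic;
using NDepend;            // NDepend.API
using NDepend.Analysis;
using NDepend.CodeModel;
using NDepend.Path;
using NDepend.Project;

class MassAnalyzer {
   // Analyze the compiled, non-third-party assemblies of one codebase
   // and return the resulting code model for querying and storage.
   static ICodeBase Analyze(ICollection<IAbsoluteFilePath> assemblies) {
      var projectManager = new NDependServicesProvider().ProjectManager;
      IProject project = projectManager.CreateTemporaryProject(
         assemblies, TemporaryProjectMode.Temporary);
      IAnalysisResult result = project.RunAnalysis();
      return result.CodeBase;
   }
}
```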

But let’s get specific.  Here’s a series of quick-hitter bullets about the experiment that I ran:

  • I found this page with links to tons of C# projects on GitHub, so I used that as a “random” selection of codebases that I could analyze.
  • I gave my mass analyzer an ordered list of the codebase URLs and turned it loose.
  • Anything that didn’t download properly, decompress properly, or compile properly (migrating to Core, restoring NuGet packages, and building from the command line), I discarded.  This probably creates a bias toward better codebases.
  • Minus problematic codebases, I built all solutions in the directory structure and made use of all compiled, non-third-party DLLs for analysis.
  • I stored the results in my database and queried the same for the results in the rest of the post.

I should also note that, while I invited anyone to run analysis on their own code, nobody took me up on it.  (By all means, still do it, if you like.)

Singleton Design Pattern: The Results in Broad Strokes

First, let’s look at the scope of the experiment in terms of the code I crunched.  I analyzed

  • 100 codebases
  • 986 assemblies
  • 5,086 namespaces
  • 72,615 types
  • 501,257 methods
  • 1,495,003 lines of code

From there, I filtered down raw numbers a bit.  I won’t go into all of the details because that would make this an immensely long post.  But suffice it to say that I discounted certain pieces of code, such as compiler-generated methods, default constructors, etc.  I adjusted this so we’d look exclusively at code that developers on these projects wrote.

Now, let’s look at some statistics regarding the singleton design pattern in these codebases.  NDepend has functionality for detecting singletons, which I used.  I also used more of its functionality to distinguish between stateless singleton implementations and ones containing mutable state.
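To make that distinction concrete, here’s roughly what each flavor looks like (hypothetical classes for illustration, not code from the analyzed projects):

```csharp
using System.Collections.Generic;

// A stateless singleton: one shared instance, but no mutable fields.
public sealed class StatelessFormatter {
   public static readonly StatelessFormatter Instance = new StatelessFormatter();
   private StatelessFormatter() { }
   public string Format(string s) => s.Trim();
}

// A singleton with mutable state: the shared instance accumulates data,
// effectively acting as hidden global state.
public sealed class StatefulCache {
   public static readonly StatefulCache Instance = new StatefulCache();
   private StatefulCache() { }
   private readonly Dictionary<string, object> _entries = new Dictionary<string, object>();
   public void Put(string key, object value) { _entries[key] = value; }
   public object Get(string key) => _entries.TryGetValue(key, out var value) ? value : null;
}
```

Here’s how that breaks down: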

Continue reading The Singleton Design Pattern: Impact Quantified

You Have No Excuse for Dead Code

In darker times, software management would measure productivity as a function of lines of code.  More code means more done, right?  Facepalm.  When I work with IT management in my capacity as a consultant, I encourage them to view code differently.  I encourage them to view code as a liability, like inventory.  And if even useful code is a liability, think what a boat anchor dead code is.

I once wrote a fun post about the fate of dead code in your codebase.  And while I enjoyed writing that, it had a serious underlying message.  Dead code costs you time, money, and maintenance headaches.  And it has absolutely no upside.

A Working Definition for Dead Code

Okay. If I’m going to make a blog post out of disparaging dead code, I should probably offer a definition.  Let’s do that here.

Some people will draw a distinction between code that can’t be executed (unreachable) and executed code whose effects don’t matter (dead).  I acknowledge this definition but won’t use it here.  For the sake of simplicity and clarity of message, let’s create a single category of dead code: any code in your codebase that has no bearing on your application’s behavior is, for our purposes here, dead.
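For instance, both methods below are dead under that definition (a contrived sketch):

```csharp
using System;

public static class DeadCodeExamples {
   public static int ParsePort(string raw) {
      return int.Parse(raw);
      // Unreachable: control can never get past the return above.
      Console.WriteLine("never happens");
   }

   // Reachable in principle, but never called from anywhere in the
   // codebase, so it has no bearing on the application's behavior.
   private static string BuildGreeting(string name) => $"Hello, {name}!";
}
```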

The Problems with Dead Code

Having defined it, what’s the problem?  If it has no bearing on your application’s behavior, what’s the harm?  How does it cost time and money, as I claimed a moment ago?

Well, simply put, your code does not live in a shrink-wrapped vacuum.  As your application evolves, developers have to change the code.  When you have only code that matters in your codebase, they can do this with the most efficiency.  If, on the other hand, you have thousands of lines of useless code, these developers will spend hundreds of hours maintaining that useless code.

Think of having dead code as being reminiscent of running your heat in the winter while keeping all of your windows open.  It’s self-defeating and wasteful.

But even worse, it’s a totally solvable problem.  Let’s take a look at different types of dead code that you encounter and what you can do about it.

Continue reading You Have No Excuse for Dead Code

Static analysis of .NET Core 2.0 applications

NDepend v2017.3 has just been released with major improvements. One of the most requested features, now available, is support for analyzing .NET Core 2.0 and .NET Standard 2.0 projects. .NET Core and its main flavor, ASP.NET Core, represent a major evolution for the .NET platform. Let’s have a look at how NDepend analyzes .NET Core code.

Resolving .NET Core third party assemblies

In this post, I’ll analyze the OSS application ASP.NET Core / EntityFramework MusicStore hosted on GitHub. From the Visual Studio solution file, NDepend resolves the application assembly MusicStore.dll and also two test assemblies that we won’t analyze here. In the screenshot below, we can see that:

  • NDepend recognizes the .NET profile, .NET Core 2.0, for this application.
  • It resolves several folders on the machine that are related to .NET Core, especially NuGet package folders.
  • It resolves all 77 third-party assemblies referenced by MusicStore.dll. This is important since many code rules and other NDepend features take into account what the application code is using.

It is worth noting that the .NET Core platform assemblies are highly granular. A simple website like MusicStore references no fewer than 77 assemblies. This is because the .NET Core framework is implemented through a few NuGet packages that each contain many assemblies. The idea is to ship the application with only the assemblies it needs, in order to reduce the memory footprint.

.NET Core 2.0 third party assemblies granularity

NDepend v2017.3 has a new heuristic to resolve .NET Core assemblies. This heuristic is based on .deps.json files, which contain the names of the referenced NuGet packages. Here we can see that 3 NuGet packages are referenced by MusicStore. From these package names, the heuristic resolves the third-party assemblies (in the NuGet store) referenced by the application assemblies (MusicStore.dll in our case).

NuGet packages referenced in .deps.json file
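To give a sense of the file’s shape, a .deps.json lists packages roughly like this (an illustrative sketch, not MusicStore’s actual file):

```json
{
  "targets": {
    ".NETCoreApp,Version=v2.0": {
      "MusicStore/1.0.0": {
        "dependencies": {
          "Microsoft.AspNetCore.All": "2.0.0"
        }
      }
    }
  },
  "libraries": {
    "Microsoft.AspNetCore.All/2.0.0": {
      "type": "package",
      "serviceable": true,
      "sha512": "..."
    }
  }
}
```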

Analyzing .NET Standard assemblies

Let’s be clear that NDepend v2017.3 can also analyze .NET Standard assemblies. Interestingly enough, since .NET Standard 2.0, .NET Standard assemblies reference a unique assembly named netstandard.dll, found in C:\Users\[user]\.nuget\packages\NETStandard.Library\2.0.0\build\netstandard2.0\ref\netstandard.dll.

By decompiling this assembly, we can see that it doesn’t contain any implementation, but it does contain all types that are part of .NET Standard 2.0. This makes sense if we remember that .NET Standard is not an implementation, but is a set of APIs implemented by various .NET profiles, including .NET Core 2.0, the .NET Framework v4.6.1, Mono 5.4 and more.

Browsing how the application is using .NET Core

Let’s come back to the MusicStore application that references 77 assemblies. This assembly granularity makes it impractical to browse dependencies with the dependency graph, since this generates dozens of items. We can see that NDepend suggests viewing this graph as a dependency matrix instead.

NDepend Dependency Graph on an ASP.NET Core 2.0 project

The NDepend dependency matrix can scale seamlessly to a large number of items. The numbers in the cells also provide a good hint about the represented coupling. For example, here we can see that 22 members of the assembly Microsoft.EntityFrameworkCore.dll are used by 32 methods of the assembly MusicStore.dll, and a menu lets us dig into this coupling.

NDepend Dependency Matrix on an ASP.NET Core 2.0 project

Clicking the menu item Open this dependency shows a new dependency matrix where only the members involved are kept (the 32 elements in columns use the 22 elements in rows). This way you can easily dig into which part of the application is using what.

NDepend Dependency Matrix on an ASP.NET Core 2.0 project

All NDepend features now work when analyzing .NET Core

We saw how to browse the structure of a .NET Core application, but let’s underline that all NDepend features now work when analyzing .NET Core applications. On the Dashboard we can see code quality metrics related to Quality Gates, Code Rules, Issues and Technical Debt.

NDepend Dashboard on an ASP.NET Core 2.0 project

Also, most of the default code rules have been improved to avoid reporting false positives on .NET Core projects.

NDepend code rules on an ASP.NET Core 2.0 project

We hope you’ll enjoy using all your favorite NDepend features on your .NET Core projects!

Without Seeing Your Application’s Dependency Graph, You’re Flying Blind

Software architecture tends to be a pretty hard game.  Writing scripts and little toy apps is easy enough.  You build something and then you run it, confirming it does what you want.  But then the software grows in scope and complexity, and things get tough.  And it’s only once things get tough and architects enter the fray that you really worry about something called a dependency graph.

At that point, the dependency graph really matters to anyone interested in architecture.

What is a Dependency Graph?

Let’s start with the basics.  What is a dependency graph?  It’s actually not really a code-specific term, though it applies frequently and suitably to building software.

In mathematical terms, a dependency graph is a directed graph, where directed edges connect the nodes and indicate a directional dependency.  I’ll concede that I just typed a pretty dense definition there, so let’s take the edge off with an example.  Please bear with my rudimentary-at-best drawing skills.

In this diagram, I’ve modeled the components of a house.  The basement of the house, containing the foundation, depends on nothing.  The first floor of the house, however, depends on the basement/foundation for stability.  And the upstairs depends on that first floor and, indirectly, the basement.  In this model here, the garage is a free-standing structure, depending on nothing and taking no dependencies, either.

I use this completely non-mathematical and non-programming model to demonstrate that the dependency graph is a standalone concept.  It’s a fairly straightforward way to illustrate relationships.  And more importantly, it’s a highly visual way to do so.
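Still, the same relationships translate directly into code.  Here’s a minimal C# sketch of that house model as an adjacency list, where an edge from A to B means A depends on B:

```csharp
using System.Collections.Generic;

class HouseGraph {
   static void Main() {
      // Adjacency list: each node maps to the nodes it depends on.
      var dependsOn = new Dictionary<string, List<string>> {
         ["Basement/Foundation"] = new List<string>(),                        // depends on nothing
         ["First Floor"]         = new List<string> { "Basement/Foundation" },
         ["Upstairs"]            = new List<string> { "First Floor" },        // indirectly, the basement too
         ["Garage"]              = new List<string>()                         // free-standing
      };
   }
}
```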

Continue reading Without Seeing Your Application’s Dependency Graph, You’re Flying Blind

Get Smart — Go Beyond Cyclomatic Complexity in C#

Recently, I wrote a post explaining the basics of cyclomatic complexity.  You can read that for a deep dive, but for our purposes here, let’s be brief about defining it.  Cyclomatic complexity refers to the number of “linearly independent” paths through a chunk of code, such as a method.  Understand this by thinking in terms of debugging.  If you could trace only one path through a method, it has a cyclomatic complexity of one.  But throw a conditional in there, introducing a second path you could trace, and the complexity grows to two.
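A quick illustration (contrived methods, purely for counting paths):

```csharp
static class Paths {
   // Cyclomatic complexity of 1: exactly one path through the method.
   public static int DoubleIt(int x) => x * 2;

   // Cyclomatic complexity of 2: the conditional introduces a second
   // linearly independent path you could trace while debugging.
   public static int SafeDivide(int a, int b) {
      if (b == 0) return 0;
      return a / b;
   }
}
```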

Today, I’ll talk specifically about C#.  Cyclomatic complexity in C# is just, well, cyclomatic complexity applied to the language C#.  No big mystery there.

But what I want to talk about today is not cyclomatic complexity — not per se.  Today, I’d like to talk about how you can go beyond cyclomatic complexity in C# to get some even more intelligent metrics.  How can you really zero in on sources of risk and complexity in your code?

Wait, What’s Wrong with Cyclomatic Complexity?

Let me be clear.  There’s absolutely nothing wrong with tracking cyclomatic complexity.

It’s a great aid and shorthand for reasoning about your code’s relative complexity and for understanding where testing challenges lie.  You can use it to locate complexity “hot spots” in your code and then to address them in ways that make sense.  So no criticism whatsoever.  I’m just advocating that you go beyond it.

Think of it this way.  When I encourage you to install a Visual Studio plugin, I’m not knocking Visual Studio.  Visual Studio is a wonderful and productive IDE, in my estimation.  Instead, I’m encouraging you to make it even better — to enhance your experience.

The same sort of reasoning applies here.  Cyclomatic complexity is a great start for reasoning about your code’s complexity.  But we can add some considerations to make your life even better.  Let’s take a look at those.

The rest of this post will show you in detail what that looks like.  But if you want to try it out for yourself, you’ll need to download a copy of NDepend.

Continue reading Get Smart — Go Beyond Cyclomatic Complexity in C#

C# Tools to Help with Your Code Quality

Over the years, one of the things I’ve come to love about the .NET ecosystem is the absolute abundance of tools to help you.  It’s an embarrassment of riches.  I enjoy writing code in C# because the language itself is great.  But C# tools take the experience to a whole other level.

I know, I know.  Some of you out there might argue that you get all of this goodness only by using heavyweight, resource-intensive tooling.  I’ll just go ahead and concede the point while saying that I don’t care.  I’m happy to work on a powerful development rig, outfitted with powerful tools, to write code in a highly productive language.

Today, I’d like to talk about some of these C# tools.  Or I should say I’d like to talk about some of the many C# tools you can use that are generally oriented toward the subject of code quality.

So, if you’re a C# developer, what are some tools you can use to improve the quality of your code?

Continue reading C# Tools to Help with Your Code Quality