NDepend

Improve your .NET code quality with NDepend

Quickly assess your .NET code compliance with .NET Standard

Yesterday evening I had an interesting discussion about the feasibility of migrating parts of the NDepend code to .NET Standard to ultimately run it on .NET Core. We're not there yet, but it might make sense to run at least the code analysis on non-Windows platforms, especially for the NDepend clones CppDepend (for C++), JArchitect (for Java), and others to come.

Then I went to sleep (as every developer knows, the brain keeps coding hard while sleeping). This morning, during an early jog, it struck me: NDepend is the perfect tool to assess .NET code compliance with .NET Standard, or with any other library, actually! As soon as I was back at my machine I put together a proof of concept in a few minutes, spent half an hour fixing an unexpected difficulty (explained below), and then it worked.

The key is that .NET Standard 2.0 types are all packed in a single assembly, netstandard.dll v2.0, which can be found under C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319 (on my machine). All of these 2,334 types are actually type-forward definitions, and NDepend handles this peculiarity well. A quick analysis of netstandard.dll with NDepend and a short code query makes it all clear:

(Btw, I am sure that if you are reading this you already have an understanding of what .NET Standard is, but if anything is still unclear, I invite you to read the great article my friend Laurent Bugnion wrote three days ago, A Brief History of .NET Standard.)

.NET Standard Forwarded Types
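
If you want to double-check that fact outside of NDepend, a few lines of plain C# do the trick. Here is a minimal sketch of my own (assuming the Mono.Cecil NuGet package; this is not the CQLinq query from the screenshot) that lists the type forwarders contained in netstandard.dll and the assemblies they forward to:

    using System;
    using System.Linq;
    using Mono.Cecil;

    class ListNetStandardForwarders
    {
        static void Main()
        {
            var path = @"C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\netstandard.dll";
            using (var module = ModuleDefinition.ReadModule(path))
            {
                // Type forwarders: types "contained" in netstandard.dll but defined elsewhere.
                var forwarders = module.ExportedTypes.Where(et => et.IsForwarder).ToList();
                Console.WriteLine($"{forwarders.Count} forwarded types");

                // Group by the assembly each forwarder points to.
                foreach (var group in forwarders.GroupBy(et => et.Scope.Name).OrderBy(g => g.Key))
                    Console.WriteLine($"  -> {group.Key}: {group.Count()} types");
            }
        }
    }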

Given that, what struck me this morning is that to analyze some .NET code's compliance with .NET Standard, I'd just have to include netstandard.dll in the list of my application assemblies and write a code query that filters the dependencies the way I want. Of course, to put this idea to the test, I wanted to explore the NDepend code base's own compliance with .NET Standard:

NetStandard assembly included in the NDepend assemblies to analyze

The code query was pretty straightforward to write (a simplified plain-C# sketch of the same idea follows the list below). It is written in a way that:

  • it is easy to reuse to analyze compliance with any library other than .NET Standard,
  • it is easy to explore both the compliance and the non-compliance with a library in a comprehensive way, thanks to the NDepend code query result browsing facilities,
  • it is easy to refactor the query to dig further; for example, below I refactor it to assess the usage of third-party code that is not .NET Standard compliant.
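
For readers without NDepend at hand, here is what the core idea boils down to in plain C#. This is a simplified sketch of my own using the Mono.Cecil NuGet package, not the CQLinq query from the post, and MyAssembly.dll is a placeholder for whatever assembly you want to assess:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Mono.Cecil;

    class NetStandardComplianceSketch
    {
        static void Main()
        {
            const string netStandardPath = @"C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\netstandard.dll";
            const string candidatePath = "MyAssembly.dll"; // placeholder: the assembly to assess

            using (var netStandard = ModuleDefinition.ReadModule(netStandardPath))
            using (var candidate = ModuleDefinition.ReadModule(candidatePath))
            {
                // Full names of the (outer) types forwarded by the .NET Standard 2.0 contract.
                var compliantNames = new HashSet<string>(
                    netStandard.ExportedTypes.Where(et => et.IsForwarder).Select(et => et.FullName));

                // Every type referenced by the candidate assembly that is not part of the contract
                // is a potential compliance problem. (Simplification: references to your own other
                // assemblies, and to nested types such as List`1/Enumerator, need extra handling.)
                var suspects = candidate.GetTypeReferences()
                                        .Where(tr => !compliantNames.Contains(tr.FullName))
                                        .OrderBy(tr => tr.FullName);

                foreach (var reference in suspects)
                    Console.WriteLine($"Not in .NET Standard 2.0: {reference.FullName} ({reference.Scope.Name})");
            }
        }
    }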

The result looks like this, and IMHO it is pretty interesting. For example, we can see at a glance that NDepend.API is almost fully compliant with .NET Standard, except for its usage of System.Drawing.Image (the single type flagged is actually Image) and its usage of code contracts.

NDepend code base compliance with .NET standard

For a more intuitive assessment of compliance with .NET Standard, we can use the metric view, which highlights the code elements matched by the currently edited code query.

  • Unsurprisingly, NDepend.UI is not compliant at all,
  • the portions of NDepend.Core that are not compliant with .NET Standard are well delimited (and I know this is mostly because of some UI code here too, which we consider Core because it is reusable in a variety of situations).

With this information, it'd be much easier to plan a major refactoring that segregates .NET Standard compliant code from non-compliant code, and especially to anticipate the hot spots that will be painful to refactor.

Treemap view of the compliance with .NET Standard

A quick word about the unexpected difficulty I stumbled on. Since netstandard.dll contains only type-forward definitions, it doesn't contain nested types. Concretely, it contains List<T> but not List<T>+Enumerator (which is also part of the formal .NET Standard). Of course, we don't want to flag methods that use List<T>+Enumerator as non-compliant. To see how we solved that, have a look at the tricky part of the code query related to allNetStandardNestedTypes.
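
The idea behind that part of the query, expressed in the same plain-C# terms as the sketch above (again my own approximation, not the actual CQLinq code), is simply to walk up from a nested type reference to its outermost declaring type before checking it against the forwarded set:

    using System.Collections.Generic;
    using Mono.Cecil;

    static class NestedTypeHandling
    {
        // A reference such as System.Collections.Generic.List`1/Enumerator is considered
        // compliant when its outermost declaring type is forwarded by netstandard.dll.
        public static bool IsCompliant(TypeReference reference, HashSet<string> forwardedFullNames)
        {
            var outer = reference;
            while (outer.DeclaringType != null)
                outer = outer.DeclaringType;
            return forwardedFullNames.Contains(outer.FullName);
        }
    }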


The code query that assesses compliance can be refactored at will. For example, I found it interesting to see which non-compliant third-party code elements were used the most, so I refactored the query this way:

Unsurprisingly, UI code that is not .NET Standard compliant pops up first:

.NET Standard non-compliant third-party code usage

There is no limit to refactoring this query for your own needs, like assessing the usage of non-compliant code excluding UI code, for example, or assessing the usage of code that is not compliant with ASP.NET Core 2 (by changing the library).

Hope you’ll find this content useful to plan your migration to .NET Core and .NET Standard!


Moq: A Detailed Look at Its Code Quality

In case you haven’t seen it, I’ve been doing a series of research-oriented posts for this blog.  This is going to be in the same vein but focused on the Moq codebase instead of focusing on hundreds of codebases.  Why Moq?  Well, I’ll get to that in a moment.

I started this by making a set of observations relating unit test prevalence to properties of clean code.  That generated considerable buzz, so I did some more studies in that vein, refining the methodology and adding codebases as we went.  By the end of the series, we’d grown the sample size to 500 codebases and started doing actual regression analysis.

Since then, we’ve enlisted the help of someone who specializes in data analysis to do some PCA with the data, which far outstrips my background studying data.  We brought this to bear in studying the effects of functional-style programming on codebases and also on categorizing codebases according to simple vs. complex and monolithic vs. decoupled, in addition to functional vs. OOP.  Doing this across more than 500 C# codebases has produced a wealth of information.

Okay, But How Does Moq Fit in?

I give you all this backstory in case you want to read about it but also to explain that I’ve looked at these codebases en masse.  With hundreds of codebases and millions of lines of code, I’m not going in and poking around to see if the code looks clean.  I’m using NDepend to perform large-scale robo-analysis.

And, while that’s been great, I started to want to see just how these categorizations stacked up.  So I started scrolling through the summary data and Moq jumped out at me.

First of all, I know Moq pretty well from using it over the years, and secondly, it had stood out when looking at the rate of unit test methods in the codebase.  Nearly half of its methods are test methods, which seems reasonable for a tool designed to help you write unit tests.  Combine that with these stats in the PCA:

So as a quick interpretation, Moq counts as reasonably non-monolithic, very simple, and very functional.  Add to that the high degree of unit testing, and I figured we’d have a codebase that was a joy to look at.  So I popped open the source code (as it was at the beginning of the year when I was grabbing codebases en masse) and analyzed it with NDepend, fresh off my excitement about using the new dark theme. I wanted to see if it was as much of a joy to look at as all of this data and statistical rigor indicated it would be.  And spoiler alert, it was the kind of codebase I’d feel right at home in.  Let’s take a tour.

NDepend Analysis, Test Coverage, and First Look

The first thing I wanted to look at was code coverage.  This was because I wanted a quick test case to see whether my assumption that a high rate of test methods would correspond to high coverage held up.  And it did.  Here's a quick look at what happened after I imported coverage data and then ran analysis on the project.

Now I could see with Visual Studio's coverage tool itself that Moq was sporting roughly 90% test coverage.  But by importing the coverage data into NDepend, you can paint a much more compelling picture.

The heat map dominating most of the screen shows squares corresponding to methods in the codebase.  Larger squares are larger methods.  And the coloring indicates test coverage.  Over on the right, among the 2,152 methods in question, you can actually scroll through them in order of percent coverage and navigate to them if you want to take a look.

Taking a Look at the Dashboard and Technical Debt

So far, all systems go.  Most of the stats on the codebase looked good, coming in from the broad aggregates I have in a spreadsheet.  And then, following the same trend, Moq looks great in the IDE from a testing perspective.  But I looked at the dashboard and saw this:

Basic stats about the codebase lined up, and there’s the test coverage, hovering around 90%.  But a C for its tech debt rating?  10 critical rules violated?  This surprised me, given how rosy everything looked in the statistical analysis.  I drilled in to take a look.

And, sure enough, there be some dragons.  Huge types and overly complex methods are a problem.  Also in there are mutually dependent namespaces, which create coupling that hurts you as a codebase grows.  And you’ve got some hiding of base class methods and global state.  To get a sense for what I was looking at, I created another heat map. In this case, we’re looking at types.  The bigger the square, the more lines of code, and the closer to red, the more methods in that type.

This explains why the averages looked good in my spreadsheet but why NDepend has some objections and critical rules violated.  By and large, you have a lot of little green types, which is what you’d hope to see.  But there are some pretty hefty types in the mix, both in terms of lines of code to the type and number of methods to the type.

Some of these are gigantic unit test classes, while others are the API.  For those of you familiar with Moq, this should make sense to you.  Think of how many static methods you invoke on the Mock class. This illustrates the classic tradeoff between shooting for clean code guidelines about methods per type and the like, and providing the API you want to furnish.

Drilling Into the Sources of Debt

NDepend has a view where you can drill into the technical debt per type or per method, and sort it accordingly.  I did that per type, and here’s what I saw:

No big surprise there.  The technical debt was coming disproportionately from these large types.  So I took advantage of another view to see where the debt was coming from, by rule violation. Here’s what that looked like:

Topping that list is a series of things that I would make it a priority to address in my own codebases, time permitting: types that are too big, types that have too many methods, namespace dependency cycles, and so on.  There is, however, one exception to what I would worry about as a top priority issue—visibility of nested types as a design choice.  That’s based on a Microsoft guideline and I don’t personally favor an approach where I use this a lot.  But it doesn’t bother me, either.
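
For anyone unfamiliar with that rule, here is a made-up example (not code from Moq) of the pattern it flags: a type exposed publicly as a nested type rather than declared at namespace level.

    public class ReportBuilder
    {
        // Publicly visible nested type: consumers must write ReportBuilder.Options.
        public class Options
        {
            public bool IncludeTotals { get; set; }
        }

        public string Build(Options options) =>
            options.IncludeTotals ? "report with totals" : "report";
    }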

I see that the creators of Moq did that 395 times, so clearly they view it as a useful design choice.  This made me curious about what would happen to the tech debt grade if I disabled that particular rule.  So I did that, and the result was a somewhat greener and more pleasing grade of B:

What’s the Verdict With Moq?

I also spent some time scrolling through various classes and methods.  I didn’t want the entirety of the experience to be just a matter of data gathering since that’s what I’ve been doing for months.  And the verdict I have is that this particular data point fits in nicely with the aggregate.

Moq has excellent stats for most of what I’ve been looking at.  And, indeed, there are a lot of simple, functional methods in the codebase, almost all of which are thoroughly tested.  I would happily work in this codebase.

But NDepend is calling out real and important opportunities for improvement.  If I were working on this codebase, I'd make an effort to break some of those gigantic unit test classes into smaller ones that are more cohesive over their context.  I'd also take a hard look at the mutually dependent namespaces and either merge them or rework the dependency direction a bit.  And I'd even give some idle thought to how I might segment the large Mock class into smaller chunks, if that wound up making sense.

So the whole thing winds up being an interesting microcosm to me.  Moq is, as the stats would indicate, a pretty nice codebase.  But, as with just about any codebase, there’s plenty of room for improvement.  And having a tool to show you where to improve quickly is invaluable.

On the Superiority of the Visual Studio Dark Theme

When I downloaded the newest version of NDepend, something wonderful awaited me.  Was it support for the latest .NET Core version?  The addition of checks for ubiquitous language for DDD projects?  Any of the various rule additions or bug fixes?

Nope.  I’m a lot more superficial than that, apparently.  I saw this and was incredibly excited:

Dark Theme

In case you’re scratching your head and wondering, “what?” instead of sharing my delight, I’ll be more explicit.  NDepend added support for the Visual Studio dark theme.  And I absolutely love it.

Asserting the Superiority of the Visual Studio Dark Theme

Why so excited?  Well, as a connoisseur of the Visual Studio dark theme, this makes my experience with the tool just that much better.

Don’t get me wrong.  I didn’t mind the interface up until this release, per se.  I’ve happily used NDepend for years and never really thought about its color scheme.  But this version is the equivalent of someone going into my house where I already like the bathroom and installing a radiant floor.  I never knew I was missing it until I had it.

Everything in Visual Studio is better in the dark.

Oh, I know there’s evidence to the contrary.  People are, apparently, 26% more accurate when reading dark text on a light background than vice-versa. And it’s easier to focus on and remember text in a light theme.  I understand all of that.  And I don’t care.

The dark theme is still better.

Continue reading On the Superiority of the Visual Studio Dark Theme


C# 8.0 Features: A Final Glimpse Of The Future

It was not that long ago that we published our first post about the future of C# 8.0 and the probable features it's getting. In that first post, we covered extension everything, default implementations on interfaces, and nullable reference types.

A couple of months later, we published the second installment in the series, where we covered null coalescing assignment and records.

Now it’s time for a final glimpse into the future. Today we’ll cover another two possible C# 8.0 features: target-typed new expressions and covariant return types.
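
To give a flavor of what those two features look like, here is a small sketch of my own, with made-up type names (both features ultimately landed in C# 9 rather than 8.0, with essentially this syntax):

    using System.Collections.Generic;

    public class Animal
    {
        public virtual Animal Clone() => new Animal();
    }

    public class Cat : Animal
    {
        // Covariant return type: the override narrows the return type from Animal to Cat.
        public override Cat Clone() => new Cat();
    }

    public static class Demo
    {
        public static void Run()
        {
            // Target-typed new expression: the constructed type is inferred from the target.
            Dictionary<string, List<int>> scoresByPlayer = new();

            Cat kitten = new Cat().Clone();
        }
    }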

Continue reading C# 8.0 Features: A Final Glimpse Of The Future


Starting A Clean Architecture Example in C#

It’s time for the second part of our series about clean architecture. As promised in the first post, we’re going to show you a sample application in C#, to demonstrate what a clean architecture implementation might look like.

Even though our sample application will be minimalist, it’s still too much for a single post. We want to keep things light and easy for you, so we’ll have to break this post into two or three parts.

Don’t worry though: at the end, I’ll connect all the dots and things should (hopefully) make sense. And all the generated code will be publicly available for you to download and study on your own. Continue reading Starting A Clean Architecture Example in C#


Functional C# Improves Your Design Without Making Your Code Cleaner, Exactly

Today I offer another one of the code research posts we’ve been doing.  If you want more backstory on the series, check out the last post in the series, where I give a brief history.  You should also read it if you want to understand both what I mean by functional C# and for details about its impact on codebases at the method level.

Quick editorial note: a couple of people have commented/sent notes asking about p-values.  I’ve been eliding those to keep the posts more narrative.  But as we’ve expanded the set of variables we capture, we’ve been looking only at dramatically lower p-values.  Those cited in this post, for instance, range between 0 and 0.04, with most being less than 0.01.

I’ll summarize the last functional study here, briefly.  Last time, I studied about 500 codebases to see what functional-style programming did to methods and types.  And the answer was that it made them less object-oriented, but it had surprisingly little influence on clean code statistics, like

  • Lines of code per method or type.
  • Cyclomatic complexity.
  • Parameters per method.
  • Method nesting depth.
  • Methods per type.

I expected that functional codebases would correlate with a reduction in all of those things.  In other words, I figured that functional-style programming would lead to smaller, clearer, more focused, and less complex methods.  It didn’t.
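
For readers who haven't seen the earlier post, here is a rough illustration of my own (not the study's formal definition) of the contrast between imperative and functional-style C#:

    using System.Collections.Generic;
    using System.Linq;

    public static class Invoicing
    {
        // Imperative style: explicit looping and mutation of local state.
        public static decimal TotalImperative(IEnumerable<decimal> lineAmounts)
        {
            decimal total = 0;
            foreach (var amount in lineAmounts)
            {
                if (amount > 0)
                    total += amount;
            }
            return total;
        }

        // Functional style: an expression-bodied member built from a declarative
        // LINQ pipeline, with no mutable state.
        public static decimal TotalFunctional(IEnumerable<decimal> lineAmounts) =>
            lineAmounts.Where(amount => amount > 0).Sum();
    }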

Undaunted, I vowed to take a broader look at the effect of functional programming on a wider array of concerns. And I did just that, with the help of my partner who runs the statistical regressions. Continue reading Functional C# Improves Your Design Without Making Your Code Cleaner, Exactly


C# Immutable Types: Understanding the Attraction

In our post about value objects, we briefly covered the topic of immutability. There, we talked about the more conceptual side of immutability, as in “value objects must be immutable types since it wouldn’t make sense for them to be otherwise.”

As it turns out, there are many practical benefits to making your types immutable—not just the value objects. I remember the first time I came across the concept of immutable types. I was researching string immutability, and somehow Google led me to a Stack Overflow question about immutability in general.

Up until that moment, I wasn’t familiar with the concept of immutability. I must admit it wasn’t an easy idea to wrap my head around. I remember thinking, “Why would I want objects that don’t change? Things change in real life!”

So, I’m writing this post with my past self in mind. The goal here is twofold: to explain why immutable types are desirable and to provide a quick recipe on how to work with them.
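
As a tiny preview of that recipe, here is a minimal sketch of an immutable type (my own example): all state is assigned once in the constructor, and any "change" produces a new instance instead of mutating the existing one.

    public sealed class Money
    {
        public decimal Amount { get; }
        public string Currency { get; }

        public Money(decimal amount, string currency)
        {
            Amount = amount;
            Currency = currency;
        }

        // Instead of a setter, return a new instance carrying the changed value.
        public Money Add(decimal delta) => new Money(Amount + delta, Currency);
    }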

Continue reading C# Immutable Types: Understanding the Attraction

Checking DDD Ubiquitous Language with NDepend

Since NDepend version 2018.1, the tool offers a default rule to check Domain Driven Design (DDD) Ubiquitous Language validity.

DDD Ubiquitous Language

Let’s quote Martin Fowler on Ubiquitous Language:

Ubiquitous Language is the term Eric Evans uses in Domain Driven Design for the practice of building up a common, rigorous language between developers and users. This language should be based on the Domain Model used in the software – hence the need for it to be rigorous, since software doesn’t cope well with ambiguity.

Evans makes clear that using the ubiquitous language in conversations with domain experts is an important part of testing it, and hence the domain model. He also stresses that the language (and model) should evolve as the team's understanding of the domain grows.

–Martin Fowler

Eric Evans coined the term DDD, so let's quote him as well:

By using the model-based language pervasively and not being satisfied until it flows, we approach a model that is complete and comprehensible, made up of simple elements that combine to express complex ideas.

Domain experts should object to terms or structures that are awkward or inadequate to convey domain understanding; developers should watch for ambiguity or inconsistency that will trip up design.

–Eric Evans

See below a sample of ubiquitous language usage in the real world. We end up with clean and readable code:

The TrainTrain Code Base

To explain and demonstrate the rule, we’ll conduct our experiment on the TrainTrain code base.

This OSS code base was developed by Bruno Boucard and Thomas Pierrain from 42Skillz, a French consultancy specialized in DDD and developer coaching. TrainTrain was developed to concretely illustrate most DDD concepts (including Ubiquitous Language) in a session named How To Distill The Core Domain From Your Legacy App (Live Coding). In this session, a legacy version of the code is live-refactored to a DDD-compliant version. The session has been performed both at Explore DDD 2017 (Denver, Sept 2017) and DDD Europe 2018 (Amsterdam, Jan 2018).

We worked with Bruno and Thomas to develop this first rule related to DDD and we expect that more rules will follow from this collaboration.

The Rule

See below the full source code of the new rule named DDD ubiquitous language check that can be found in the rule group Naming Convention. This rule is disabled by default because before using it, the user must customize both:

  • The core domain namespace name (by default set to “TrainTrain.Domain”)
  • The vocabulary list

The idea is to centralize the vocabulary in the rule's source code. The rule then checks that all code elements defined in the core domain namespace are named with one or several terms found in the vocabulary list. The code elements checked include classes, enumerations, structures, interfaces, methods, properties, and fields. If a term needs to be used in both singular and plural forms, both forms need to be listed, like Seat and Seats for example.

The NDepend rule system makes it easy to modify the source code of an existing rule. There is no Visual Studio project to create and store, no NuGet package to reference, no assembly to compile, version, and maintain, and no integration work. It is just textual editing with code completion, API documentation, and live results while editing; then Ctrl+S and that's it. As a consequence, the NDepend rule system is well suited to implementing a rule like this one, which must be customized with some user data before usage.

Notice that this rule relies on the new NDepend API method ExtensionMethodsString.GetWord(this string identifier), which extracts words from code identifiers. For example, from the field identifier _seatsRequestedCount it extracts the three words seats, Requested, and Count. To match the vocabulary list, we then set the first character to uppercase, so seats becomes Seats.
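
As an aside, here is a rough plain-C# approximation of what such word extraction and vocabulary checking amounts to. This is my own illustration, not NDepend's implementation, and the helper names are made up:

    using System.Collections.Generic;
    using System.Linq;
    using System.Text.RegularExpressions;

    public static class VocabularyCheck
    {
        // Split an identifier such as _seatsRequestedCount into Seats, Requested, Count.
        public static IEnumerable<string> GetWords(string identifier) =>
            Regex.Matches(identifier.TrimStart('_'), "[A-Z]?[a-z0-9]+|[A-Z]+(?![a-z])")
                 .Cast<Match>()
                 .Select(m => char.ToUpperInvariant(m.Value[0]) + m.Value.Substring(1));

        // Words of the identifier that are missing from the vocabulary list.
        public static IEnumerable<string> WordsNotInVocabulary(string identifier, ISet<string> vocabulary) =>
            GetWords(identifier).Where(word => !vocabulary.Contains(word));
    }

With a vocabulary containing Seats, Requested, and Count, WordsNotInVocabulary("_seatsRequestedCount", vocabulary) returns nothing, meaning the identifier is compliant; for a class named TreasholdCapacity, both Treashold and Capacity would be reported (assuming neither is in the vocabulary), which matches the behavior shown in the next section.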

Running the Rule

See below a screenshot of running this rule on a TrainTrain version. An issue is spotted on a core domain class named TreasholdCapacity. Both of its words are reported in the wordsNotInVocabulary column because neither is in the vocabulary list. Moreover, the word Treashold contains a typo. At this point, to fix this issue:

  • either this class should be renamed with existing core domain vocabulary words,
  • or these words should be added to the vocabulary list (with the typo fixed).


DDD is a popular concept nowadays. We are proud to innovate with a static analysis rule related to DDD. We have plans for more DDD-related rules, and we would like to hear both your feedback on using this rule and your needs for further DDD-related rules.


New .NET Core 2.1 and ASP.NET Core 2.1 APIs

.NET Core 2.1 and ASP.NET Core 2.1 Preview1 have just been released (see the official announcement here), and we are going to explore the new APIs in this post. We'll find many of the new features announced in the .NET Core 2.1 Roadmap and the ASP.NET Core 2.1 Roadmap on the MSDN blog.

We just released NDepend v2018.1, and we took the chance to support analysis of .NET Core 2.1 and ASP.NET Core 2.1 applications. NDepend is often deemed the Swiss Army knife for .NET developers thanks to its code query language (CQLinq). CQLinq can be used to write code rules, quality gates, and trend code metrics, to explore dependencies, or to do advanced code search. One thing CQLinq excels at is exploring the diff between two snapshots of a code base, and exploring new APIs is a sub-task of exploring what has changed. Let's harness this capability to explore the new .NET Core 2.1 APIs.

New .NET Core 2.1 Preview1 APIs

We could have downloaded the source files of both .NET Core 2.1.0 Preview1 and 2.0.0, recompiled them, and then done the diff. Instead, since NDepend can analyze raw assemblies even when sources are not available, we compared the assemblies in these two folders:

  • C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.1.0-preview1-26216-03
  • C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.0.5

Here is the CQLinq code query that lists all new public classes and types. It is refined to also match the public members (methods and fields) of each new type in the result.
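
The query itself is shown in the screenshot, but the gist of it looks something like the following CQLinq sketch. This is my reconstruction, not necessarily the exact query from the post; WasAdded() is the CQLinq diff predicate, and the member properties are named as I recall them from the NDepend API:

    // Hedged sketch: new public classes in the newer build, with their public methods and fields.
    from t in Application.Types
    where t.WasAdded() && t.IsPublic && t.IsClass
    orderby t.FullName
    select new {
       t,
       publicMethods = t.Methods.Where(m => m.IsPublic),
       publicFields  = t.Fields.Where(f => f.IsPublic)
    }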

Find the whole list here: .NET Core 2.1 new public classes. Here is a first glimpse in the screenshot below, where we highlighted the great new Span<T> capability.

.NET Core 2.1 new classes

Not only is the list of new public classes interesting, but so are the new public methods added to existing public classes. This query lists these 1,500 methods:
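
Again, the query is in the screenshot; a hedged CQLinq sketch of the idea (my reconstruction, not the exact query) would be:

    // New public methods whose parent class already existed in the older build.
    from m in Application.Methods
    where m.WasAdded() && m.IsPublic
       && m.ParentType.IsPublic && !m.ParentType.WasAdded()
    orderby m.ParentType.FullName, m.Name
    select new { m, m.ParentType }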

Here too, find a screenshot below and the whole list here: .NET Core 2.1 new public methods on existing public classes. For this screenshot we used the new dark theme support of NDepend v2018.1.

.NET Core 2.1 new public methods

Interestingly enough, let's also list the new namespaces that contain at least one public type:

.NET Core 2.1 new public namespaces


New ASP.NET Core 2.1 Preview1 APIs

Analyzing the ASP.NET Core assemblies was a bit more difficult than just comparing assemblies in two folders. ASP.NET Core assemblies are stored in NuGet packages, so we explored the assemblies in the folder C:\Program Files\dotnet\sdk\NuGetFallbackFolder and then tinkered a bit to get the wanted versions for both the 2.1 and 2.0 cases.

Let's use the same code query to match the new ASP.NET Core 2.1 public classes:

ASP.NET Core 2.1 new public class

In the same way, here are the new ASP.NET Core 2.1 public methods on existing public classes:

ASP.NET Core 2.1 new public methods in existing public classes

And finally, here are the new ASP.NET Core 2.1 namespaces that contain at least one public class.

ASP.NET Core 2.1 new namespaces




CQRS: Understanding From First Principles

There seems to be no end to the choices you have for architecture when building an application. You don’t want to fall victim to cargo cult programming, so you need to truly understand the options available. Today, we’ll focus on one option, called CQRS.

CQRS leads to a clean architecture that’s easy to maintain. Let’s take a look at the underlying principles of CQRS. This will help you understand what the benefits are and whether you want to use it in your applications. Continue reading CQRS: Understanding From First Principles