NDepend

Improve your .NET code quality with NDepend

You Have No Excuse for Dead Code

In darker times, software management would measure productivity as a function of lines of code.  More code means more done, right?  Facepalm.  When I work with IT management in my capacity as a consultant, I encourage them to view code differently.  I encourage them to view code as a liability, like inventory.  And when useful code is a liability, think of what a boat anchor dead code is.

I once wrote a fun post about the fate of dead code in your codebase.  And while I enjoyed writing that, it had a serious underlying message.  Dead code costs you time, money, and maintenance headaches.  And it has absolutely no upside.

A Working Definition for Dead Code

Okay. If I’m going to make a blog post out of disparaging dead code, I should probably offer a definition.  Let’s do that here.

Some people will draw a distinction between code that can’t be executed (unreachable) and executed code whose effects don’t matter (dead).  I acknowledge this definition but won’t use it here.  For the sake of simplicity and clarity of message, let’s create a single category of dead code: any code in your codebase that has no bearing on your application’s behavior is, for our purposes here, dead.
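To make the definition concrete, here is a small illustrative sketch in C# (the class and method names are invented); both methods below are dead by our working definition:

```csharp
using System;

public class OrderProcessor
{
    // Unreachable code: the early return guarantees the line below never executes.
    public decimal ApplyDiscount(decimal total)
    {
        return total * 0.9m;
        Console.WriteLine("Discount applied");  // dead: can never run
    }

    // Dead by our broader definition: private and never called from anywhere,
    // so it has no bearing on the application's behavior.
    private decimal LegacyDiscount(decimal total)
    {
        return total * 0.85m;
    }
}
```

The compiler flags the first case with an unreachable-code warning, while the second typically surfaces only as an IDE hint or analyzer warning, which is exactly why it tends to linger.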

The Problems with Dead Code

Having defined it, what’s the problem?  If it has no bearing on your application’s behavior, what’s the harm?  How does it cost time and money, as I claimed a moment ago?

Well, simply put, your code does not live in a shrink-wrapped vacuum.  As your application evolves, developers have to change the code.  When your codebase contains only code that matters, they can do this with maximum efficiency.  If, on the other hand, you have thousands of lines of useless code, those developers will spend hundreds of hours maintaining that useless code.

Think of dead code as akin to running your heat in the winter while keeping all of your windows open.  It’s self-defeating and wasteful.

But even worse, it’s a totally solvable problem.  Let’s take a look at different types of dead code that you encounter and what you can do about it.


Static analysis of .NET Core 2.0 applications

NDepend v2017.3 has just been released with major improvements. One of the most requested features, now available, is support for analyzing .NET Core 2.0 and .NET Standard 2.0 projects. .NET Core and its main flavor, ASP.NET Core, represent a major evolution of the .NET platform. Let’s have a look at how NDepend analyzes .NET Core code.

Resolving .NET Core third party assemblies

In this post I’ll analyze the OSS application ASP.NET Core / EntityFramework MusicStore hosted on GitHub. From the Visual Studio solution file, NDepend resolves the application assembly MusicStore.dll and also two test assemblies that we won’t analyze here. In the screenshot below, we can see that:

  • NDepend recognizes the .NET profile, .NET Core 2.0, for this application.
  • It resolves several folders on the machine that are related to .NET Core, especially NuGet package folders.
  • It resolves all 77 third-party assemblies referenced by MusicStore.dll. This is important since many code rules and other NDepend features take into account what the application code is using.

It is worth noting that the .NET Core platform assemblies are highly granular. A simple website like MusicStore references no fewer than 77 assemblies. This is because the .NET Core framework is implemented through a few NuGet packages that each contain many assemblies. The idea is to ship the application with only the assemblies it needs, in order to reduce the memory footprint.

.NET Core 2.0 third party assemblies granularity

NDepend v2017.3 has a new heuristic to resolve .NET Core assemblies. This heuristic is based on .deps.json files that contain the names of the NuGet packages referenced. Here we can see that 3 NuGet packages are referenced by MusicStore. From these package names, the heuristic will resolve third-party assemblies (in the NuGet store) referenced by the application assemblies (MusicStore.dll in our case).

NuGet packages referenced in .deps.json file
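For reference, here is a heavily trimmed sketch of what such a file looks like; the exact layout varies by SDK version, and the package name shown is illustrative rather than MusicStore’s actual dependency list:

```json
{
  "targets": {
    ".NETCoreApp,Version=v2.0": {
      "MusicStore/1.0.0": {
        "dependencies": {
          "Microsoft.AspNetCore.All": "2.0.0"
        }
      }
    }
  }
}
```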

Analyzing .NET Standard assemblies

Let’s be clear that NDepend v2017.3 can also analyze .NET Standard assemblies. Interestingly enough, since .NET Standard 2.0, .NET Standard assemblies reference a single assembly named netstandard.dll, found in C:\Users\[user]\.nuget\packages\NETStandard.Library\2.0.0\build\netstandard2.0\ref\netstandard.dll.

By decompiling this assembly, we can see that it doesn’t contain any implementation, but it does contain all types that are part of .NET Standard 2.0. This makes sense if we remember that .NET Standard is not an implementation, but is a set of APIs implemented by various .NET profiles, including .NET Core 2.0, the .NET Framework v4.6.1, Mono 5.4 and more.
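As a reminder of how this looks from the library author’s side, a project opts into the standard (rather than a concrete runtime) in its .csproj:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- Target the API contract, not a specific implementation like net461 or netcoreapp2.0. -->
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```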

Browsing how the application is using .NET Core

Let’s come back to the MusicStore application that references 77 assemblies. This assembly granularity makes it impractical to browse dependencies with the dependency graph, since this generates dozens of items. We can see that NDepend suggests viewing this graph as a dependency matrix instead.

NDepend Dependency Graph on an ASP.NET Core 2.0 project

The NDepend dependency matrix can scale seamlessly to a large number of items. The numbers in the cells also provide a good hint about the coupling they represent. For example, here we can see that 22 members of the assembly Microsoft.EntityFrameworkCore.dll are used by 32 methods of the assembly MusicStore.dll, and a menu lets us dig into this coupling.

NDepend Dependency Matrix on an ASP.NET Core 2.0 project

Clicking the menu item Open this dependency shows a new dependency matrix where only the members involved are kept (the 32 elements in columns use the 22 elements in rows). This way you can easily dig into which part of the application is using what.

NDepend Dependency Matrix on an ASP.NET Core 2.0 project

All NDepend features now work when analyzing .NET Core

We saw how to browse the structure of a .NET Core application, but let’s underline that all NDepend features now work when analyzing .NET Core applications. On the Dashboard we can see code quality metrics related to Quality Gates, Code Rules, Issues and Technical Debt.

NDepend Dashboard on an ASP.NET Core 2.0 project

Also, most of the default code rules have been improved to avoid reporting false positives on .NET Core projects.

NDepend code rules on an ASP.NET Core 2.0 project

We hope you’ll enjoy using all your favorite NDepend features on your .NET Core projects!

Without Seeing Your Application’s Dependency Graph, You’re Flying Blind

Software architecture tends to be a pretty hard game.  Writing scripts and little toy apps is easy enough.  You build something and then you run it, confirming it does what you want.  But then the software grows in scope and complexity, and things get tough.  And it’s only once things get tough and architects enter the fray that you really worry about something called a dependency graph.

At that point, the dependency graph really matters to anyone interested in architecture.

What is a Dependency Graph?

Let’s start with the basics.  What is a dependency graph?  It’s actually not really a code-specific term, though it applies frequently and suitably to building software.

In mathematical terms, a dependency graph is a directed graph, where directed edges connect the nodes and indicate a directional dependency.  I’ll concede that I just typed a pretty dense definition there, so let’s take the edge off with an example.  Please bear with my rudimentary-at-best drawing skills.

In this diagram, I’ve modeled the components of a house.  The basement of the house, containing the foundation, depends on nothing.  The first floor of the house, however, depends on the basement/foundation for stability.  And the upstairs depends on that first floor and, indirectly, the basement.  In this model here, the garage is a free-standing structure, depending on nothing and taking no dependencies, either.

I use this completely non-mathematical and non-programming model to demonstrate that the dependency graph is a standalone concept.  It’s a fairly straightforward way to illustrate relationships.  And more importantly, it’s a highly visual way to do so.
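To show just how simple the underlying structure is, here is the house model expressed as a directed graph in C# (a sketch; the adjacency-list representation is one of several reasonable choices):

```csharp
using System.Collections.Generic;

public static class HouseModel
{
    // Each key maps to the components it depends on; edges point at dependencies.
    public static readonly Dictionary<string, List<string>> DependsOn =
        new Dictionary<string, List<string>>
        {
            ["basement"]    = new List<string>(),                  // depends on nothing
            ["first floor"] = new List<string> { "basement" },
            ["upstairs"]    = new List<string> { "first floor" },  // and, indirectly, the basement
            ["garage"]      = new List<string>(),                  // free-standing
        };
}
```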


Get Smart — Go Beyond Cyclomatic Complexity in C#

Recently, I wrote a post explaining the basics of cyclomatic complexity.  You can read that for a deep dive, but for our purposes here, let’s be brief about defining it.  Cyclomatic complexity refers to the number of “linearly independent” paths through a chunk of code, such as a method.  Understand this by thinking in terms of debugging.  If you could trace only one path through a method, it has a cyclomatic complexity of one.  But throw a conditional in there, introducing a second path you could trace, and the complexity grows to two.

Today, I’ll talk specifically about C#.  Cyclomatic complexity in C# is just, well, cyclomatic complexity applied to the language C#.  No big mystery there.
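To make that concrete in C# (a sketch; the shipping example is invented):

```csharp
public static class Shipping
{
    // Exactly one path to trace through this method: cyclomatic complexity of 1.
    public static decimal BaseCost(decimal weight)
    {
        return weight * 1.5m;
    }

    // The conditional introduces a second linearly independent path: complexity of 2.
    public static decimal Cost(decimal weight)
    {
        if (weight > 20m)
            return weight * 1.5m + 10m;  // surcharge path for heavy parcels
        return weight * 1.5m;            // normal path
    }
}
```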

But what I want to talk about today is not cyclomatic complexity — not per se.  Today, I’d like to talk about how you can go beyond cyclomatic complexity in C# to get some even more intelligent metrics.  How can you really zero in on sources of risk and complexity in your code?

Wait, What’s Wrong with Cyclomatic Complexity?

Let me be clear.  There’s absolutely nothing wrong with tracking cyclomatic complexity.

It’s a great aid and shorthand for reasoning about your code’s relative complexity and for understanding where testing challenges lie.  You can use it to locate complexity “hot spots” in your code and then to address them in ways that make sense.  So no criticism whatsoever.  I’m just advocating that you go beyond it.

Think of it this way.  When I encourage you to install a Visual Studio plugin, I’m not knocking Visual Studio.  Visual Studio is a wonderful and productive IDE, in my estimation.  Instead, I’m encouraging you to make it even better — to enhance your experience.

The same sort of reasoning applies here.  Cyclomatic complexity is a great start for reasoning about your code’s complexity.  But we can add some considerations to make your life even better.  Let’s take a look at those.

The rest of this post will show you in detail what that looks like.  But if you want to try it out for yourself, you’ll need to download a copy of NDepend.


C# Tools to Help with Your Code Quality

Over the years, one of the things I’ve come to love about the .NET ecosystem is the absolute abundance of tools to help you.  It’s an embarrassment of riches.  I enjoy writing code in C# because the language itself is great.  But C# tools take the experience to a whole other level.

I know, I know.  Some of you out there might argue that you get all of this goodness only by using heavyweight, resource-intensive tooling.  I’ll just go ahead and concede the point while saying that I don’t care.  I’m happy to work on a powerful development rig, outfitted with powerful tools, to write code in a highly productive language.

Today, I’d like to talk about some of these C# tools.  Or I should say I’d like to talk about some of the many C# tools you can use that are generally oriented toward the subject of code quality.

So, if you’re a C# developer, what are some tools you can use to improve the quality of your code?


Announcing the Singleton Challenge

About a month ago, I made a post about what the singleton pattern costs you.  Although I stated my case in terms of trade-offs rather than prescriptive advice, I still didn’t paint the singleton pattern in a flattering light.  Instead, I talked about the problems that singletons tend to create in codebases.
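For anyone who hasn’t seen it in a while, here is the shape under discussion, as a minimal C# sketch (the class name is invented):

```csharp
public sealed class Logger
{
    // The single, eagerly created instance; the private constructor
    // prevents anyone else from creating another.
    private static readonly Logger _instance = new Logger();

    private Logger() { }

    // Global access point: this is both the pattern's convenience and its cost.
    public static Logger Instance => _instance;

    public void Log(string message) { /* write somewhere */ }
}
```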

Whenever I’ve talked about the singleton pattern, anywhere I’m blogging, the general response follows a typical signature.  It attracts a relatively large amount of shares (usually indicating support) and traffic (often indicating support) while generating comments (these often contain objections).  Generally, the response follows this pattern from Stack Overflow.  At the time of writing:

  • 1,018 net upvotes for an answer explaining why they’re bad.
  • 377 net upvotes for an answer explaining why they’re usually a mistake.
  • 280 net upvotes for an answer arguing that they get a bad rap only because people misuse them (with a disproportionate number of mitigating downvotes).
  • 185 net upvotes for an answer explaining why they’re terrible.

It seems as though something like 80 percent of the active developer population has come around to thinking, “Yeah, let’s not do this anymore.”  But a vocal 20 percent minority resents that, thinking the rest are throwing the baby out with the bathwater.

Perusing some of the comments on the blog and discussion sites where it was shared, I found two themes of objection to the post.

  1. If singleton is so bad, why do DI frameworks provide singleton object instances?
  2. Your singleton examples are bad, ergo the problem is that you misuse singletons.

Today, I’d like to address those threads of criticism.  But not ultimately in the way you might think.


Understanding Cyclomatic Complexity

Wander the halls of an enterprise software outfit looking to improve, and you’ll hear certain things.  First and foremost, you’ll probably hear about unit test coverage.  But, beyond that, you’ll hear discussion of a smattering of other metrics, including cyclomatic complexity.

It’s actually sort of funny.  I mean, I understand why this happens, but hearing middle managers say “test coverage” and “cyclomatic complexity” has the same jarring effect as hearing developers spout business-meeting-speak.  It’s just not what you’d naturally expect.

And you wouldn’t expect it for good reason.  As I’ve argued in the past, code coverage shouldn’t be a management concern.  Nor should cyclomatic complexity.  These are shop-heavy specifics about particular code properties.  If management needs to micromanage at this level of granularity, you have a systemic problem.  You should worry about these properties of your code so that no one else has to.

With that in mind, I’d like to focus specifically on cyclomatic complexity today.  You’ve probably heard this term before.  You may even be able to rattle off a definition.  But let’s take a look in great detail to avoid misconceptions and clear up any hazy areas.

Defining Cyclomatic Complexity

First of all, let’s get a specific working definition.  This is actually surprisingly difficult because not all sources agree on the exact method for computing it.

How can that be?  Well, the term was dreamed up by a man named Thomas McCabe back in 1976.  He wanted a way to measure “the number of linearly independent paths through a program’s source code.”  But beyond that, he didn’t specify the mechanics exactly, leaving that instead to implementers of the metric.

He did, however, give it an intimidating-sounding name.  I mean, complexity makes sense, but what does “cyclomatic” mean, exactly?  Well, “cyclomatic number” serves as an alias for something more commonly called circuit rank.  Circuit rank measures the number of independent cycles within a cyclic graph.  So I suppose he coined the neologism “cyclomatic complexity” by borrowing a relatively obscure discrete math concept for path independence and applying it to code complexity.
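For a single method, that borrowed graph concept reduces to the formula most tools implement, computed over the method’s control flow graph:

```
M = E - N + 2P
```

Here E is the number of edges, N the number of nodes, and P the number of connected components (1 for a single method).  A method containing a lone if statement has E = 4, N = 4, and P = 1, giving M = 2, which matches the two traceable paths.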

Well then.  Now we have cyclomatic complexity, demystified as a term.  Let’s get our hands dirty with examples and implications.


Marker Interface Isn’t a Pattern or a Good Idea

Today, I have the unique opportunity to show you the shortest, easiest code sample of all time.  I’m talking about the so-called marker interface.  Want to see it?  Here you go.
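A sketch of it looks like this (the interface name is illustrative):

```csharp
// A marker interface: no members whatsoever.
public interface IContainsSensitiveInformation
{
}
```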

I told you it was simple.  It’s dead simple for a code sample, so that makes it mind-blowingly simple for a design pattern.  And that’s how people classify it — as a design pattern.

How Is This “Marker Interface” Even a Pattern?

As you’ve inferred from the title, I’m going to go on to make the claim that this is not, in fact, a “design pattern” (or even a good idea).  But before I do that, I should explain what this is and why anyone would do it.  After all, if you’ve never seen this before, I can forgive you for thinking it’s pretty, well, useless.  But it’s actually clever, after a fashion.

The interface itself does nothing, as advertised.  Instead, it serves as metadata for types that “implement” it.  For example, consider this class.
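Something like the following (the Customer properties are invented; the marker interface is repeated so the snippet stands alone):

```csharp
// The marker interface, repeated so this snippet compiles on its own.
public interface IContainsSensitiveInformation { }

// Customer "implements" the marker, but there is nothing to implement.
// The interface only tags the type as holding sensitive data.
public class Customer : IContainsSensitiveInformation
{
    public string Name { get; set; }
    public string SocialSecurityNumber { get; set; }
}
```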

The customer class doesn’t implement the interface.  It has no behavior, so the idea of implementing it is nonsense.  Instead, the customer class uses the interface to signify something to the client code using it.  It marks itself as containing sensitive information, using the interface as a sort of metadata.  Users of the class and marker interface then consume it with code resembling the following:
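A sketch of that client code (the helper names are invented; the marker interface is repeated so the snippet stands alone):

```csharp
// The marker interface, repeated so this snippet compiles on its own.
public interface IContainsSensitiveInformation { }

public static class RecordProcessor
{
    public static void Process(object record)
    {
        // The type test is the only way the marker ever "does" anything.
        if (record is IContainsSensitiveInformation)
            Encrypt(record);

        Save(record);
    }

    static void Encrypt(object record) { /* apply special handling */ }
    static void Save(object record) { /* persist the record */ }
}
```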

Using this scheme, you can opt your classes into special external processing.

Marker Interface Backstory

I’m posting code examples in C#, which makes sense.  After all, NDepend is a .NET ecosystem tool.  But the marker interface actually goes back a long way.  In fact, it goes back to the early days of Java, which baked it in as a first-class concept, much as C# contains a first-class implementation of the iterator design pattern.

In Java, capabilities like serialization and cloning came via marker interfaces.  If you wanted serialization in Java, for instance, you’d tag your class by “implementing” the marker interface Serializable.  Then, third-party processing code, such as ORMs, IoC containers, and others, would make decisions about how to process it.  This became common enough that a wide ecosystem of tools and frameworks agreed on the practice by convention.

C# did not really follow suit.  But an awful lot of people have played in both sandboxes over the years, carrying this practice into the .NET world.  In C#, you’ll see two flavors of this.  First, you have the classic marker interface, wherein people use it the way that I showed above.  Second, you have situations where people get clever with complex interface inheritance schemes and generics in order to force certain constraints on clients.  I won’t directly address that second, more complex use case, but note that all of my forthcoming arguments apply to it as well.

Now, speaking of arguments, let’s get to why I submit that this is neither a “pattern” nor a good idea in modern OOP.  NDepend tells you to avoid this, and I wholeheartedly agree.


Migrating from HTTP to HTTPS in an IIS / ASP.NET environment

Google is urging more and more webmasters to move their sites to HTTPS for security reasons. We made this move last week for our IIS / ASP.NET website https://www.NDepend.com and learned a few tricks along the way. Once you’ve been through it, the process is pretty straightforward, but getting the big picture and handling every detail well is not trivial. So I hope this post will be useful.

HTTPS and Google Analytics Referrals

One reason for moving to HTTPS is that Google Analytics referrals don’t work when the user comes from an HTTPS website. And since most of your referrer websites are likely already on HTTPS, if you stay on HTTP, your Google Analytics becomes blind to them.

Notice that once you’ve moved to HTTPS, you still won’t be able to track referrers that come from an HTTP URL, which is annoying since most of the time you don’t have edit access to those URLs.

Getting the Certificate

You can get free certificates from Let’s Encrypt, but they expire after three months. The renewal process can certainly be automated, but instead we ordered a two-year certificate from gandi.net for only 20 EUR total. For that price you get the basic offering: you won’t obtain a certificate with the Green Address Bar (Extended Validation), which costs around 240 EUR / year.

When ordering the certificate, a CSR (Certificate Signing Request) will be requested. The CSR can be obtained from IIS, as explained here for example, through the menu Generate Certificate Request. You’ll be asked a few questions about who you are, the most important being the Common Name, which will typically be www.yourdomain.com (or, better, a wildcard, as in *.yourdomain.com). If the Common Name doesn’t match the website domain, users will get a warning at browsing time, so this is a sensitive step.

Installing the Certificate in IIS

Once you’ve ordered the certificate, the certificate shop will provide you with a .crt or .cer file containing the encoded certificate. But IIS deals with neither the .crt nor the .cer format; it asks for a .pfx file! This is confusing, and the number one explanation on the web is this one on the Michael Richardson blog. Basically, you’ll use the IIS menu Complete Certificate Request (which follows the initial Generate Certificate Request). Then restart IIS or the server to make sure it picks up the certificate.

Binding the Certificate to the website 443 Port in IIS

At that point the certificate is installed on the server, but it still needs to be bound to your website’s port 443. First make sure that port 443 is open on your server; second, use the Bindings menu on your website in IIS. A binding entry will have to be added, as shown in the picture below.

Once it’s added, just restart the website. Normally, you can now access your website through HTTPS URLs. If not, you may have to tweak your DNS records, but I cannot comment on that since we didn’t have a problem with it.

At that point, both HTTPS and HTTP are browsable. HTTP requests need to be redirected to HTTPS to complete the migration.

301 redirection with Web.Config and the IIS URL Rewrite module

HTTP-to-HTTPS redirection can be achieved by modifying the Web.Config file of your ASP.NET website to tell the IIS URL Rewrite module how to redirect. After a few attempts based on googling, our redirection rules look like this:
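Reassembled from the rules discussed below, the relevant Web.Config fragment looks roughly like this (rule names and the XYZ page are placeholders; adapt the patterns to your own site):

```xml
<system.webServer>
  <rewrite>
    <rules>
      <!-- 301-redirect HTTP requests to HTTPS, except on localhost and for the XYZ page. -->
      <rule name="HTTP to HTTPS" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTPS}" pattern="off" ignoreCase="true" />
          <add input="{HTTP_HOST}" matchType="Pattern" pattern="^localhost(:\d+)?$" negate="true" />
          <add input="{URL}" pattern="(.*)XYZ" negate="true" ignoreCase="true" />
        </conditions>
        <action type="Redirect" url="https://{HTTP_HOST}{REQUEST_URI}" redirectType="Permanent" />
      </rule>
      <!-- Redirect bare-domain requests to www, except on localhost. -->
      <rule name="Add WWW" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^www.*" negate="true" />
          <add input="{HTTP_HOST}" pattern="localhost" negate="true" />
        </conditions>
        <action type="Redirect" url="https://www.{HTTP_HOST}/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```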

If you believe this can be improved, please let me know. At least it works 🙂

  • <add input="{HTTPS}" pattern="off" ignoreCase="true" /> is the main redirection rule that redirects HTTP requests to HTTPS (this is called a 301 redirection). You’ll find many sites on the web to test that your 301 redirection works fine.
  • Make sure to double-check that URLs with GET parameters are redirected properly. On our side, url="https://{HTTP_HOST}{REQUEST_URI}" processes GET parameters seamlessly.
  • <add input="{URL}" pattern="(.*)XYZ" negate="true" ignoreCase="true"/> is important to avoid the HTTP-to-HTTPS redirection for a page named XYZ. Typically, if you have special pages with POST requests, they might be broken by the HTTPS redirection, so the redirection needs to be discarded for those.
  • <add input="{HTTP_HOST}" matchType="Pattern" pattern="^localhost(:\d+)?$" negate="true" /> avoids the HTTPS redirection when testing on localhost.
  • <add input="{HTTP_HOST}" pattern="^www.*" negate="true"/> transforms ndepend.com requests into www.ndepend.com requests,
  • and <add input="{HTTP_HOST}" pattern="localhost" negate="true"/> avoids this WWW redirection on localhost.

Eliminate Mixed Content

At this point you are almost done. Yet depending on the topology of your web site(s) and resources, it is possible that some pages generate a mixed content warning. Mixed content means that some resources (like images or scripts) of an HTTPS web page are served through HTTP. When mixed content is detected, most browsers show a warning to users about a not fully secured page.
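A typical offender looks like this: a page served over HTTPS that still pulls a resource over plain HTTP (URLs are illustrative):

```html
<!-- On a page served from https://www.example.com: -->
<img src="http://www.example.com/img/logo.png" alt="logo" />   <!-- triggers a mixed content warning -->

<!-- Fixed: fetch the resource over HTTPS (or use a relative URL). -->
<img src="https://www.example.com/img/logo.png" alt="logo" />
```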

You’ll find tools that search for mixed content on your website, but you can also crawl the site yourself and use the Chrome console to get details about any mixed content found.

Update Google SiteMap and Analytics

Finally, make sure that your Google sitemap now references HTTPS URLs, and update your Google Analytics property to use HTTPS.

I hope this content saves a few headaches. I am certainly not an SSL or IIS expert, so once again, if some part of this tutorial can be improved, feel free to comment!

Understanding the Difference Between Static And Dynamic Code Analysis

I’m going to cover some relative basics today.  At least, they’re basics when it comes to differentiating between static and dynamic code analysis.  If you’re new to the software development world, you may have no idea what I’m talking about.  Of course, you might be a software development veteran and still not have a great idea.

So I’ll start from basic principles and not assume you’re familiar with the distinction.  But don’t worry if you already know a bit.  I’ll do my best to keep things lively for all reading.

Static and Dynamic Code Analysis: an Allegory

So as not to bore anyone, bear with me as I plant my tongue in cheek a bit and offer an “allegory” that neither personifies intangible ideas nor has any real literary value.  Really, I’m just trying to make the subject of static and dynamic code analysis the slightest bit fun on its face.

So pull your fingers off the keyboard and let’s head down to the kitchen.  We’re going to do some cooking.  And in order to do that, we’re going to need a recipe for, say, chili.

We all know how recipes work in the general life sense.  But let’s break the cooking activity into two basic components.  First, you have the part where you read and synthesize the recipe, prepping your materials and understanding how things will work.  And then you have the execution portion of the activity, wherein you do the actual cooking — and then, if all goes well, the eating.

Static and Dynamic Recipe Analysis

Having conceived of preparing the recipe in two lights, think in a bit more detail about each activity.  What defines them?

First, the recipe synthesis.  Sure, you read through it to get an overview from a procedural perspective, rehearsing what you might do.  But you also make inferences about the eventual results.  If you’ve never actually had chili as a dish, you might contemplate the ingredients and what they’d taste like together.  Beef, tomato sauce, beans, spicy additives…an idea of the flavor forms in your head.

You can also recognize the potential for trouble.  The recipe calls for cilantro, but you have a dinner guest allergic to cilantro.  Yikes!  Reading through the recipe, you anticipate that following it verbatim will create a disastrous result, so you tweak it a little.  You omit the cilantro and double check against other allergies and dining preferences.

But then you have the actual execution portion of preparing a recipe.  However imaginative you might be, picturing the flavor makes a poor substitute for experiencing it.  As you prepare the food, you sample it for yourself so that you can make adjustments as you go.  You observe the meat to make sure it really does brown after a few minutes on high heat, and then you check on the onions to make sure they caramelize.  You observe, inspect, and adapt based on what’s happening around you.

Then you celebrate success by throwing cheese on the result and eating until you’re uncomfortably full.
