NDepend

Improve your .NET code quality with NDepend

Static analysis of .NET Core 2.0 applications

NDepend v2017.3 has just been released with major improvements. One of the most requested features, now available, is support for analyzing .NET Core 2.0 and .NET Standard 2.0 projects. .NET Core and its main flavor, ASP.NET Core, represent a major evolution for the .NET platform. Let's have a look at how NDepend analyzes .NET Core code.

Resolving .NET Core third party assemblies

In this post I'll analyze the OSS application ASP.NET Core / EntityFramework MusicStore hosted on GitHub. From the Visual Studio solution file, NDepend resolves the application assembly MusicStore.dll and also two test assemblies that we won't analyze here. In the screenshot below, we can see that:

  • NDepend recognizes the .NET profile, .NET Core 2.0, for this application.
  • It resolves several folders on the machine that are related to .NET Core, especially NuGet package folders.
  • It resolves all 77 third-party assemblies referenced by MusicStore.dll. This is important since many code rules and other NDepend features take into account what the application code is using.

It is worth noting that the .NET Core platform assemblies are highly granular: a simple website like MusicStore references no fewer than 77 assemblies. This is because the .NET Core framework is implemented through a few NuGet packages that each contain many assemblies. The idea is to ship the application with only the assemblies it actually needs, in order to reduce the memory footprint.

.NET Core 2.0 third party assemblies granularity
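
To make this concrete, here is an illustrative ASP.NET Core 2.0 project file (not MusicStore's actual .csproj): a single reference to the Microsoft.AspNetCore.All metapackage is enough to transitively pull in dozens of framework assemblies.

    <!-- Illustrative only: a minimal ASP.NET Core 2.0 project file.
         The single metapackage reference below transitively brings in
         the many framework assemblies discussed above. -->
    <Project Sdk="Microsoft.NET.Sdk.Web">
      <PropertyGroup>
        <TargetFramework>netcoreapp2.0</TargetFramework>
      </PropertyGroup>
      <ItemGroup>
        <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.0" />
      </ItemGroup>
    </Project>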

NDepend v2017.3 has a new heuristic to resolve .NET Core assemblies. This heuristic is based on the .deps.json files that list the NuGet packages referenced. Here we can see that 3 NuGet packages are referenced by MusicStore. From these package names, the heuristic resolves the third-party assemblies (in the NuGet store) referenced by the application assemblies (MusicStore.dll in our case).

NuGet packages referenced in .deps.json file
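
For readers who have never opened one, here is a trimmed, illustrative .deps.json fragment (the package names and versions are placeholders, not MusicStore's actual dependencies). The heuristic reads the package entries listed under the target framework and maps them back to assemblies in the NuGet store.

    {
      "targets": {
        ".NETCoreApp,Version=v2.0": {
          "MusicStore/1.0.0": {
            "dependencies": {
              "Microsoft.AspNetCore.All": "2.0.0",
              "Microsoft.EntityFrameworkCore.InMemory": "2.0.0"
            },
            "runtime": { "MusicStore.dll": {} }
          }
        }
      },
      "libraries": {
        "Microsoft.AspNetCore.All/2.0.0": {
          "type": "package",
          "serviceable": true
        }
      }
    }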

Analyzing .NET Standard assemblies

Let's be clear that NDepend v2017.3 can also analyze .NET Standard assemblies. Interestingly enough, since .NET Standard 2.0, .NET Standard assemblies reference a single assembly named netstandard.dll, found in C:\Users\[user]\.nuget\packages\NETStandard.Library\2.0.0\build\netstandard2.0\ref\netstandard.dll.

By decompiling this assembly, we can see that it doesn’t contain any implementation, but it does contain all types that are part of .NET Standard 2.0. This makes sense if we remember that .NET Standard is not an implementation, but is a set of APIs implemented by various .NET profiles, including .NET Core 2.0, the .NET Framework v4.6.1, Mono 5.4 and more.
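
As an illustration (a hand-written approximation, not an actual decompiler dump), the decompiled surface of a reference assembly like netstandard.dll looks roughly like this: full type and member declarations, with placeholder bodies instead of real code.

    // Approximation of what a decompiler shows for netstandard.dll:
    // the complete .NET Standard 2.0 API surface, with no implementation behind it.
    namespace System
    {
        public sealed class String : IComparable
        {
            public int Length { get { throw null; } }
            public static bool IsNullOrEmpty(String value) { throw null; }
            // ...and so on for every member of the .NET Standard 2.0 surface
        }
    }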

Browsing how the application is using .NET Core

Let’s come back to the MusicStore application that references 77 assemblies. This assembly granularity makes it impractical to browse dependencies with the dependency graph, since this generates dozens of items. We can see that NDepend suggests viewing this graph as a dependency matrix instead.

NDepend Dependency Graph on an ASP.NET Core 2.0 project

The NDepend dependency matrix scales seamlessly to a large number of items. The numbers in the cells also provide a good hint about the coupling they represent. For example, here we can see that 22 members of the assembly Microsoft.EntityFrameworkCore.dll are used by 32 methods of the assembly MusicStore.dll, and a menu lets us dig into this coupling.

NDepend Dependency Matrix on an ASP.NET Core 2.0 project

Clicking the menu item Open this dependency shows a new dependency matrix where only the members involved are kept (the 32 elements in columns are using the 22 elements in rows). This way you can easily dig into which part of the application is using what.

NDepend Dependency Matrix on an ASP.NET Core 2.0 project

All NDepend features now work when analyzing .NET Core

We saw how to browse the structure of a .NET Core application, but let’s underline that all NDepend features now work when analyzing .NET Core applications. On the Dashboard we can see code quality metrics related to Quality Gates, Code Rules, Issues and Technical Debt.

NDepend Dashboard on an ASP.NET Core 2.0 project

Also, most of the default code rules have been improved to avoid reporting false positives on .NET Core projects.

NDepend code rules on an ASP.NET Core 2.0 project

We hope you’ll enjoy using all your favorite NDepend features on your .NET Core projects!

Migrating from HTTP to HTTPS in an IIS / ASP.NET environment

Google is urging more and more webmasters to move their sites to HTTPS for security reasons. We made this move last week for our IIS / ASP.NET website https://www.NDepend.com and we learned a few tricks along the way. Once you've been through it, the process is pretty straightforward, but getting the big picture and handling every detail well is not trivial. So I hope this post will be useful.

HTTPS and Google Analytics Referrals

One reason for moving to HTTPS is that Google Analytics referrals don't work when the user comes from an HTTPS website. And since most of your referrer websites are likely already on HTTPS, if you stay on HTTP your Google Analytics becomes blind to them.

Notice that once you’ve moved to HTTPS, you still won’t be able to track referrers that come from an HTTP url, which is annoying since most of the time you don’t have edit-access to these urls.

Getting the Certificate

You can get free certificates from LetsEncrypt.com, but they expire after three months. The renewal process can certainly be automated, but instead we ordered a 2-year certificate from gandi.net for only 20 EUR. For that price you get the minimum: you won't obtain a certificate with the Green Address Bar, which costs around 240 EUR / year.

When ordering the certificate, a CSR (Certificate Signing Request) will be requested. The CSR can be obtained from IIS as explained here for example, through the menu Generate Certificate Request. A few questions about who you are will be asked, the most important being the Common Name, which will typically be www.yourdomain.com (or, better, a wildcard such as *.yourdomain.com). If the Common Name doesn't match the web site domain, the user will get a warning at browsing time, so this is a sensitive step.

Installing the Certificate in IIS

Once you've ordered the certificate, the certificate shop will provide you with a .crt or .cer file containing encoded content. This is the certificate. But IIS doesn't deal with the .crt or .cer formats; it asks for a .pfx file! This is misleading, and the number one explanation on the web is this one on the Michael Richardson blog. Basically you'll use the IIS menu Complete Certificate Request (which follows the earlier Generate Certificate Request). Then restart IIS or the server to make sure it takes the certificate into account.

Binding the Certificate to the website 443 Port in IIS

At that point the certificate is installed on the server. The certificate now needs to be bound to your website's port 443. First make sure that port 443 is open on your server, and second, use the IIS Bindings menu on your website. A binding entry will have to be added as shown in the picture below.

Once it's added, just restart the website. Normally, you can now access your website through HTTPS urls. If not, you may have to tweak the DNS pointers somehow, but I cannot comment on that since we didn't have any problem with it.

At that point, both HTTPS and HTTP are browsable. HTTP requests need to be redirected to HTTPS to complete the migration.

301 redirection with Web.Config and the IIS URL Rewrite module

HTTP to HTTPS redirection can be achieved by modifying the Web.Config file of your ASP.NET website, to tell the IIS URL Rewrite module how to redirect. After a few attempts based on googling, our redirection rules look like this:
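
Roughly, the rewrite section looks like the sketch below. This is a reconstruction assembled from the conditions detailed in the bullet list that follows; the rule names and attribute ordering are my own, so treat it as a sketch to adapt rather than a drop-in configuration.

    <!-- Sketch reconstructed from the conditions described below;
         adapt rule names, patterns and ordering to your own site. -->
    <system.webServer>
      <rewrite>
        <rules>
          <rule name="Redirect to HTTPS" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
              <add input="{HTTPS}" pattern="off" ignoreCase="true" />
              <add input="{URL}" pattern="(.*)XYZ" negate="true" ignoreCase="true"/>
              <add input="{HTTP_HOST}" matchType="Pattern" pattern="^localhost(:\d+)?$" negate="true" />
            </conditions>
            <action type="Redirect" url="https://{HTTP_HOST}{REQUEST_URI}" redirectType="Permanent" />
          </rule>
          <rule name="Redirect to www" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
              <add input="{HTTP_HOST}" pattern="^www.*" negate="true"/>
              <add input="{HTTP_HOST}" pattern="localhost" negate="true"/>
            </conditions>
            <action type="Redirect" url="https://www.ndepend.com{REQUEST_URI}" redirectType="Permanent" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>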

If you believe this can be improved, please let me know. At least it works 🙂

  • <add input="{HTTPS}" pattern="off" ignoreCase="true" /> is the main condition that triggers the HTTP to HTTPS redirection (a permanent 301 redirect). You'll find many sites on the web to test that your 301 redirection works fine.
  • Make sure to double check that urls with GET params are redirected well. On our side, url="https://{HTTP_HOST}{REQUEST_URI}" processes GET params seamlessly.
  • <add input="{URL}" pattern="(.*)XYZ" negate="true" ignoreCase="true"/> is important to avoid the HTTP to HTTPS redirection for a page named XYZ. Typically, if you have special pages with POST requests, they might be broken by the HTTPS redirection, so the redirection needs to be discarded for those.
  • <add input="{HTTP_HOST}" matchType="Pattern" pattern="^localhost(:\d+)?$" negate="true" /> avoids the HTTPS redirection when testing on localhost.
  • <add input="{HTTP_HOST}" pattern="^www.*" negate="true"/> just transforms ndepend.com requests into www.ndepend.com requests,
  • and <add input="{HTTP_HOST}" pattern="localhost" negate="true"/> avoids this www redirection on localhost.

Eliminate Mixed Content

At this point you are almost done. Yet depending on the topology of your web site(s) and resources, it is possible that some pages generate a mixed content warning. Mixed content means that some resources (like images or scripts) of an HTTPS web page are served over HTTP. When mixed content is detected, most browsers show a warning to users about a page that is not fully secure.
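
For example (using a placeholder domain), a page served over HTTPS that still pulls a script over plain HTTP triggers the warning; pointing the reference at the HTTPS version of the resource fixes it.

    <!-- On an HTTPS page, this script reference is mixed content: -->
    <script src="http://www.example.com/app.js"></script>

    <!-- Fixed: load the resource over HTTPS as well -->
    <script src="https://www.example.com/app.js"></script>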

You’ll find tools to search for mixed content on your web site, but you can also crawl the site yourself and use the Chrome console to get details about mixed content found.

Update Google SiteMap and Analytics

Finally, make sure that your Google sitemap now references HTTPS urls, and update your Google Analytics property settings to use the HTTPS version of your site's URL.
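
For the sitemap, that simply means every <loc> entry should now carry the https scheme, along the lines of this illustrative fragment:

    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://www.ndepend.com/</loc>
      </url>
      <!-- ...one <url> entry per page, all with https urls -->
    </urlset>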

I hope this content saves a few headaches. I am certainly not an SSL nor an IIS expert, so once again, if some part of this tutorial can be improved, feel free to comment!

Why NDepend Uses Google’s Page Rank

I remember my early days of blogging as sort of a comedy of errors.  Oh, don’t get me wrong.  I don’t think those early posts were terrible, since I’d always written a lot.  Rather, I knew very little about everything besides the writing.  For example, I initially thought link spammers were just somewhat daft blog commenters.  I stumbled through various mistakes and learned the art of blogging in fits and starts.  This included my discovery of something called page rank.

Page rank had a relatively involved calculation, but that didn’t interest me at the time.  Instead, I found myself dazzled by some gamification.  Sites like this one would take your domain and a captcha as input and spit out a score from 0 to 10 as output.  That simply, they turned my blogging world upside down.  I now had a score to chase and a means of comparing myself against others.  And I vaguely understood that getting more inbound links would increase my page rank score.

Of course, as an introvert, I struggle with outgoing self-promotion.  Cold outreach to people to see if they’d link to me never seriously occurred to me.  Instead, I reasoned that I would play the long game.  Write enough posts, and the shares start to come.  And then when the shares come, so too will the links.  So I watched my page rank inch slowly upward over time.

The Decline of Page Rank

My page rank ticked upward until one day it didn’t anymore.  Turns out, Google slowly killed it over the course of a number of years.  Ten months passed between its penultimate update and its final one.  So there I stood (metaphorically), waiting for a boost to my rank that would never come.

But why did Google kill page rank?  Wouldn’t such an easily digestible construct continue to help people?  Well, sort of.  Unfortunately, it disproportionately helped the wrong sort of people.

The Google founders developed the concept during their time at Stanford.  Conceptually, the page rank algorithm regards a link from site A to site B as a "vote" for site B, by site A.  But not all pages get to "vote" equally.  The higher a page's rank, the more its vote is worth, creating a conceptual feedback loop.

On the surface, this sounds great, and, in many ways, it was.  As you can imagine, a site with a ton of inbound links, like a government study or a news outlet, would accumulate a great deal of rank.  Since employees would carefully curate such sites, you could put a lot of stock in a site to which they linked (and search engines did).  So in theory, you had a democratized system in which the sites best regarded by the public had the best rank.

But in this theory, no link spammers existed.  If you wanted good page rank, you could produce high quality, popular content.  Or you could pay some shady outfit to carpet bomb blog comment sections with links to your site.  Because of this fatal flaw, page rank eventually dwindled to obscurity.

A Useful Reappropriation of Page Rank

For clarity, understand that Google (probably) still uses some incarnation of this scheme.  But they no longer update the easily consumed public version of it.  They now use it as only one of many factors in what they display in response to searches.  The heyday of comparing page rank scores for sites has come and gone.  But that doesn’t mean we can’t use it elsewhere, and to great efficacy.

For instance, consider applying this to codebases.  Instead of a situation where website A links to website B, imagine a situation where type A refers directly to type B.  Now, imagine your codebase as a (hopefully acyclic) directed graph whose nodes are types and whose edges are references.  You start to have an interesting vehicle for reasoning about your codebase.

What would a high rank mean in this context?  Well, a relatively high rank for a type would mean that other types tended to refer to it at a high rate.  Types with relatively low (or zero) rank would have few or no types depending on them, existing at the edge of your code.  And the types with the highest rank?  These would be types used by other types with high rank.
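
To make the idea concrete, here is a rough C# sketch of the classic PageRank iteration applied to a type graph.  This is illustrative only, not NDepend's actual implementation; the usedBy and outDegree inputs are hypothetical structures you would extract from your own code model.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class TypeRank
    {
        // usedBy[t]   : the types that directly reference type t (inbound "votes")
        // outDegree[t]: how many types t itself references (how its vote is diluted)
        // Assumes every referenced type also appears as a key in both dictionaries.
        public static Dictionary<string, double> Compute(
            Dictionary<string, List<string>> usedBy,
            Dictionary<string, int> outDegree,
            double damping = 0.85,
            int iterations = 50)
        {
            var types = usedBy.Keys.ToList();
            int n = types.Count;
            var rank = types.ToDictionary(t => t, _ => 1.0 / n);

            for (int i = 0; i < iterations; i++)
            {
                var next = new Dictionary<string, double>();
                foreach (var t in types)
                {
                    // Each referencing type votes for t, weighted by its own rank
                    // and divided by the number of types it references in total.
                    double votes = usedBy[t]
                        .Where(u => outDegree[u] > 0)
                        .Sum(u => rank[u] / outDegree[u]);
                    next[t] = (1 - damping) / n + damping * votes;
                }
                rank = next;
            }
            return rank;
        }
    }

In such a ranking, a utility type that everything calls would float toward the top, while controllers and other entry points, which nothing else references, would sit near the bottom.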

Continue reading Why NDepend Uses Google’s Page Rank

How to Evaluate Your Static Analysis Process

I often get inquiries from clients and prospects about setting up and operationalizing static analysis.  This makes sense.  After all, we live in a world short on time and with software developers in great demand.  These clients always seem to have more to do than bandwidth allows.  And static analysis effectively automates subtle but important considerations in software development.

Specifically, it automates peer review to a certain extent.  The static analyzer acts as a non-judging, mute reviewer of sorts.  It also stands in for a tiny bit of QA’s job, calling attention to possible issues before they leave the team’s environment.  And, finally, it helps you out by acting as architect.  Team members can learn from the tool’s guidance.

So, as I’ve said, receiving setup inquiries doesn’t surprise me.  And I applaud these clients for pursuing this path of improvement.

What does surprise me, however, is how few organizations seem to ask another, related question.  They rarely ask for feedback about the efficacy of their currently implemented process.  Many organizations seem to consider static analysis implementation a checkbox kind of activity.  Have you done it?  Check.  Good.

So today, I’ll talk about checking in on an existing static analysis implementation.  How should you evaluate your static analysis process?

Continue reading How to Evaluate Your Static Analysis Process


Static Analysis for the Build Machine?

I remember my earliest experiences with static analysis.  Probably a decade ago, I started to read about it during grad school and poke around with it at work.  Immediately, I knew I had discovered a powerful advantage for programmers.  These tools automated knowledge.

While I felt happy to share the knowledge with coworkers, their lack of interest didn’t disappoint me.  After all, it felt as though I had some sort of trade secret.  If those around me chose not to take advantage, I would shine by comparison.  (I have since, I’d like to think, matured a bit.)  Static analysis became my private competitive advantage — Sabermetrics for programmers.

So as you can imagine, running it on the build machine would not have occurred to me.  And that assumes a sophisticated enough setup that doing so made sense (not really the case back then).  Static analysis was my ace in the hole for writing good code — a personal choice and technique.

Fast forward a decade.  I have now grown up, worked with many more teams, and played many more roles.  And, of course, the technological landscape has changed.  All of that combined to cause a complete reversal of my opinion.  Static analysis and its advantages matter far too much not to use it on the build machine.  Today, I’d like to expand on that a bit.
Continue reading Static Analysis for the Build Machine?


How to Analyze a Complex Solution

I’ve made no secret that I spend a lot of time these days analyzing code bases as a consultant, and I’ve also made no secret that I use NDepend (and its Java counterpart, JArchitect) to do this analysis.  As a result, I get a lot of questions about analyzing code bases and about the tooling.  Today, I’ll address a question I’ve heard.

Can NDepend analyze a complex solution (i.e. more than 100 projects)?  If so, how do you do this, and how does it work?

Can NDepend Handle It?

For the first question — in a word, yes.  You certainly can do this with NDepend.  As a matter of fact, NDepend will handle the crippling overhead of this many projects better than just about any tool out there.  It will be, so to speak, the least of your problems.

How should you use it in this situation?  You should use it to help yourself get out of the situation.  You should use it as an aid to consolidating and partitioning into different solutions.

The Trouble with Scale

If you download a trial of NDepend and use it on your complex solution, you’ll be treated to an impressive number of project rules out of the box.  One of those rules that you might not notice at first is “avoid partitioning the code base through many small library assemblies.”  You can see the rule and explanation here.

We advise having less, and bigger, .NET assemblies and using the concept of namespaces to define logical components.

You can probably now understand why I gave the flippant-seeming answer above.  In a sense, it’d be like asking, “how do I use NDepend on an assembly where I constantly swallow exceptions with empty catch blocks.”  The answer would be, “you can use it to help you stop doing that.”

Continue reading How to Analyze a Complex Solution