NDepend

Improve your .NET code quality with NDepend

Announcing the Singleton Challenge

About a month ago, I made a post about what the singleton pattern costs you.  Although I stated my case in terms of trade-offs rather than prescriptive advice, I still didn’t paint the singleton pattern in a flattering light.  Instead, I talked about the problems that singletons tend to create in codebases.

Whenever I’ve talked about the singleton pattern, wherever I’m blogging, the general response follows a typical signature.  The post attracts a relatively large number of shares (usually indicating support) and a lot of traffic (often indicating support) while generating comments (which often contain objections).  Generally, the response follows the pattern of this question on Stack Overflow.  At the time of writing:

  • 1,018 net upvotes for an answer explaining why they’re bad.
  • 377 net upvotes for an answer explaining why they’re usually a mistake.
  • 280 net upvotes for an answer saying they get a bad rap only because people misuse them (with a disproportionate number of mitigating downvotes).
  • 185 net upvotes for an answer explaining why they’re terrible.

It seems as though something like 80 percent of the active developer population has come around to thinking, “Yeah, let’s not do this anymore.”  But a vocal 20 percent minority resents that, thinking the rest are throwing the baby out with the bathwater.

Perusing some of the comments on the blog and discussion sites where it was shared, I found two themes of objection to the post.

  1. If singleton is so bad, why do DI frameworks provide singleton object instances?  (See the sketch just after this list.)
  2. Your singleton examples are bad, ergo the problem is that you misuse singletons.
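To frame that first objection concretely, here's a minimal sketch of my own (not from the original post), using Microsoft.Extensions.DependencyInjection, contrasting the classic pattern with a container-managed singleton lifetime.  The class names are hypothetical.

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

// Classic GoF singleton: the class itself enforces a single instance
// and exposes it through a global access point.
public sealed class ClassicClock
{
    public static ClassicClock Instance { get; } = new ClassicClock();
    private ClassicClock() { }
    public DateTime Now => DateTime.UtcNow;
}

// DI "singleton": an ordinary, injectable class.  The container, not the
// class, decides that exactly one instance will exist per container.
public interface IClock { DateTime Now { get; } }
public sealed class SystemClock : IClock { public DateTime Now => DateTime.UtcNow; }

public static class CompositionRoot
{
    public static ServiceProvider Build() =>
        new ServiceCollection()
            .AddSingleton<IClock, SystemClock>() // singleton lifetime, not singleton pattern
            .BuildServiceProvider();
}
```

Note that only the first form hard-codes global access into the class itself; the second merely configures a lifetime.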

Today, I’d like to address those threads of criticism.  But not ultimately in the way you might think.

Continue reading Announcing the Singleton Challenge

Understanding Cyclomatic Complexity

Wander the halls of an enterprise software outfit looking to improve, and you’ll hear certain things.  First and foremost, you’ll probably hear about unit test coverage.  But, beyond that, you’ll hear discussion of a smattering of other metrics, including cyclomatic complexity.

It’s actually sort of funny.  I mean, I understand why this happens, but hearing middle managers say “test coverage” and “cyclomatic complexity” has the same jarring effect as hearing developers spout business-meeting-speak.  It’s just not what you’d naturally expect.

And you wouldn’t expect it for good reason.  As I’ve argued in the past, code coverage shouldn’t be a management concern.  Nor should cyclomatic complexity.  These are shop-heavy specifics about particular code properties.  If management needs to micromanage at this level of granularity, you have a systemic problem.  You should worry about these properties of your code so that no one else has to.

With that in mind, I’d like to focus specifically on cyclomatic complexity today.  You’ve probably heard this term before.  You may even be able to rattle off a definition.  But let’s take a look in great detail to avoid misconceptions and clear up any hazy areas.

Defining Cyclomatic Complexity

First of all, let’s get a specific working definition.  This is actually surprisingly difficult because not all sources agree on the exact method for computing it.

How can that be?  Well, the term was dreamed up by a man named Thomas McCabe back in 1976.  He wanted a way to measure “the number of linearly independent paths through a program’s source code.”  But beyond that, he didn’t specify the mechanics exactly, leaving that instead to implementers of the metric.

He did, however, give it an intimidating-sounding name.  I mean, complexity makes sense, but what does “cyclomatic” mean, exactly?  Well, “cyclomatic number” serves as an alias for something more commonly called circuit rank.  Circuit rank measures the number of independent cycles within a cyclic graph.  So I suppose he coined the neologism “cyclomatic complexity” by borrowing a relatively obscure discrete math concept for path independence and applying it to code complexity.

Well then.  Now we have cyclomatic complexity, demystified as a term.  Let’s get our hands dirty with examples and implications.
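As a quick taste before we dig in, consider this small C# method (my own illustration, not from the original post), annotated with the common counting rules:

```csharp
// Cyclomatic complexity = decision points + 1.
// This method has four decision points (two ifs, one for, one short-circuit
// &&), so its cyclomatic complexity is 5 under the most common counting
// rules.  Tools that don't count && and || would report 4, which is exactly
// the kind of disagreement between sources described above.
public static int CountPositiveEvens(int[] values)
{
    if (values == null)                          // +1
        return 0;

    var count = 0;
    for (var i = 0; i < values.Length; i++)      // +1
    {
        if (values[i] > 0 && values[i] % 2 == 0) // +1 (if), +1 (&&)
            count++;
    }
    return count;
}
```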

Continue reading Understanding Cyclomatic Complexity

Marker Interface Isn’t a Pattern or a Good Idea

Today, I have the unique opportunity to show you the shortest, easiest code sample of all time.  I’m talking about the so-called marker interface.  Want to see it?  Here you go.
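(The original snippet doesn’t survive in this excerpt; what follows is a representative stand-in, with a name inferred from the discussion below.)

```csharp
// The entire "pattern": an interface with no members whatsoever.
public interface IContainSensitiveInformation
{
}
```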

I told you it was simple.  It’s dead simple for a code sample, so that makes it mind-blowingly simple for a design pattern.  And that’s how people classify it — as a design pattern.

How Is This “Marker Interface” Even a Pattern?

As you’ve inferred from the title, I’m going to go on to make the claim that this is not, in fact, a “design pattern” (or even a good idea).  But before I do that, I should explain what this is and why anyone would do it.  After all, if you’ve never seen this before, I can forgive you for thinking it’s pretty, well, useless.  But it’s actually clever, after a fashion.

The interface itself does nothing, as advertised.  Instead, it serves as metadata for types that “implement” it.  For example, consider this class.
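(Again a reconstruction; the property names here are hypothetical.)

```csharp
// The class "implements" the empty marker purely to tag itself as
// containing sensitive information.
public class Customer : IContainSensitiveInformation
{
    public string Name { get; set; }
    public string SocialSecurityNumber { get; set; }
}
```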

The customer class doesn’t really implement the interface.  The interface has no behavior, so the idea of implementing it is nonsense.  Instead, the customer class uses the interface to signify something to the client code using it.  It marks itself as containing sensitive information, using the interface as a sort of metadata.  Users of the class and marker interface then consume it with code resembling the following:
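(A sketch of the idea; the storage helpers are hypothetical.)

```csharp
// Client code keys off the marker with a runtime type check and opts the
// instance into special processing.
public static class CustomerStore
{
    public static void Save(Customer customer)
    {
        if (customer is IContainSensitiveInformation)
            SaveEncrypted(customer);   // hypothetical special handling
        else
            SavePlainText(customer);
    }

    private static void SaveEncrypted(Customer customer) { /* ... */ }
    private static void SavePlainText(Customer customer) { /* ... */ }
}
```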

Using this scheme, you can opt your classes into special external processing.

Marker Interface Backstory

I’m posting code examples in C#, which makes sense.  After all, NDepend is a .NET ecosystem tool.  But the marker interface actually goes back a long way.  In fact, it goes back to the early days of Java, which baked it in as a first-class concept, kind of like how C# contains a first-class implementation of the iterator design pattern.

In Java, concepts like serialization and cloning came via marker interfaces.  If you wanted serialization in Java, for instance, you’d tag your class by “implementing” the marker interface Serializable.  Then third-party processing code, such as ORMs, IoC containers, and others, would make decisions about how to process it.  The practice became common enough that a wide ecosystem of tools and frameworks converged on it by convention.

C# did not really follow suit.  But an awful lot of people have played in both sandboxes over the years, carrying this practice into the .NET world.  In C#, you’ll see two flavors of this.  First, you have the classic marker interface, wherein people use it the way that I showed above.  Second, you have situations where people get clever with complex interface inheritance schemes and generics in order to force certain constraints on clients.  I won’t directly address that second, more complex use case, but note that all of my forthcoming arguments apply to it as well.
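(For reference, that second flavor tends to look something like the following sketch of mine.)

```csharp
// The marker pressed into service as a generic constraint: only "marked"
// types can be passed to this method.  A sketch of the style, not a
// recommendation.
public static class Auditor
{
    public static void Audit<T>(T instance) where T : IContainSensitiveInformation
    {
        // ...special processing reserved for marked types...
    }
}
```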

Now, speaking of arguments, let’s get to why I submit that this is neither a “pattern” nor a good idea in modern OOP.  NDepend tells you to avoid this, and I wholeheartedly agree.

Continue reading Marker Interface Isn’t a Pattern or a Good Idea

Migrating from HTTP to HTTPS in an IIS / ASP.NET environment

Google is urging more and more webmasters to move their sites to HTTPS for security reasons. We made this move last week for our IIS / ASP.NET website https://www.NDepend.com, and we learned a few tricks along the way. The process becomes pretty straightforward once you’ve been through it, but getting the big picture and handling every detail well is not trivial. So I hope this post will be useful.

HTTPS and Google Analytics Referrals

One reason for moving to HTTPS is that Google Analytics referrals don’t work when the user comes from an HTTPS website. And since most of your referrer websites are likely already HTTPS, if you stick with HTTP, your Google Analytics referral data goes blind.

Notice that once you’ve moved to HTTPS, you still won’t be able to track referrers that come from an HTTP URL, which is annoying since most of the time you don’t have edit access to those URLs.

Getting the Certificate

You can get free certificates from Let’s Encrypt, but they expire after three months. The renewal process can certainly be automated, but instead we ordered a two-year certificate from gandi.net for only 20 EUR total. For that price you get the minimum and won’t obtain a certificate with the Green Address Bar, which costs around 240 EUR / year.

When ordering the certificate, a CSR (Certificate Signing Request) will be requested. The CSR can be obtained from IIS, as explained here for example, through the menu Generate Certificate Request. You’ll be asked a few questions about who you are, the most important being the Common Name, which will typically be www.yourdomain.com (or, better, a wildcard, as in *.yourdomain.com). If the Common Name doesn’t match the website domain, users will get a warning at browsing time, so this is a sensitive step.

Installing the Certificate in IIS

Once you’ve ordered the certificate, the certificate shop will provide you with a .crt or .cer file. This is the certificate. But IIS doesn’t deal with the .crt or .cer formats; it asks for a .pfx file! This is misleading, and the number one explanation on the web is this one on the Michael Richardson blog. Basically, you’ll use the IIS menu Complete Certificate Request (which follows the initial Generate Certificate Request). Then restart IIS or the server to make sure it picks up the certificate.

Binding the Certificate to the Website’s Port 443 in IIS

At that point, the certificate is installed on the server, but it still needs to be bound to your website’s port 443. First, make sure that port 443 is open on your server; second, use the Bindings menu on your website in IIS and add an HTTPS binding entry on port 443 that references the certificate.

Once the binding is added, restart the website. Normally, you can now access your website through HTTPS URLs. If not, you may have to tweak the DNS pointers somehow, but I cannot comment on that since we didn’t have a problem with it.

At that point, both HTTPS and HTTP are browsable. HTTP requests need to be redirected to HTTPS to complete the migration.

301 Redirection with Web.Config and the IIS URL Rewriter

HTTP-to-HTTPS redirection can be achieved by modifying the Web.Config file of your ASP.NET website to tell the IIS URL rewriter how to redirect. After a few attempts based on googling, our redirection rules look like this:
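(The exact snippet didn’t survive in this excerpt; the following is a reconstruction assembled from the fragments explained in the list below, not the verbatim original. XYZ is the same placeholder page name used below.)

```xml
<!-- Inside <system.webServer> in Web.Config (reconstructed sketch) -->
<rewrite>
  <rules>
    <rule name="HTTP to HTTPS" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTPS}" pattern="off" ignoreCase="true" />
        <add input="{URL}" pattern="(.*)XYZ" negate="true" ignoreCase="true" />
        <add input="{HTTP_HOST}" matchType="Pattern" pattern="^localhost(:\d+)?$" negate="true" />
      </conditions>
      <action type="Redirect" url="https://{HTTP_HOST}{REQUEST_URI}" redirectType="Permanent" />
    </rule>
    <rule name="Add WWW" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^www.*" negate="true" />
        <add input="{HTTP_HOST}" pattern="localhost" negate="true" />
      </conditions>
      <action type="Redirect" url="https://www.{HTTP_HOST}/{R:1}" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>
```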

If you believe this can be improved, please let me know. At least it works 🙂

  • <add input="{HTTPS}" pattern="off" ignoreCase="true" /> is the main redirection rule that redirects HTTP requests to HTTPS (a 301 permanent redirect). You’ll find many sites on the web to test that your 301 redirection works fine.
  • Make sure to double-check that URLs with GET params redirect properly. On our side, url="https://{HTTP_HOST}{REQUEST_URI}" processes GET params seamlessly.
  • <add input="{URL}" pattern="(.*)XYZ" negate="true" ignoreCase="true"/> is important to avoid the HTTP-to-HTTPS redirection for a page named XYZ. Typically, if you have special pages with POST requests, they might break with the HTTPS redirection, so the redirection needs to be discarded for those.
  • <add input="{HTTP_HOST}" matchType="Pattern" pattern="^localhost(:\d+)?$" negate="true" /> avoids the HTTPS redirection when testing on localhost.
  • <add input="{HTTP_HOST}" pattern="^www.*" negate="true"/> transforms ndepend.com requests into www.ndepend.com requests,
  • and <add input="{HTTP_HOST}" pattern="localhost" negate="true"/> avoids this WWW redirection on localhost.

Eliminate Mixed Content

At this point, you are almost done. Yet depending on the topology of your website(s) and resources, it is possible that some pages generate a mixed content warning. Mixed content means that some resources (like images or scripts) of an HTTPS web page are served through HTTP. When mixed content is detected, most browsers show users a warning about the page not being fully secure.

You’ll find tools to search for mixed content on your web site, but you can also crawl the site yourself and use the Chrome console to get details about mixed content found.

Update Google Sitemap and Analytics

Finally, make sure that your Google sitemap now references HTTPS URLs, and update your Google Analytics property settings for HTTPS.

I hope this content saves you a few headaches. I am certainly not an SSL expert, nor an IIS expert, so once again, if some part of this tutorial can be improved, feel free to comment!

Understanding the Difference Between Static And Dynamic Code Analysis

I’m going to cover some relative basics today.  At least, they’re basics when it comes to differentiating between static and dynamic code analysis.  If you’re new to the software development world, you may have no idea what I’m talking about.  Of course, you might be a software development veteran and still not have a great idea.

So I’ll start from basic principles and not assume you’re familiar with the distinction.  But don’t worry if you already know a bit.  I’ll do my best to keep things lively for all reading.

Static and Dynamic Code Analysis: an Allegory

So as not to bore anyone, bear with me as I plant my tongue in cheek a bit and offer an “allegory” that neither personifies intangible ideas nor has any real literary value.  Really, I’m just trying to make the subject of static and dynamic code analysis the slightest bit fun on its face.

So pull your fingers off the keyboard and let’s head down to the kitchen.  We’re going to do some cooking.  And in order to do that, we’re going to need a recipe for, say, chili.

We all know how recipes work in the general life sense.  But let’s break the cooking activity into two basic components.  First, you have the part where you read and synthesize the recipe, prepping your materials and understanding how things will work.  And then you have the execution portion of the activity, wherein you do the actual cooking — and then, if all goes well, the eating.

Static and Dynamic Recipe Analysis

Having conceived of preparing the recipe in two lights, think in a bit more detail about each activity.  What defines them?

First, the recipe synthesis.  Sure, you read through it to get an overview from a procedural perspective, rehearsing what you might do.  But you also make inferences about the eventual results.  If you’ve never actually had chili as a dish, you might contemplate the ingredients and what they’d taste like together.  Beef, tomato sauce, beans, spicy additives…an idea of the flavor forms in your head.

You can also recognize the potential for trouble.  The recipe calls for cilantro, but you have a dinner guest allergic to cilantro.  Yikes!  Reading through the recipe, you anticipate that following it verbatim will create a disastrous result, so you tweak it a little.  You omit the cilantro and double check against other allergies and dining preferences.

But then you have the actual execution portion of preparing a recipe.  However imaginative you might be, picturing the flavor makes a poor substitute for experiencing it.  As you prepare the food, you sample it for yourself so that you can make adjustments as you go.  You observe the meat to make sure it really does brown after a few minutes on high heat, and then you check on the onions to make sure they caramelize.  You observe, inspect, and adapt based on what’s happening around you.

Then you celebrate success by throwing cheese on the result and eating until you’re uncomfortably full.

Continue reading Understanding the Difference Between Static And Dynamic Code Analysis

Using NDepend To Get Going Quickly on C# Projects

Assuming you’ve had at least one job working on one or more C# projects, let me paint a familiar picture for you.  I’m talking about your onboarding time with a new group.  It’s always an exciting and awkward experience as you feel out teammates and new codebases.

On day one, you arrive.  But rarely does anyone expect you to contribute meaningfully on that day.  In fact, you probably won’t even contribute meaningfully that first week.  Instead, you hear plenty of talk about “learning curves” and how this environment is uniquely challenging.  Ironically, every shop I visit as a consultant claims to be uniquely challenging.

C# Projects: the Onboarding Phase

What, then, does onboarding usually look like?  I’ll build you a composite picture, based on my travels.  I’ll leave out the HR component, with its new team member lunches and paperwork; here, we’ll consider only the technical aspects.

On the first day, you show up and a developer on the team works with you on the basics.  You get access to the mundane necessities: network shares, the internal SharePoint, version control, etc.  If you get lucky, all of that goes smoothly, and you actually get source code on your machine.  But this could also drag out a day or two.

During this time, everyone on the team is pretty busy.  You have chunks of time left to your own devices, instructed to study the architecture and design documents and also to look around the codebase.  These documents paint impressive architectural pictures of layer cakes, distributed clusters, tiers, and CQRS paradigms.  You secretly wonder if they’re accurate.  (Spoiler: they aren’t.)

You’d get going on real work sooner, but the architect needs to walk you through the relevant design patterns.  This is necessary so you’ll know what goes where without making a serious blunder. Of course, the architect is really busy since all problems bubble up to her.  So you wait.

Continue reading Using NDepend To Get Going Quickly on C# Projects

What the Singleton Pattern Costs You

Do you use the singleton pattern?  If not, I’m assuming that you either don’t know what it is or that you deliberately avoid it.  If you do use it, you’re probably sick of people judging you for it.  The pattern is something of a lightning rod in the world of object-oriented programming.

You can always use Stack Overflow as a litmus test for programming controversy.  Someone asked “what was so bad” about singletons, and voting, responding, and commenting went off the charts.  Most of those responses fell into the negative category.  Outside of Stack Overflow, people call singletons evil, pathological liars, and anti-patterns.  People really seem to hate this design pattern — at least some of them do, anyway.

NDepend takes a stance on the singleton pattern as well, both in its rules and on the blog.  Specifically, it encourages you to avoid it.

But I’m not going to take a stance today, exactly.  I understand the cathartic temptation to call something evil in the world of programming.  If some (anti) pattern, framework, or language approach has caused you monumental pain in the past, you come to view it as the tool of the devil.  I’ve experienced this and turned it into a blog post, myself.

Instead of going that route here, however, I’m going to speak instead about what it can cost you when you decide to use the singleton pattern — what it can cost the future you, that is.  I have a consulting practice assessing third-party codebases, so I’ve learned to view patterns and practices through a dispassionate lens.  Everything is just trade-offs.

Continue reading What the Singleton Pattern Costs You

The Role of Static Analysis in Testing

“What do you do?”

In the United States, people ask this almost immediately upon meeting one another for the first time.  These days, I answer the question by saying that I do IT management consulting.  That always feels kind of weird rolling off the tongue, but it accurately describes how I’ve earned a living.

If you’re wondering what this means, basically I advise leadership in IT organizations.  I help managers, directors, and executives better understand how to manage and relate to the software developers in their groups.  So you might (but hopefully won’t) hear me say things like, “You should stop giving out pay raises on the basis of who commits the most lines of code.”

In this line of work, I get some interesting questions.  Often, these questions orient around how to do more with less.  “How can we keep the business happy when we’re understaffed?”  “What do we do to get away from this tech debt?”  “How should we prioritize our work?”  That sort of thing.

Sometimes, they get specific.  And weird.  “If we do this dependency injection thing, do we really need to deploy as often?”  Or “If we implement static analysis, do we still need to do QA?”

I’d like to focus on the latter question today — but not because it’s a particularly good or thought-provoking one.  People want to do more with less, which I get. But while that particular question is a bit of a non sequitur, it does raise an interesting discussion topic: what is the role of static analysis in testing?

Static Analysis in Testing: An Improbable (But Real) Relationship

If you examine it on the surface, you won’t notice much overlap between testing and static analysis.  Static analysis involves analyzing code without executing it, whereas QA involves executing the code without analyzing it (among other things).

A more generous interpretation, however, starts to show a relationship.  For instance, one could argue that both activities relate deeply to code quality.  Static analysis speaks to properties of the code and can give you early warnings about potential problems.  QA takes a black box approach to examining the code’s behavior, but it can confirm the problems about which you’ve received warnings.
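To illustrate the “early warnings” side, here’s a hedged sketch of what such a warning rule can look like in NDepend’s CQLinq (the thresholds are arbitrary choices of mine for illustration):

```csharp
// CQLinq sketch: flag methods that are complex yet poorly covered by tests.
warnif count > 0
from m in JustMyCode.Methods
where m.CyclomaticComplexity > 15 && m.PercentageCoverage < 50
select new { m, m.CyclomaticComplexity, m.PercentageCoverage }
```

A rule like this doesn’t replace QA, but it points the testers at where the risk probably lives.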

But let’s dive even a bit deeper than that.  The fact that they have some purview overlap doesn’t speak to strategy.  I’d like to talk about how you can leverage static analysis as part of your testing strategy — directly using static analysis in testing.

Continue reading The Role of Static Analysis in Testing

Our experience with using third-party libraries

NDepend is a tool that helps .NET developers write beautiful code. The project was started in April 2004. It is now used by more than 6,000 companies worldwide.

In more than a decade, many decisions were made, each with important consequences for how the code base evolved, sometimes good consequences, sometimes less good ones. Relentless dog-fooding (i.e., using NDepend to analyze and improve the NDepend code base) helped us a lot to obtain more maintainable code and fewer bugs, and to improve the tool’s usability and features.

When it comes to working on and maintaining a large code base for several years, some of the most important decisions relate to relying (or not) on third-party libraries. Choosing whether or not to rely on a library is a double-edged decision that can, over time, bring a lot of value or cause a lot of friction. Ultimately, users won’t distinguish between your bugs and third-party libraries’ bugs; their problems become your problems. Consider this:

  • Sometimes the team needs just a fraction of a library, and it may be more cost-effective, and more maintainable over time, to develop your own.
  • Sometimes the licensing of a free or commercial library will prevent you from achieving your goals.
  • Sometimes the library looks bright and shiny but becomes an abandoned project a few months later, and precious resources will have to be spent maintaining others’ code or migrating toward a trendy new library.
  • Sometimes the library’s code and/or authors are so fascinating that you’ll be proud to contribute and be part of it.

Of course we all hope for the last case, and we had the chance to experience this a few times for real. Here are some libraries we’ve had great success with:

Mono.Cecil (OSS)

Mono.Cecil is an open source project developed by Jean-Baptiste Evain that can read and write data nested in .NET assemblies, including compiled IL code. NDepend has relied on Cecil for almost a decade to read IL code and other data. Both the library and the support provided are outstanding. The performance of the library is near optimal, and all the bugs we reported were fixed in a matter of days or even hours. For our usage, the library is actually close to bug-free. If only all libraries had this level of excellence…
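For readers who haven’t used it, reading IL with Cecil takes only a few lines. Here’s a minimal sketch (my own illustration, not NDepend’s actual code; the assembly path is a placeholder):

```csharp
using System;
using Mono.Cecil;

// Load an assembly and report the IL instruction count of every method body.
var assembly = AssemblyDefinition.ReadAssembly(@"C:\libs\MyLib.dll");
foreach (var type in assembly.MainModule.Types)
{
    foreach (var method in type.Methods)
    {
        if (!method.HasBody) continue;
        Console.WriteLine($"{type.FullName}.{method.Name}: " +
                          $"{method.Body.Instructions.Count} IL instructions");
    }
}
```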

DevExpress WinForm (Commercial)

NDepend has also relied on DevExpress WinForm for almost a decade to improve the UI look and feel. NDepend is a Visual Studio extension, and DevExpress WinForm makes for smooth visual integration with Visual Studio. Concretely, thanks to this library, we achieved the exact same Visual Studio theming and skinning, docking controls a la Visual Studio, menus, bars, and special controls like the BreadCrumb control to explore directories. We have never been disappointed with DevExpress WinForm. The bugs we reported were fixed quickly, it supports high-DPI ratios, and it is rock solid in production.

Microsoft Automatic Graph Layout MSAGL (OSS)

NDepend has relied on MSAGL for several years to draw all sorts of graphs of dependencies between code elements, including Call Graphs, Class Inheritance Graphs, Coupling Graphs, and Path and Cycle Graphs. This library used to be commercial but is nowadays OSS. Here also, the bugs we reported were fixed quickly; it supports high-DPI ratios and is perfectly stable in production.

NRefactory (OSS)

NDepend has had a C# Code Query LINQ editor since 2012, a few years before Roslyn became RTM. We wanted to offer users a great editing experience, with code completion and documentation tooltips on mouse hover. At that time, NRefactory was the best choice, and it has proven stable in production over the years. Nowadays Roslyn would certainly be a better choice, but given our initial investment, and since NRefactory still does the job well, we haven’t felt the need (yet) to move to Roslyn.

Here are a few things we prefer to keep in-house:

Licensing

While there are libraries for licensing, this is a vital, sensitive topic that requires a lot of flexibility over time, and we preferred not to delegate it. This came at the cost of plenty of development and support, and a significant level of acquired expertise. Even taking into account that these efforts could have been spent on product features, we still don’t regret our choice.

The licensing layer is a cornerstone of our relationship with our user community, and it cannot suffer any compromise. As a side remark, I have observed several times that the cost of developing a solid licensing layer postponed a promising project’s transition to commercial status for a while.

Persistence

Like most applications, NDepend persists and restores a lot of data, especially the large amount of data in analysis results. We could have used relational or object databases, but for a UI tool embedded in Visual Studio, the worst thing would be to slow down the UI and make the user wait. We decided that only optimal performance is acceptable for our users, and optimal performance in persistence means using custom streams of bytes. The consequence of this decision is less flexibility: each time our data schema evolves, we need to put in extra effort to maintain backward compatibility.
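Here is a hedged sketch of the raw-byte-stream idea (not NDepend’s actual format; the type and fields are hypothetical):

```csharp
using System.IO;

// Fields are written and read in a fixed order; the version number written
// up front is what makes later schema evolutions manageable.
public sealed class AnalysisSnapshot
{
    public int MethodCount;
    public int LinesOfCode;

    public void Save(Stream stream)
    {
        using var writer = new BinaryWriter(stream);
        writer.Write(1);             // schema version
        writer.Write(MethodCount);
        writer.Write(LinesOfCode);
    }

    public static AnalysisSnapshot Load(Stream stream)
    {
        using var reader = new BinaryReader(stream);
        var version = reader.ReadInt32();  // branch on this as the schema evolves
        return new AnalysisSnapshot
        {
            MethodCount = reader.ReadInt32(),
            LinesOfCode = reader.ReadInt32()
        };
    }
}
```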

I should underline that most of the time it is not a good idea to develop a custom persistence layer, because of the amount of work and expertise required. But taking into account our particular needs and goals, I think we made the right decision.

Production Logs

I explained our production log system here. We consider it an essential component in making NDepend a super-stable product. Here also, we could have used a third-party library. We capitalize on our own logging system because, year after year, we customized it with a plethora of production information that was required to help fix our very own problems. We kept the system lightweight and flexible, and it still helps us improve the overall stability and correctness of our products.

Dependency Matrix and Treemap UI Components

These UI components were developed years ago and are still up to date. Both then and now, I believe there is no good third-party alternative that meets all our requirements in terms of layout and performance. A few times, we received offers to buy those components, but we are not a component provider and have no plans to become one.

In this post I unveiled a few core choices we made over the years. We hope this information will be useful for other teams.

How Has Static Code Analysis Changed Through the Years?

Years ago, I found myself staring at about 10 lines of source code.  This code had me stymied, and I could feel the onset of a headache as I read and re-read it.  What did this brain-bending code do, you might wonder?  It sorted elements in an array.

Now you probably assume that I was learning to code at the time.  Interestingly, no.  I had worked gainfully as a software developer for years and was also completing a master’s degree in computer science.  In fact, the pursuit of this degree had brought me to this moment of frustration, rubbing my temples and staring tiredly at a simple bubble sort in C.

Neither inexperience nor the difficulty of the code had brought me to that point.  Rather, I struggled to formally prove the correctness of this tiny program, in the mathematical sense.  I had enrolled in a class called “Formal Methods of Software Development” that taught the math underpinning our lives as programmers.

This innocuous, simple bubble sort had seven execution paths.  Here was number five, from a piece of homework I kept in my digital files.

Code analysis ranges from the academic to the pragmatic.

Hopefully I now seem less like an incompetent programmer and more like a student grappling with some weighty concepts.  But why should a simple bubble sort prove so difficult?  Well, the short answer is that actually proving things about programs with any complexity is really, surprisingly hard.  The longer answer lies in the history of static code analysis, so let’s take a look at that.

Continue reading How Has Static Code Analysis Changed Through the Years?