
Separation of Concerns, Explained

Software development is a very young field, particularly when you compare it to, say, medicine or law. Despite this, there's no shortage of pearls of wisdom that have accumulated over the decades that preceded us.

One interesting phenomenon I've observed in myself over the years—and I'm sure there's a name for it—is that some of these sayings sound like they must be right, even if I don't really understand them the first time I hear them. For instance, in my post about the SOLID principles, I mentioned how the SRP's definition—"each class should have just one reason to change"—just ticks the right boxes for me in a way I can't quite pinpoint.

Unfortunately, just hearing a phrase and acknowledging that it kind of sounds right doesn’t do much to really make you understand the topic, right?

Then why do so many in our industry act like that’s the case? I’ve lost count of how many times I’ve seen experienced developers toss around catchphrases like this as if they’re able to automatically inject the necessary information into beginners’ heads.

In this spirit, I’ve decided to demystify one of these catchphrases that happens to be one of my favorites: “Separation of Concerns.” What does that mean and why should you care? That’s what today’s post is all about.

First of All: What Are Concerns?

Before we explain why concerns are best kept separated, we should take a step back and understand what "concern" even means in the context of software development.

The current Wikipedia definition says this:

In computer science, a concern is a particular set of information that has an effect on the code of a computer program.

Frankly, I think that’s quite vague and not particularly useful. So instead, let’s try to come up with some examples. Think about a boring line-of-business application, such as a payroll application. What are its concerns?

  • Interaction with the user.
  • Generation of charts, graphs, and reports of different kinds.
  • Calculation of employees' salaries, benefits, severance packages, and so on.
  • Persistence of all the relevant data into some storage mechanism.

In short, each of the areas in which an application does something is a concern.

Concerns in the Software World: Why Keep 'Em Apart?

Suppose you and your team successfully release an application. You’re getting great feedback and all is nice in the world.

Then, like they always do, a requirement for a new feature comes in. And for the sake of the argument, this is a feature you absolutely have to do. You can’t refuse it (let’s say the competitor’s product already has it).

What would the potential risks be when you add this feature? We could cite a lot of them, but in the end it really boils down to two things. First, the feature could be very hard to write because you'd have to change a lot of code in a lot of different places. Second, you run the risk of breaking current features that paying customers already depend on.

Keeping your concerns separated will decrease the above risks. If all the code related to a certain concern is kept together (for example, in the same layer) it becomes easier to change it. You don’t have to make a myriad of changes scattered throughout the code base. You don’t have to “look for” where a certain task is implemented since the code is organized according to its concerns.

Finally, you don’t risk breaking code unrelated to what you’ve implemented since this other code doesn’t even reside in the same place in the application.

Let's get back to our boring payroll example. Think about the concerns we've identified for it. Would it make sense for a request to support Oracle Database in addition to the current PostgreSQL to cause the app to miscalculate salaries? Or for a change in the location of a GUI element to make a SQL query stop working? By keeping each concern as separated as possible, we can prevent those things from happening.

Let’s now see an example of what it looks like when concerns are not separated.

Desperation of Concerns—A Quick Example

Let's write a toy app based on Roy Osherove's String Calculator Kata. It's going to be a very simple WinForms app with just one text field and one button. The user inputs integers separated by commas and then clicks the button. The application then calculates and displays the sum of the entered numbers. Only non-negative integers are allowed, though. If the input contains anything that isn't a non-negative integer, the sum is aborted and an error message is displayed along with the list of offending entries.

The following images depict a successful and an unsuccessful sum, respectively:

It isn’t that hard to write code for the application you can see above. The following listing shows the code for the “Add” button:
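A minimal sketch of what that handler might look like, assuming a text box named textBoxNumbers and message boxes for output (the control name, the messages, and the validation details are assumptions for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Windows.Forms;

public partial class CalculatorForm : Form
{
    // UI, input validation, and calculation logic all crammed into one click handler.
    private void buttonAdd_Click(object sender, EventArgs e)
    {
        var tokens = textBoxNumbers.Text.Split(',').Select(t => t.Trim()).ToList();
        var invalidTokens = new List<string>();
        var sum = 0;

        foreach (var token in tokens)
        {
            if (int.TryParse(token, out var number) && number >= 0)
                sum += number;
            else
                invalidTokens.Add(token);
        }

        if (invalidTokens.Any())
        {
            // Error handling and presentation sit right next to the calculation.
            MessageBox.Show("Invalid input: " + string.Join(", ", invalidTokens), "Error");
            return;
        }

        MessageBox.Show("Sum: " + sum, "Result");
    }
}
```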

As you can see, the developer hasn't kept concerns separated! We have error handling, business logic, and UI code all in one big mess. How can we make things better? And why should we? It's not hard to imagine a new requirement coming in that asks us to deploy the application as a console app. How could we do that without duplicating a lot of code? It would be very easy if concerns were separated. If all the string calculator logic were contained in an isolated location, it'd be a matter of adding a new project—a console app—to our solution and writing a few lines of code.

And how could we make this example a little bit better? First, I’ll add a new project to the solution. This new project will be of the type “Class Library.” Then, I’ll add a new class to this project, which I’ll name “StringCalculator.” The code in the class should be as follows:
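Here's a sketch of what that class might contain, assuming the same parsing rules as the form (the exact member names and error-handling strategy are assumptions):

```csharp
using System;
using System.Linq;

// The calculator logic, free of any UI or error-display code.
public class StringCalculator
{
    public int Add(string input)
    {
        if (string.IsNullOrWhiteSpace(input))
            return 0;

        var tokens = input.Split(',').Select(t => t.Trim()).ToList();

        var invalidTokens = tokens
            .Where(t => !int.TryParse(t, out var n) || n < 0)
            .ToList();

        if (invalidTokens.Any())
            throw new ArgumentException("Invalid input: " + string.Join(", ", invalidTokens));

        return tokens.Sum(int.Parse);
    }
}
```

Because this class lives in a class library with no reference to WinForms, both the form and a future console front end can call Add() and decide for themselves how to present the result or the error.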

All you need to do now is to edit the form’s code in order to use the new class. I’ll leave that as an exercise for the reader.

Write Software As If Your Users Were Blind. Teach As If You’re a Beginner.

Recently, I came up with this sentence: "Write software as if your users were blind." I've been using it as a mental framework for thinking about separation of concerns, particularly regarding decoupling logic from presentation. When I spot code in the application's domain layer mentioning visual aspects such as color names, I ask myself, "Would this still make sense if the people who use this app were blind?" (And some of them may well be.)

If the answer is "no," then I know I must move that code to the presentation layer, keeping in the domain layer only the code related to the concepts themselves.

When teaching someone, you should always be aware of the curse of knowledge. Try to have empathy with your students. Put yourself in their shoes and try to remember when you were a beginner yourself. Don’t just toss catchphrases around; take the time to turn them into valuable lessons that will inform and empower the developers of tomorrow.

NDepend and .NET Fx v4.7.2: an extension method collision and how to solve it easily

In October 2017, I wrote about the potential collision problem with extension methods. At that time, the .NET Framework 4.7.1 had just been released with a new System.Linq Append() extension method that collides with our own NDepend.API Append() extension method, which has the same signature.

The problem was solved easily because just one default rule consumed our Append() extension method. We only had to refactor that rule to call the method as a static method instead of an extension method: ExtensionMethodsEnumerable.Append(...).
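To illustrate the general difference between the two call forms (the snippet below uses the BCL's own Enumerable.Append() rather than NDepend.API's, purely for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;

class AppendDemo
{
    static void Main()
    {
        IEnumerable<int> numbers = new[] { 1, 2, 3 };

        // Extension method call: the compiler has to choose between identically-signed
        // candidates, which is what broke once the .NET Fx added its own Append().
        var extended = numbers.Append(4);

        // Static method call: names the defining class explicitly, so there is no ambiguity.
        var alsoExtended = Enumerable.Append(numbers, 4);
    }
}
```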

 


Unfortunately, with the recent release of the .NET Framework 4.7.2, the same problem has happened again, this time with the new ToHashSet() extension method, which also has the same signature as ours.

This time, 22 default code rules rely on our ToHashSet() extension method. This method is used widely because it is often the key to significant performance improvements. But it means that after installing the .NET Fx v4.7.2, those 22 default rules will break.

This time the problem is not solved as easily by calling our ExtensionMethodsSet.ToHashSet<TSource>(this IEnumerable<TSource>) extension method as a static method, because in the source code of most of these 22 rules, changing the extension method call into a static method call requires a few brain cycles. Moreover, it makes the rule source code less readable: for example, the first form below needs to be transformed into the second.
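A hypothetical fragment in the spirit of a CQLinq query (the query itself is made up; the actual default rules differ) showing the two forms:

```csharp
// 1) Extension method call: concise and readable.
let bigTypes = JustMyCode.Types.Where(t => t.NbLinesOfCode > 200).ToHashSet()

// 2) Static method call: unambiguous, but noisier and harder to scan.
let bigTypes2 = ExtensionMethodsSet.ToHashSet(JustMyCode.Types.Where(t => t.NbLinesOfCode > 200))
```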

We wanted a straightforward and clean way for NDepend users to solve this issue in all their default or custom code rules. The solution is the new extension method ToHashSetEx().

Solving the issue on an existing NDepend deployment is now as simple as replacing .ToHashSet() with .ToHashSetEx() in all the textual files that contain the user's code rules and code queries (the files with the extensions .ndproj and .ndrules).

We just released NDepend v2018.1.1 with this new extension method, ExtensionMethodsSet.ToHashSetEx<TSource>(this IEnumerable<TSource>). Of course, all default rules and generated queries now rely on ToHashSetEx(), and a helpful error message is now shown to the user in this situation.

We hesitated between ToHashSetEx() and ToHashSet2() but we are confident that this problem won’t scale (more explanation on suffixing a class or method name with Ex here).

 


Actually, we could have detected this particular problem earlier, in October 2017, because Microsoft had announced that the .NET Fx would ultimately support .NET Standard 2.0, and .NET Standard 2.0 already included this ToHashSet() extension method. So this time we analyzed both C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\netstandard.dll and NDepend.API.dll to double-check, with a code query, that there is no remaining risk of extension method collision.

The query found both the Append() and ToHashSet() collisions again, and since NDepend.API is not concerned with queryables, there is no further risk of collision.

 

 

A Look at .NET Core 2.1

The .NET Framework has certainly been through many changes since Microsoft introduced it in 2002, and .NET Core is arguably the biggest one. First, .NET Core is open source. Also, you can now build .NET applications that run on Windows, Linux, and Mac. Developers can choose which packages and frameworks to include in their applications, in contrast to the .NET Framework's all-or-nothing approach. .NET Core fundamentally changes how .NET developers write code, and .NET Core 2.1 adds to the .NET revolution happening right now.

Before we review what .NET Core 2.1 brings to the table, it's important to mention .NET Standard as well. .NET Standard defines a common set of APIs that every .NET implementation is guaranteed to have. .NET Core has to implement the .NET Standard APIs, so we'll call it out where something in .NET Core 2.1 exists because .NET Standard changed.

Faster Builds

Writing software is always easier when you can quickly execute code in order to test it and get fast feedback. Microsoft understands this and certainly has heard that .NET Core’s build times could be improved. That is exactly what Microsoft has done.

A key feature of .NET Core 2.1 is the significant performance improvement when building code. Incremental builds in particular have gotten much faster, leading to a huge boost in build performance from .NET Core 2.0 to 2.1.

[Chart: incremental build times improving from .NET Core 2.0 to 2.1]

This performance increase helps interactive development as well as automated builds with tools such as MSBuild. Large projects especially should see a dramatic improvement in build speed.

Impactful New Features

Even though .NET Core 2.1 is an incremental update, it packs many good features that make it worthwhile to try out.

View Array Data with Span<T>

A big piece of .NET Core 2.1 is the introduction of the new Span<T> type. This type allows you to view pieces of memory and use them without copying what's in that memory. How do you pass the first 1,000 elements of a 10,000-element array to a method? If you're using 2.0, you have to copy those elements into a new array and then pass the new array in. As arrays get larger, this operation becomes a major performance hit.

The Span<T> type allows you to view and access a certain piece of an array (and other blocks of memory) without copying it. Think of it as a drive-thru window. Instead of going into the entire “store” to access the array elements required, a method can simply drive past the “window” and receive what it needs to do its job.

A really useful feature of the Span<T> type is the slice method. Slice is the way you can create that “window” into an array. Let’s look at an example.
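Here's a minimal sketch of that idea (the array contents and sizes are just for illustration):

```csharp
using System;
using System.Linq;

class SpanDemo
{
    static void Main()
    {
        int[] numbers = Enumerable.Range(0, 10_000).ToArray();

        // A span over the whole array; no copying happens here.
        Span<int> all = numbers.AsSpan();

        // Slice(start, length): a 1,000-element "window" starting at index 0.
        Span<int> firstThousand = all.Slice(0, 1_000);

        // The slice is a view over the original memory, so writes show up in the array.
        firstThousand[0] = 42;
        Console.WriteLine(numbers[0]);           // prints 42
        Console.WriteLine(firstThousand.Length); // prints 1000
    }
}
```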

This is a simple example that highlights the basic uses of Span<T>. First, you create a span from an existing array. You then slice that span by telling the Slice method where in the array to start and how far to go. From there, you can use that sliced portion of the array as you see fit without any performance hit.

Sockets Performance

Sockets are the gateways into your server. They serve as the foundation for incoming and outgoing network communication between computers. Previous versions of .NET Core used native code (such as C) in order to implement sockets. Starting with .NET Core 2.1, sockets are created using a new managed (meaning built using C# itself) class.

There is a new class in town called SocketsHttpHandler. It handles HTTP for HttpClient on top of managed .NET sockets rather than the native, per-platform libraries used before. This has several benefits, like the following (see the sketch after this list):

  • Better performance
  • No more reliance on native operating system libraries for socket functionality (requiring a different implementation for each operating system)
  • More consistent behavior across platforms
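A minimal sketch of constructing the handler explicitly for HttpClient (the URL and the lifetime value are arbitrary; on .NET Core 2.1, HttpClient uses SocketsHttpHandler by default even without this):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class SocketsHandlerDemo
{
    static async Task Main()
    {
        // Construct the managed handler explicitly to tweak its settings.
        var handler = new SocketsHttpHandler
        {
            PooledConnectionLifetime = TimeSpan.FromMinutes(2)
        };

        using (var client = new HttpClient(handler))
        {
            string body = await client.GetStringAsync("https://example.com");
            Console.WriteLine($"Downloaded {body.Length} characters.");
        }
    }
}
```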

Self-Contained Applications

A really interesting and useful addition to .NET Core 2.1 is the self-contained publishing of applications. You can now choose the option of a self-contained application when you package an application to prepare it for deployment (called “publishing”). A self-contained application has the .NET Core libraries and runtime included in the package. This means it can be isolated from other applications when it is run. You can have two applications running different versions of .NET Core on the same machine because the necessary version of the runtime is packaged with the application.
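For instance, a self-contained publish from the command line might look like this (the runtime identifier win-x64 is just an example; on .NET Core 2.1, passing -r already defaults the publish to self-contained, so the flag is explicit here for clarity):

```
dotnet publish -c Release -r win-x64 --self-contained true
```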

This does make the final executable quite large and has some other drawbacks. However, in the right situation, self-contained applications can be quite useful.

New Security Features

Let's face it, you'll rarely read a post written by me that doesn't touch on security. My security geekdom can prove useful here. .NET Core 2.1 has changed and added some important security features to stay compliant with the newly released version of .NET Standard.

CryptographicOperations Class

The new CryptographicOperations class gives developers two powerful tools in order to increase the security of their applications: FixedTimeEquals and ZeroMemory.

FixedTimeEquals helps prevent a subtle side-channel attack on login screens. An attacker may try to brute-force your login page or guess a username and password. Some applications provide a subtle but dangerous clue that lets attackers know how close they are to valid login information: the attacker keeps entering credentials, watching for a response that takes a bit longer. That slower response can be a clue that the username is correct but the password is wrong. Attackers use these timing attacks to break in.

FixedTimeEquals ensures that any two inputs of the same length will always return in the same amount of time. Use this when doing any cryptographic verification, such as your login functionality, to help prevent timing attacks.

ZeroMemory is a memory-clearing routine that cannot be optimized away by the compiler. This may seem strange, but the compiler will sometimes "optimize" away code that clears memory which is never read again. That's better from a pure speed standpoint, but it could leave sensitive secrets, such as cryptographic keys, sitting in memory without you knowing it.
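A minimal sketch of both methods (the helper names and the idea of comparing precomputed byte arrays are illustrative, not a complete login implementation):

```csharp
using System;
using System.Security.Cryptography;

static class SecretHelpers
{
    // Constant-time comparison: the time taken depends only on the length,
    // not on where the first differing byte is.
    public static bool TagsMatch(byte[] expected, byte[] presented) =>
        CryptographicOperations.FixedTimeEquals(expected, presented);

    public static void UseAndForgetKey(byte[] key)
    {
        try
        {
            // ... use the key material here ...
        }
        finally
        {
            // Clear the secret; this call is guaranteed not to be optimized away.
            CryptographicOperations.ZeroMemory(key);
        }
    }
}
```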

Other Crypto Fun

Some other cool secure features were added to .NET Core 2.1. First, elliptic-curve Diffie-Hellman (ECDH) is now available on .NET Core. It’s okay if you don’t know what that is. Just know that it is a really good public-key cryptographic algorithm that has great performance and is a great choice for mobile and IoT applications.

Some other improvements include expanding existing cryptographic APIs to work with the new Span<T> type, leading to a 15% performance increase for some algorithms. .NET Core 2.1 also has better overall support for the SHA-2 family of hash algorithms.

How to Get It

If you want to play with .NET Core 2.1—frankly, I can't wait myself—here's how to get it. Download the SDK and the runtime so you can build applications using the command line. If you want to use Visual Studio to build .NET Core 2.1 applications, it has to be Visual Studio 2017 15.7 Preview 1. You should also check out the release notes for Preview 1 and Preview 2.

.NET Core 2.1 is incremental in number but big on delivery. The new Span<T> type has driven major performance improvements for the core libraries and will do the same for your application. New security features will help you write more secure code. And new tech is fun. So have fun and try out .NET Core 2.1.

Null Is Evil. What’s the Best Alternative? Null.

“Null is evil.” If you’ve been a software developer for any reasonable length of time, I bet you’ve come across that statement several times.

I’d say it’s also very likely that you agree with the sentiment, i.e., that the null reference is a feature our programming languages would be better off without. Even its creator has expressed regret over the null reference, famously calling it his “billion-dollar mistake.”

Bashing poor old null tends to get old, so authors don’t do just that. They also offer alternatives. And while I do believe that many of the presented alternatives have their merits, I also think we may have overlooked the best solution for the whole thing.

In this post, we're going to examine some of the common alternatives to returning null before making the argument that the best alternative is null itself. Let's get started!