
Separation of Concerns, Explained

Software development is a very young field, particularly when you compare it to, say, medicine or law. Despite this, there's no shortage of pearls of wisdom accumulated over the decades that preceded us.

One interesting phenomenon I’ve observed in myself over the years—and I’m sure there’s a name for it—is that some of these sayings sound like they must be right, even if I don’t really understand them the first time I hear them. For instance, in my post about the SOLID principles, I mentioned how the SRP’s definition—“each class should have just one reason to change”—just ticks the right boxes for me in some way that I can’t even pinpoint.

Unfortunately, just hearing a phrase and acknowledging that it kind of sounds right doesn’t do much to really make you understand the topic, right?

Then why do so many in our industry act like that’s the case? I’ve lost count of how many times I’ve seen experienced developers toss around catchphrases like this as if they’re able to automatically inject the necessary information into beginners’ heads.

In this spirit, I’ve decided to demystify one of these catchphrases that happens to be one of my favorites: “Separation of Concerns.” What does that mean and why should you care? That’s what today’s post is all about.

First of All: What Are Concerns?

Before we explain why concerns are best kept separate, we should take a step back and understand what “concern” even means in the context of software development.

The current Wikipedia definition says this:

In computer science, a concern is a particular set of information that has an effect on the code of a computer program.

Frankly, I think that’s quite vague and not particularly useful. So instead, let’s try to come up with some examples. Think about a boring line-of-business application, such as a payroll application. What are its concerns?

  • Interaction with the user.
  • Generation of charts, graphs, and reports of different kinds.
  • Calculation of employees’ salaries, benefits, severance packages, and so on.
  • Persistence of all the relevant data into some storage mechanism.

In short, each of the areas in which an application does something is a concern.

Concerns in the Software World: Why Keep ’Em Apart?

Suppose you and your team successfully release an application. You’re getting great feedback and all is nice in the world.

Then, like they always do, a requirement for a new feature comes in. And for the sake of the argument, this is a feature you absolutely have to do. You can’t refuse it (let’s say the competitor’s product already has it).

What would the potential risks be when you add this feature? We could cite a lot of them, but in the end it really boils down to two things. First, the feature could be very hard to write because you’d have to change a lot of code in a lot of different places. Second, you’d run the risk of breaking current features that paying customers already depend on.

Keeping your concerns separated will decrease the above risks. If all the code related to a certain concern is kept together (for example, in the same layer) it becomes easier to change it. You don’t have to make a myriad of changes scattered throughout the code base. You don’t have to “look for” where a certain task is implemented since the code is organized according to its concerns.

Finally, you don’t risk breaking code unrelated to what you’ve implemented since this other code doesn’t even reside in the same place in the application.

Let’s get back to our boring payroll example. Think about the concerns we’ve identified for it. Would it make sense that a request to support Oracle Database beyond the current PostgreSQL could cause the app to miscalculate salaries? Or could a change in the location of a GUI element make a SQL query stop working? By keeping each concern as separated as possible, we can prevent those things from happening.

Let’s now see an example of what it looks like when concerns are not separated.

Desperation of Concerns—A Quick Example

Let’s write a toy app based on Roy Osherove’s String Calculator Kata. It’s going to be a very simple WinForms app with just one text field and one button. The user should input integer numbers separated by a comma and then click the button. The application will then calculate and display the sum of the inputted numbers. Only non-negative integers are allowed, though. If the input string contains any character that is not a non-negative integer, then the sum is aborted and an error message should be displayed along with the list of offending characters.

The following images depict a successful and an unsuccessful sum, respectively:

It isn’t that hard to write code for the application you can see above. The following listing shows the code for the “Add” button:
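(The original listing isn’t reproduced here. The sketch below is a plausible reconstruction of such an all-in-one handler living in the form’s code-behind; the names btnAdd_Click, txtNumbers, and lblResult are assumptions, not the post’s actual code.)

```csharp
using System;
using System.Collections.Generic;
using System.Windows.Forms;

public partial class MainForm : Form
{
    // Everything happens right here in the UI event handler:
    // input parsing, validation, business logic, and error display.
    private void btnAdd_Click(object sender, EventArgs e)
    {
        var offending = new List<string>();
        var sum = 0;

        foreach (var token in txtNumbers.Text.Split(','))
        {
            var trimmed = token.Trim();
            if (int.TryParse(trimmed, out var number) && number >= 0)
                sum += number;          // business logic...
            else
                offending.Add(trimmed); // ...and validation, intermixed
        }

        if (offending.Count > 0)
            MessageBox.Show("Invalid input: " + string.Join(", ", offending));
        else
            lblResult.Text = sum.ToString();
    }
}
```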

As you can see, the developer hasn’t kept concerns separated! We have error handling, business logic, and UI code all in one big mess. How can we make things better? And why should we? It’s not hard to imagine a new requirement coming in that asks us to deploy the application as a console app. How could we do that without duplicating a lot of code? It would be very easy if concerns were separated. If all the string calculator logic were contained in an isolated location, it’d be a matter of adding a new project—a console app—to our solution and writing a few lines of code.

And how could we make this example a little bit better? First, I’ll add a new project to the solution. This new project will be of the type “Class Library.” Then, I’ll add a new class to this project, which I’ll name “StringCalculator.” The code in the class should be as follows:
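(Again, the original listing isn’t shown; the sketch below is a reconstruction consistent with the requirements described earlier, and its exact API is an assumption.)

```csharp
using System;
using System.Linq;

public class StringCalculator
{
    // Returns the sum of the comma-separated non-negative integers in 'input'.
    // Throws if any token isn't a non-negative integer, reporting the offenders.
    // Note: no UI code here; displaying results or errors is the form's concern.
    public int Add(string input)
    {
        if (string.IsNullOrWhiteSpace(input))
            return 0;

        var tokens = input.Split(',').Select(t => t.Trim()).ToList();
        var offending = tokens
            .Where(t => !int.TryParse(t, out var n) || n < 0)
            .ToList();

        if (offending.Any())
            throw new ArgumentException(
                "Invalid input: " + string.Join(", ", offending));

        return tokens.Sum(t => int.Parse(t));
    }
}
```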

All you need to do now is to edit the form’s code in order to use the new class. I’ll leave that as an exercise for the reader.

Write Software As If Your Users Were Blind. Teach As If You’re a Beginner.

Recently, I came up with this sentence: “Write software as if your users were blind.” I’ve been using this phrase as a mental framework for thinking about separation of concerns, particularly regarding decoupling logic from presentation. When I spot some code in the application’s domain layer mentioning visual aspects such as color names, I ask myself, “Would this still make sense if the people using this app were blind?” (And some of them may well be, why not?)

If the answer is “no,” then I know I must move that code to the presentation layer, keeping in only the domain layer code related to the concepts themselves.

When teaching someone, you should always be aware of the curse of knowledge. Try to have empathy with your students. Put yourself in their shoes and try to remember when you were a beginner yourself. Don’t just toss catchphrases around; take the time to turn them into valuable lessons that will inform and empower the developers of tomorrow.

NDepend and .NET Fx v4.7.2: an extension method collision and how to solve it easily

In October 2017 I wrote about the potential collision problem with extension methods. At that time, .NET Framework 4.7.1 had just been released with the new Enumerable.Append() extension method, which collides with our own NDepend.API Append() extension method with the same signature.

The problem was solved easily because just one default rule consumed our Append() extension method; we just had to refactor this rule to call the method as a static method instead of an extension method: ExtensionMethodsEnumerable.Append(...)

Unfortunately, with the recent release of .NET Framework 4.7.2, the same problem has happened again, this time with the new Enumerable.ToHashSet() extension method.

This time, 22 default code rules rely on our ToHashSet() extension method. This method is used widely because it is often the cornerstone of significant performance improvements. But this means that after installing .NET Fx v4.7.2, those 22 default rules will break.

This time the problem is not solved so easily by calling our ExtensionMethodsSet.ToHashSet<TSource>(this IEnumerable<TSource>) extension method as a static method, because in most of these 22 rules’ source code, changing the extension method call into a static method call requires a few brain cycles. Moreover, it makes the rules’ source code less readable: the first form below needs to be transformed into the second:
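(The actual before/after snippets aren’t reproduced here. Illustratively, with someSequence standing in for an IEnumerable inside a rule, the change looks like this; the NDepend.Helpers namespace is an assumption:)

```csharp
using System.Collections.Generic;
using System.Linq;
using NDepend.Helpers; // assumed home of ExtensionMethodsSet in NDepend.API

class Example
{
    void Before(IEnumerable<string> someSequence)
    {
        // Extension method call: becomes ambiguous once .NET Fx 4.7.2
        // introduces its own Enumerable.ToHashSet() with the same signature.
        var hashset = someSequence.ToHashSet();
    }

    void After(IEnumerable<string> someSequence)
    {
        // Static method call: unambiguous, but it costs a few brain cycles
        // per rule and reads less naturally.
        var hashset = ExtensionMethodsSet.ToHashSet(someSequence);
    }
}
```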

We wanted a straightforward and clean way for NDepend users to solve this issue on all their default-or-custom code rules.  The solution is the new extension method ToHashSetEx().

Solving the issue on an existing NDepend deployment is now as simple as replacing .ToHashSet() with .ToHashSetEx() in all the textual files that contain user code rules and code queries (the files with the extensions .ndproj and .ndrules).

We just released NDepend v2018.1.1 with this new extension method ExtensionMethodsSet.ToHashSetEx<TSource>(this IEnumerable<TSource>). Of course, all default rules and generated queries now rely on ToHashSetEx(), and a smart error message is now shown to the user in such a situation:

We hesitated between ToHashSetEx() and ToHashSet2(), but we are confident that this problem won’t repeat itself at scale (more explanation on suffixing a class or method name with Ex here).

Actually, we could have detected this particular problem as early as October 2017, because Microsoft claimed that the .NET Fx would ultimately support .NET Standard 2.0, and .NET Standard 2.0 already exposed this ToHashSet() extension method. So this time we analyzed both C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\netstandard.dll and NDepend.API.dll to double-check with a code query that there is no remaining risk of extension method collision:
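(The code query isn’t reproduced here. As a rough stand-alone approximation of the same double-check, here’s a reflection-based sketch; it assumes .NET Core 2.1+ for Assembly.GetForwardedTypes(), and the paths are illustrative.)

```csharp
using System;
using System.Linq;
using System.Reflection;
using System.Runtime.CompilerServices;

class CollisionSketch
{
    static void Main()
    {
        // netstandard.dll only forwards types, so resolve the forwards first.
        // (GetForwardedTypes() can throw if a forward target can't be loaded.)
        var standardTypes = Assembly.LoadFrom(
                @"C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\netstandard.dll")
            .GetForwardedTypes();

        var apiTypes = Assembly.LoadFrom("NDepend.API.dll").GetExportedTypes();

        // An extension method is a public static method marked with [Extension].
        MethodInfo[] ExtensionMethods(Type[] types) => types
            .SelectMany(t => t.GetMethods(BindingFlags.Public | BindingFlags.Static))
            .Where(m => m.IsDefined(typeof(ExtensionAttribute), inherit: false))
            .ToArray();

        var standardExt = ExtensionMethods(standardTypes);
        var apiExt = ExtensionMethods(apiTypes);

        // Potential collisions: same name, same extended ("this") parameter type.
        var collisions =
            from s in standardExt
            from a in apiExt
            where s.Name == a.Name
               && s.GetParameters()[0].ParameterType.Name ==
                  a.GetParameters()[0].ParameterType.Name
            select s.Name;

        foreach (var name in collisions.Distinct())
            Console.WriteLine(name);
    }
}
```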

The query finds both the Append() and ToHashSet() collisions, and since NDepend.API is not concerned with queryables, there is no further risk of collision:


A Look at .NET Core 2.1

The .NET Framework has certainly been through many changes since Microsoft introduced it in 2002. Arguably, .NET Core is the biggest one. First, .NET Core is open source. Also, you can now build .NET applications that run on Windows, Linux, and Mac. Developers can choose which packages and frameworks to include in their applications, a departure from the .NET Framework’s all-or-nothing approach. .NET Core fundamentally changes how .NET developers write code, and .NET Core 2.1 adds to the .NET revolution happening right now.

Before we review what .NET Core 2.1 brings to the table, it’s important to mention .NET Standard as well. .NET Standard provides a common set of APIs that each .NET implementation is guaranteed to have. .NET Core has to implement the .NET Standard APIs, so we’ll call out, where necessary, when something in .NET Core 2.1 exists because .NET Standard changed.

Faster Builds

Writing software is always easier when you can quickly execute code in order to test it and get fast feedback. Microsoft understands this and has certainly heard that .NET Core’s build times could be improved. Improving them is exactly what it has done.

A key feature of .NET Core 2.1 is the significant performance improvements when building code. Each incremental build of .NET Core 2.1 has gotten faster, leading to a huge boost in performance from .NET Core 2.0 to 2.1.

Incremental build performance in .NET Core 2.1

This performance increase helps interactive development as well as automated builds using tools such as MSBuild. Large projects especially should see a dramatic increase in the speed of building the application.

Impactful New Features

Even though .NET Core 2.1 is an incremental update, it packs many good features that make it worthwhile to try out.

View Array Data with Span<T>

A big piece of .NET Core 2.1 is the introduction of the new Span<T> type. This type allows you to view pieces of memory and use them without copying what’s in that memory. How do you pass the first 1,000 elements of a 10,000-element array to a method? If you’re using 2.0, you have to copy those elements into a new array and then pass the new array into the method. As arrays get larger, this operation becomes a major hit on performance.

The Span<T> type allows you to view and access a certain piece of an array (and other blocks of memory) without copying it. Think of it as a drive-thru window. Instead of going into the entire “store” to access the array elements required, a method can simply drive past the “window” and receive what it needs to do its job.

A really useful feature of the Span<T> type is the Slice method. Slice is how you create that “window” into an array. Let’s look at an example.
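(The original snippet isn’t reproduced here; below is a minimal stand-in showing the same idea.)

```csharp
using System;

class SpanDemo
{
    static void Main()
    {
        int[] numbers = new int[10_000];
        for (int i = 0; i < numbers.Length; i++)
            numbers[i] = i;

        // A span over the whole array: no copying involved.
        Span<int> all = numbers.AsSpan();

        // Slice(start, length): a 1,000-element "window" onto the array.
        Span<int> window = all.Slice(0, 1_000);

        window[0] = 42;                  // writes through to the original array
        Console.WriteLine(numbers[0]);   // prints 42
    }
}
```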

This is a simple example that highlights the basic uses of Span<T>. First, you can create a span from an existing array. You can then slice that span by telling the slice method where in the array to start and how far to go. Then you can use that sliced portion of the array as you see fit without any performance hits. You can check out this example here and here.

Sockets Performance

Sockets are the gateways into your server. They serve as the foundation for incoming and outgoing network communication between computers. Previous versions of .NET Core relied on native code to implement HTTP handling on top of sockets, with a different implementation per operating system. Starting with .NET Core 2.1, there’s a new managed class (meaning built using C# itself) for the job.

That new class is called SocketsHttpHandler. It implements HTTP on top of .NET’s own Socket type instead of native operating system libraries. This has several benefits, like the following:

  • Better performance
  • No more reliance on native operating system libraries for socket functionality (requiring a different implementation for each operating system)
  • More consistent behavior across platforms
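Here’s a minimal sketch of opting in to the new handler explicitly; the URL and connection-lifetime value are illustrative:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class SocketsHandlerDemo
{
    static async Task Main()
    {
        // SocketsHttpHandler is the managed handler; in .NET Core 2.1
        // it also backs HttpClient by default.
        var handler = new SocketsHttpHandler
        {
            // Recycle pooled connections periodically (value is illustrative).
            PooledConnectionLifetime = TimeSpan.FromMinutes(2)
        };

        using (var client = new HttpClient(handler))
        {
            // The URL is a placeholder; any HTTP endpoint works.
            string body = await client.GetStringAsync("https://example.com");
            Console.WriteLine($"Received {body.Length} characters.");
        }
    }
}
```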

Self-Contained Applications

A really interesting and useful addition to .NET Core 2.1 is the self-contained publishing of applications. You can now choose the option of a self-contained application when you package an application to prepare it for deployment (called “publishing”). A self-contained application has the .NET Core libraries and runtime included in the package. This means it can be isolated from other applications when it is run. You can have two applications running different versions of .NET Core on the same machine because the necessary version of the runtime is packaged with the application.
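For instance, producing a self-contained build for 64-bit Windows looks something like dotnet publish -c Release -r win-x64 --self-contained true, with the runtime identifier swapped per target platform.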

This does make the final executable quite large and has some other drawbacks. However, in the right situation, self-contained applications can be quite useful.

New Security Features

Let’s face it, you’ll rarely read a post written by me that doesn’t touch on security. My security geekdom can prove to be useful. .NET Core 2.1 has changed and added some important security features to remain compliant with a new version of .NET Standard just released.

CryptographicOperations Class

The new CryptographicOperations class gives developers two powerful tools in order to increase the security of their applications: FixedTimeEquals and ZeroMemory.

FixedTimeEquals helps to prevent a subtle side-channel attack on login screens. An attacker may try to brute force your login page or try to guess a username and password. Some applications provide a subtle but dangerous clue that allows attackers to know how close they are to the right login information. An attacker will continually enter login credentials, waiting for the response to take a bit longer. That can be a clue that the username is correct but the password is wrong. Attackers use timing attacks to break in.

FixedTimeEquals ensures that any two inputs of the same length will always return in the same amount of time. Use this when doing any cryptographic verification, such as your login functionality, to help prevent timing attacks.

ZeroMemory is a memory-clearing routine that cannot be optimized away by the compiler. This may seem strange, but sometimes the compiler will “optimize” code that clears memory without later reading it by eliminating the clearing code. That’s better for speed from a technical standpoint. However, it could lead to sensitive secrets, such as cryptographic keys, being left in memory without you knowing it.
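Here’s a minimal sketch of both methods in use; the TagIsValid helper and its HMAC-based check are illustrative, not an API from the release:

```csharp
using System;
using System.Security.Cryptography;

class ConstantTimeDemo
{
    // Verify an HMAC tag without leaking, via timing, how many bytes matched.
    static bool TagIsValid(byte[] key, byte[] message, byte[] expectedTag)
    {
        byte[] actualTag = null;
        try
        {
            using (var hmac = new HMACSHA256(key))
                actualTag = hmac.ComputeHash(message);

            // Runs in time dependent only on the inputs' length, not content.
            return CryptographicOperations.FixedTimeEquals(actualTag, expectedTag);
        }
        finally
        {
            // Wipe the computed tag; this call won't be optimized away.
            if (actualTag != null)
                CryptographicOperations.ZeroMemory(actualTag);
        }
    }
}
```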

Other Crypto Fun

Some other cool secure features were added to .NET Core 2.1. First, elliptic-curve Diffie-Hellman (ECDH) is now available on .NET Core. It’s okay if you don’t know what that is. Just know that it is a really good public-key cryptographic algorithm that has great performance and is a great choice for mobile and IoT applications.

Some other improvements include expanding existing cryptographic APIs to work with the new Span<T> type, leading to a 15% performance increase for some algorithms. .NET Core 2.1 also has better overall support for the SHA-2 family of hash algorithms.

How to Get It

If you want to play with .NET Core 2.1—frankly, I can’t wait myself—here’s how to get it. Download the SDK and the runtime so you can build applications using the command line. If you want to use Visual Studio to build .NET Core 2.1, it has to be Visual Studio 2017 15.7 Preview 1. You should also check out the release notes for Preview 1 and Preview 2.

.NET Core 2.1 is incremental in number but big on delivery. The new Span<T> type has driven major performance improvements for the core libraries and will do the same for your application. New security features will help you write more secure code. And new tech is fun. So have fun and try out .NET Core 2.1.

Null Is Evil. What’s the Best Alternative? Null.

“Null is evil.” If you’ve been a software developer for any reasonable length of time, I bet you’ve come across that statement several times.

I’d say it’s also very likely that you agree with the sentiment, i.e., that the null reference is a feature our programming languages would be better off without. Even its creator has expressed regret over the null reference, famously calling it his “billion-dollar mistake.”

Bashing poor old null tends to get old, so authors don’t do just that. They also offer alternatives. And while I do believe that many of the presented alternatives have their merits, I also think we may have overlooked the best solution for the whole thing.

In this post, we’re going to examine some of the common alternatives for returning null before making the argument that the best alternative is null itself. Let’s get started!

In Defense of the SOLID Principles

From posts that politely offer their criticisms to others that outright deem them “for dummies,” it seems that bashing the SOLID principles is all the rage nowadays.

The fact that SOLID is being criticized isn’t a bad thing. The problem is that I don’t think the arguments against it are really that good. There’s some valid criticism, but it seems that a large portion of it comes from some misunderstanding of the principles. Some people even read hidden agendas into them.

This post is meant to investigate some of the more common criticisms of the SOLID principles, offering my take on why I believe they’re not quite justified.

SOLID Principles: Some Background

In object-oriented design, the SOLID principles (or simply SOLID) are a group of five design principles meant to make code cleaner, more flexible, and easier to change. The principles were compiled by Robert C. Martin, although he didn’t invent them. In fact, these specific principles are a subset of many principles Martin has been promoting over the years.

The name SOLID is an acronym, made up of the names of five principles. Namely, these principles are

  • the single responsibility principle (SRP),
  • the open-closed principle (OCP),
  • the Liskov substitution principle (LSP),
  • the interface segregation principle (ISP), and
  • the dependency inversion principle (DIP).

Martin himself didn’t come up with the acronym; rather, it was Michael Feathers who suggested it to him, several years after Martin had already been teaching the principles. I’ll come back to this later because, believe it or not, the name itself is at the center of some criticisms.

Before we get to the meat of the article, I think it’d make sense to do a quick overview of the five principles so those of you who are familiar with them can get a reminder and those who aren’t can get a sense of what we’re talking about.


Quickly assess your .NET code compliance with .NET Standard

Yesterday evening I had an interesting discussion about the feasibility of migrating parts of the NDepend code to .NET Standard to ultimately run it on .NET Core. We’re not there yet, but this might make sense in order to run at least the code analysis on non-Windows platforms, especially for the NDepend clones CppDepend (for C++), JArchitect (for Java), and others to come.

Then I went to sleep (as every developer knows, the brain keeps coding hard while sleeping), and this morning I went for an early morning jog when it struck me: NDepend is the perfect tool to assess some .NET code’s compliance with .NET Standard, or with any other library actually! As soon as I was back at my machine I did a proof of concept in a few minutes, then spent half an hour fixing an unexpected difficulty (explained below), and then it worked.

The key is that the .NET Standard 2.0 types are all packed in a single assembly, netstandard.dll v2.0, that can be found under C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319 (on my machine). All of these 2,334 types are actually type-forward definitions, and NDepend handles this peculiarity well. A quick analysis of netstandard.dll with NDepend and this quick code query makes it all clear:
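(The query itself isn’t shown here. As a tiny stand-alone illustration of the type-forwarding peculiarity, here’s a sketch that assumes a runtime where Assembly.GetForwardedTypes() is available, i.e. .NET Core 2.1 or later; the path is the one mentioned above.)

```csharp
using System;
using System.Reflection;

class ForwardSketch
{
    static void Main()
    {
        var netstandard = Assembly.LoadFrom(
            @"C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\netstandard.dll");

        // The assembly defines essentially no types of its own...
        Console.WriteLine(netstandard.GetTypes().Length);

        // ...the 2,334 .NET Standard 2.0 types are all type forwards.
        // (GetForwardedTypes() may throw if a forward target can't be resolved.)
        Console.WriteLine(netstandard.GetForwardedTypes().Length);
    }
}
```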

(Btw, I am sure that if you’re reading this you have an understanding of what .NET Standard is, but if anything is still unclear, I invite you to read A Brief History of .NET Standard, a great article my friend Laurent Bugnion wrote three days ago.)

.NET Standard Forwarded Types

Given that, what struck me this morning is that to analyze some .NET code’s compliance with .NET Standard, I’d just have to include netstandard.dll in the list of my application assemblies and write a code query that filters the dependencies the way I want. Of course, to put this idea to the test, I wanted to explore the NDepend code base’s compliance with .NET Standard:

NetStandard assembly included in the NDepend assemblies to analyze

The code query was pretty straightforward to write. It is written in a way that:

  • it is easy to adapt to analyze compliance with any library other than .NET Standard,
  • it is easy to explore both the compliance and the non-compliance with a library in a comprehensive way, thanks to the NDepend code query result browsing facilities,
  • it is easy to refactor the query to ask for more; for example, below I refactor it to assess the usage of third-party code that isn’t .NET Standard compliant (a rough stand-alone sketch of the general idea follows this list).
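(The actual CQLinq query isn’t reproduced here. As a rough stand-alone approximation of the same idea, here’s a sketch using Mono.Cecil; the paths and target assembly name are illustrative, and a real query would be more subtle, as discussed below.)

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Mono.Cecil;

class ComplianceSketch
{
    static void Main()
    {
        var netstandard = ModuleDefinition.ReadModule(
            @"C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\netstandard.dll");

        // The exported (forwarded) types are the .NET Standard 2.0 surface.
        var standardTypes = new HashSet<string>(
            netstandard.ExportedTypes.Select(t => t.FullName));

        var target = ModuleDefinition.ReadModule("NDepend.API.dll");

        // Type references the target makes that aren't part of the standard.
        // (A real query would also treat nested types of standard types as
        // compliant; that's the List<T>+Enumerator subtlety discussed below.)
        var nonCompliant = target.GetTypeReferences()
            .Select(t => t.FullName)
            .Where(name => !standardTypes.Contains(name))
            .Distinct()
            .OrderBy(name => name);

        foreach (var name in nonCompliant)
            Console.WriteLine(name);
    }
}
```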

The result looks like this, and IMHO it is pretty interesting. For example, we can see at a glance that NDepend.API is almost fully compliant with .NET Standard, except for the usage of System.Drawing.Image (the “1 type” entries are all the Image type, actually) and the usage of code contracts.

NDepend code base compliance with .NET standard

For a more intuitive assessment of the compliance with .NET Standard, we can use the metric view, which highlights the code elements matched by the currently edited code query.

  • Unsurprisingly, NDepend.UI is not compliant at all,
  • while the portions of NDepend.Core that are non-compliant with .NET Standard are well defined (and I know it’s mostly because there’s some UI code here too, which we consider Core because it is reusable in a variety of situations).

With this information, it’d be much easier to plan a major refactoring to segregate .NET Standard-compliant code from the non-compliant code, especially to anticipate hot spots that will be painful to refactor.

Treemap view of the compliance with .NET Standard

A quick word about the unexpected difficulty I stumbled on. Since netstandard.dll contains only type-forward definitions, it doesn’t contain nested types. Concretely, it contains List<T> but not List<T>+Enumerator (which is also part of the formal .NET Standard). Of course, we don’t want to flag methods that use List<T>+Enumerator as non-compliant. To see the way we solved that, have a look at the tricky part of the code query related to allNetStandardNestedTypes.


The code query to assess compliance can be refactored at will. For example, I found it interesting to see which non-compliant third-party code elements were the most used, so I refactored the query this way:

Unsurprisingly, UI code that is not .NET Standard compliant pops up first:

.NET Standard non-compliant third-party code usage

There is no limit to refactoring this query for your own needs, like assessing the usage of non-compliant code excluding UI code, for example, or assessing the usage of code that isn’t compliant with ASP.NET Core 2 (by changing the library).

Hope you’ll find this content useful to plan your migration to .NET Core and .NET Standard!

Moq: A Detailed Look at Its Code Quality

In case you haven’t seen it, I’ve been doing a series of research-oriented posts for this blog.  This is going to be in the same vein but focused on the Moq codebase instead of focusing on hundreds of codebases.  Why Moq?  Well, I’ll get to that in a moment.

I started this by making a set of observations relating unit test prevalence to properties of clean code.  That generated considerable buzz, so I did some more studies in that vein, refining the methodology and adding codebases as we went.  By the end of the series, we’d grown the sample size to 500 codebases and started doing actual regression analysis.

Since then, we’ve enlisted the help of someone who specializes in data analysis to do some PCA with the data, which far outstrips my background studying data.  We brought this to bear in studying the effects of functional-style programming on codebases and also on categorizing codebases according to simple vs. complex and monolithic vs. decoupled, in addition to functional vs. OOP.  Doing this across more than 500 C# codebases has produced a wealth of information.

Okay, But How Does Moq Fit in?

I give you all this backstory in case you want to read about it but also to explain that I’ve looked at these codebases en masse.  With hundreds of codebases and millions of lines of code, I’m not going in and poking around to see if the code looks clean.  I’m using NDepend to perform large-scale robo-analysis.

And, while that’s been great, I started to want to see just how these categorizations stacked up.  So I started scrolling through the summary data and Moq jumped out at me.

First of all, I know Moq pretty well from using it over the years, and secondly, it had stood out when looking at the rate of unit test methods in the codebase.  Nearly half of its methods are test methods, which seems reasonable for a tool designed to help you write unit tests.  Combine that with these stats in the PCA:

So as a quick interpretation, Moq counts as reasonably non-monolithic, very simple, and very functional.  Add to that the high degree of unit testing, and I figured we’d have a codebase that was a joy to look at.  So I popped open the source code (as it was at the beginning of the year when I was grabbing codebases en masse) and analyzed it with NDepend, fresh off my excitement about using the new dark theme. I wanted to see if it was as much of a joy to look at as all of this data and statistical rigor indicated it would be.  And spoiler alert, it was the kind of codebase I’d feel right at home in.  Let’s take a tour.

NDepend Analysis, Test Coverage, and First Look

The first thing I wanted to look at was code coverage.  This was because I wanted a quick test case to see if my assumption held that a high rate of test methods would correspond to high coverage.  And it did.  Here’s a quick look at what happened after I imported coverage data and then ran analysis on the project.

Now I could see with Visual Studio’s coverage tool itself that Moq was sporting roughly a 90% test coverage.  But by importing the coverage data into NDepend, you can paint a much more compelling picture.

The heat map dominating most of the screen shows squares corresponding to methods in the codebase.  Larger squares are larger methods.  And the coloring indicates test coverage.  Over on the right, among the 2,152 methods in question, you can actually scroll through them in order of percent coverage and navigate to them if you want to take a look.

Taking a Look at the Dashboard and Technical Debt

So far, all systems go.  Most of the stats on the codebase looked good, coming in from the broad aggregates I have in a spreadsheet.  And then, following the same trend, Moq looks great in the IDE from a testing perspective.  But I looked at the dashboard and saw this:

Basic stats about the codebase lined up, and there’s the test coverage, hovering around 90%.  But a C for its tech debt rating?  10 critical rules violated?  This surprised me, given how rosy everything looked in the statistical analysis.  I drilled in to take a look.

And, sure enough, there be some dragons.  Huge types and overly complex methods are a problem.  Also in there are mutually dependent namespaces, which create coupling that hurts you as a codebase grows.  And you’ve got some hiding of base class methods and global state.  To get a sense for what I was looking at, I created another heat map. In this case, we’re looking at types.  The bigger the square, the more lines of code, and the closer to red, the more methods in that type.

This explains why the averages looked good in my spreadsheet but why NDepend has some objections and critical rules violated.  By and large, you have a lot of little green types, which is what you’d hope to see.  But there are some pretty hefty types in the mix, both in terms of lines of code to the type and number of methods to the type.

Some of these are gigantic unit test classes, while others are the API.  For those of you familiar with Moq, this should make sense to you.  Think of how many static methods you invoke on the Mock class.  This illustrates the classic tradeoff between shooting for clean code guidelines (methods per type and so on) and providing the API you want to furnish.

Drilling Into the Sources of Debt

NDepend has a view where you can drill into the technical debt per type or per method, and sort it accordingly.  I did that per type, and here’s what I saw:

No big surprise there.  The technical debt was coming disproportionately from these large types.  So I took advantage of another view to see where the debt was coming from, by rule violation. Here’s what that looked like:

Topping that list is a series of things that I would make it a priority to address in my own codebases, time permitting: types that are too big, types that have too many methods, namespace dependency cycles, and so on.  There is, however, one exception to what I would worry about as a top priority issue—visibility of nested types as a design choice.  That’s based on a Microsoft guideline and I don’t personally favor an approach where I use this a lot.  But it doesn’t bother me, either.

I see that the creators of Moq did that 395 times, so clearly they view it as a useful design choice.  This made me curious about what would happen to the tech debt grade if I disabled that particular rule.  So I did that, and the result was a somewhat greener and more pleasing grade of B:

What’s the Verdict With Moq?

I also spent some time scrolling through various classes and methods.  I didn’t want the entirety of the experience to be just a matter of data gathering since that’s what I’ve been doing for months.  And the verdict I have is that this particular data point fits in nicely with the aggregate.

Moq has excellent stats for most of what I’ve been looking at.  And, indeed, there are a lot of simple, functional methods in the codebase, almost all of which are thoroughly tested.  I would happily work in this codebase.

But NDepend is calling out real and important opportunities for improvement.  If I were working on this codebase, I’d make an effort to break some of those gigantic unit test classes into smaller ones that are more cohesive over their context.  I’d also take a hard look at the mutually dependent namespaces and either merge them or rework the dependency direction a bit.  And I’d even give some idle thought to how I might segment the large Mock class into smaller chunks, if that wound up making sense.

So the whole thing winds up being an interesting microcosm to me.  Moq is, as the stats would indicate, a pretty nice codebase.  But, as with just about any codebase, there’s plenty of room for improvement.  And having a tool to show you where to improve quickly is invaluable.

On the Superiority of the Visual Studio Dark Theme

When I downloaded the newest version of NDepend, something wonderful awaited me.  Was it support for the latest .NET Core version?  The addition of checks for ubiquitous language for DDD projects?  Any of the various rule additions or bug fixes?

Nope.  I’m a lot more superficial than that, apparently.  I saw this and was incredibly excited:

Dark Theme

In case you’re scratching your head and wondering, “what?” instead of sharing my delight, I’ll be more explicit.  NDepend added support for the Visual Studio dark theme.  And I absolutely love it.

Asserting the Superiority of the Visual Studio Dark Theme

Why so excited?  Well, as a connoisseur of the Visual Studio dark theme, this makes my experience with the tool just that much better.

Don’t get me wrong.  I didn’t mind the interface up until this release, per se.  I’ve happily used NDepend for years and never really thought about its color scheme.  But this version is the equivalent of someone going into my house where I already like the bathroom and installing a radiant floor.  I never knew I was missing it until I had it.

Everything in Visual Studio is better in the dark.

Oh, I know there’s evidence to the contrary.  People are, apparently, 26% more accurate when reading dark text on a light background than vice-versa. And it’s easier to focus on and remember text in a light theme.  I understand all of that.  And I don’t care.

The dark theme is still better.


C# 8.0 Features: A Final Glimpse Of The Future

It was not that long ago that we published our first post about the future of C# 8.0 and the probable features it’s getting. In the first post, we covered extension everything, default implementation on interfaces, and nullable reference types.

A couple of months later, we published the second installment in the series, where we covered null coalescing assignment and records.

Now it’s time for a final glimpse into the future. Today we’ll cover another two possible C# 8.0 features: target-typed new expressions and covariant return types.
