
Don’t rely on someone else to protect your software

This morning I stumbled on the post Decompilation of C# code made easy with Visual Studio on the Visual Studio blog. Basically VS will soon be able to not only decompile third-party code but also generate some sort of PDB information that will make the decompiled code debuggable. The promise is no more “No Symbols Loaded” or “Source Not Found” from within a VS debugging session, and personally I find this awesome.

However most of this post’s comments are like “how do I protect my code from being decompiled then?!” and “Microsoft does not care about its customers’ intellectual property.”. These comments are absurd. Since its inception in 2002, .NET compiled code has been decompilable and readable crystal clear with popular tools like .NET Reflector, ILSpy, dotPeek… This is a direct consequence of having IL/byte code and a CLR with a JIT compiler. I wonder to what extent those who wrote these comments are aware of that.

Protect your Intellectual Property

The first step is to make sure that your EULA forbids decompiling your code, with something like: Licensee may not reverse-engineer, decompile, disassemble, modify, or translate the Product, or make any attempt to discover the source code of the Product;

The second step is to obfuscate your compiled code. Since 2002, those who want to protect their compiled code from decompilation just have to obfuscate it. There are mature free tools like ConfuserEx and paid tools available. This is what we have been doing in our .NET shop since 2007, with success.

Also, with .NET Native and AOT (Ahead-of-Time compilation) one can add a whole new layer of complexity by skipping the IL code and compiling directly to machine code (e.g. x64 instructions).

But keep in mind that obfuscators and AOT only protect the intellectual property to some extent. Your code’s best-kept secrets are still executable in both scenarios, which means they are still there. Someone skilled and ready to spend a large amount of time reverse engineering your code can still gain access to your intellectual property.

The ultimate way to protect your intellectual property is to provide your services as an online SaaS. This way nobody will ever have access to your code. For example the whole SEO industry is based on guessing what Google and other web search engines are doing. These algorithms are protected because nobody except Google employees has access to them. In many scenarios SaaS also means forcing your clients to share their sensitive data, since the data is processed on your servers, and in many scenarios this is not acceptable. By spying on your data, Google and Facebook know you better than you do, but it seems that only a fraction of humanity disagrees with that. Ok, I digressed…

Protect your software from hackers

Keep in mind that obfuscators and AOT don’t protect your code from hackers. It is still easy for any solid hacker to crack your license-checking layer and provide a free version of your software online as warez. The only possible protection from hackers that have access to your code is integrity checks. An integrity check is made of two parts:

  1. A layer that detects if some bytes of your compiled code have been tweaked (typically with a custom or standard hash function)
  2. A subtle malfunction of your software that prevents it from being used if the compiled code has been tweaked.

The whole point of an integrity check is to consume the hacker’s time. This is why the word subtle is in bold: the malfunction is not a dumb exception that provokes a fail-fast. The malfunction must be something that appears minutes after the integrity check failed and finally makes the software totally unusable (like a massive memory leak, clearing some data, freezing some UIs, or firing a timer to close the user session after a random number of minutes…).
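To make this concrete, here is a minimal hypothetical sketch of both parts (the names, delays and hash location are made up, and a real check would be duplicated and much better hidden):

```csharp
using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Threading;

static class IntegrityCheck {
    // Hash of the shipped assembly bytes, baked in at build time (placeholder here).
    static readonly byte[] ExpectedHash = new byte[32];

    // Keep a reference to the timer so it is not garbage collected.
    static Timer s_timer;

    // Part 1: detect whether the compiled bytes have been tweaked.
    internal static bool AssemblyWasTweaked() {
        // In practice you'd hash a region that doesn't contain the expected hash itself.
        byte[] bytes = File.ReadAllBytes(typeof(IntegrityCheck).Assembly.Location);
        using (var sha = SHA256.Create())
            return !sha.ComputeHash(bytes).SequenceEqual(ExpectedHash);
    }

    // Part 2: a subtle malfunction armed for later, instead of a fail-fast exception.
    internal static void ArmSubtleMalfunction() {
        if (!AssemblyWasTweaked()) return;
        var random = new Random();
        // Quietly close the user session after a random number of minutes.
        s_timer = new Timer(_ => Environment.Exit(0), null,
                            TimeSpan.FromMinutes(random.Next(5, 30)), Timeout.InfiniteTimeSpan);
    }
}
```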

By multiplying the integrity checks and the corresponding subtle malfunctions, one can only hope to discourage talented hackers from wasting days, weeks or months cracking the software. But be aware that these guys are primarily driven by challenges… The good news is that with .NET there are tons of possible ways to write subtle integrity checks.

Conclusion

Protecting the intellectual property and the software itself is a difficult task. Don’t complain to Microsoft or anyone else that they should offer an out-of-the-box tool for that. If such a mainstream protection tool existed it would become a challenge for the most talented hackers and it wouldn’t resist for long anyway.

The question you should ask is: is it worth spending resources to protect my assets instead of spending those resources to make my paying clients even happier? This is a difficult trade-off that must be carefully thought out. But always keep in mind that you must not rely on someone else to protect your software.

Not planning the migration of your .NET 4.8 legacy now is certainly a mistake

2020 will see the completion of the massive remodeling of the .NET platform that Microsoft initiated in November 2014 with the introduction of .NET Core 1, with the promise of an open-source, multi-platform and modernizable framework (thanks to the absence of rock-solid backward-compatibility constraints) – everything that the .NET Framework isn’t. This U-turn in Microsoft’s plans for .NET is part of the new strategy initiated by Satya Nadella, who became CEO of Microsoft in February 2014, succeeding Steve Ballmer.

.NET 5 will be released in November this year. Within 6 years Microsoft will have pulled off a complete platform shift like no other. The .NET Core brand was here to make clear that two .NET platforms were living side by side. But now we know that the .NET Framework 4.8 won’t evolve anymore and that all Microsoft efforts will be put into the continuation of .NET Core, with well-scheduled releases ahead. Let’s use the term .NET OSS to designate .NET Core, .NET 5, .NET 6 and beyond in the remainder of this post.

We can expect an early beta of .NET 5 before July 2020. Today in January 2020 the .NET 5.0 milestone is 72% achieved.

For now, the way to get prepared for .NET 5 and later is to migrate to .NET Core 3.1. Despite the branding change from .NET Core 3.1 to .NET 5, it is no mystery that .NET 5 will be mostly based on the current .NET Core platform.

The cost of migration from .NET 4.8 to .NET OSS can get pretty high, especially if the legacy relies on some deprecated APIs (like WCF, WF, WebForms or AppDomain). Thus it may seem attractive to stick with .NET 4.8 if your application is intended to run on Windows only.

Why is it a bad idea not to anticipate the migration now?

.NET 4.8 won’t evolve but some security patches will be provided for as long as we can foresee. However we can predict that .NET 4.8 will quickly be considered a thing of the past:

  • Developer mindset: .NET OSS and also C# will evolve. There will come a point where it’ll feel pretty awkward for .NET programmers working with 4.8 to not be able to use all the new goodies. Could you imagine programming with C#3 nowadays?
  • Third-Party Libraries: The increasing gap between .NET 4.8 and .NET OSS will push open-source library authors toward .NET OSS. The cost of maintaining two code bases will be too high for an OSS developer. If your .NET 4.8 application consumes some OSS libraries, not migrating it will put you in an awkward situation where you’ll have to maintain the consumed OSS code yourself! Certainly serious commercial libraries will be maintained on both platforms for a longer period of time, but not forever.
  • Performance: We can expect more and more performance improvements with .NET OSS.
  • Tooling: Tools will continue to evolve and, with time, fewer and fewer tools will support .NET 4.8 applications.

Recently I discussed with Jean-Baptiste Evain, who develops the OSS library Cecil. Jb is also responsible for UnityVS at Microsoft. Here at NDepend we’ve been relying on Cecil for more than a decade. Cecil processes the bytes of compiled .NET assemblies and thus will obviously benefit from Span<T>, only available on .NET Core. This concrete situation illustrates well the points mentioned above:

  • Jb is not enthusiastic about having to maintain two versions of Cecil: one relying on Span<T> and the .NET 4.8 one.
  • Even though these two versions will co-exist, because it is too early to discard the .NET 4.8 version of Cecil used by many serious projects, it is a matter of a few years until the .NET 4.8 version of Cecil gets deprecated.
  • Without Span<T> the .NET 4.8 version of Cecil will be slower.

Our case

NDepend is still running on .NET 4.8. NDepend is a CI tool, a standalone UI tool, an Azure DevOps extension and a Visual Studio extension. Developing an extension is a sensitive situation because we need to align our platform with the platform of the host. VS is such a massive application that I don’t expect it to run on .NET 5 in 2021. On the other hand VS is evolving so quickly nowadays that this possibility is not totally excluded. It is also possible that Microsoft takes an incremental approach and that the main VS process (devenv.exe) will remain on .NET Fx 4.8 for a while, while child processes run on .NET OSS (VS runs with quite a few child processes!).

At this point the reasonable move for us is to anticipate the migration to .NET OSS, mostly by compiling as much code as possible against .NET Standard 2.0, supported by both .NET Fx 4.7.2+ and .NET Core. We also need to make sure that our WPF and Winforms code will be easily movable (which shouldn’t be a problem since most of the WPF/Winforms APIs are supported by .NET Core 3.1). We are also mulling over having our own child process(es), but all the UI part must remain in the main VS process.
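As a rough sketch of what this can look like for the few code paths that differ per platform (the helper below is made up, and assumes a multi-targeted project exposing the usual NET48 / NETCOREAPP compilation symbols):

```csharp
public static class HostRuntimeInfo {
    // Hypothetical helper: branch per target framework for the rare spots
    // that cannot be expressed against .NET Standard 2.0.
    public static string Describe() {
#if NET48
        return "Running in a .NET Framework 4.8 host (e.g. the current Visual Studio process)";
#elif NETCOREAPP
        return "Running in a .NET Core / .NET OSS host";
#else
        return "Compiled against .NET Standard 2.0 only";
#endif
    }
}
```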

We also keep in mind that it will be tricky to support the future VS version running on .NET OSS and previous VS versions running on .NET v4.8 for a few years.

Conclusion

Those like us still working on a large .NET 4.8 legacy are entering a turbulence zone for the years to come. However, for all the reasons explained above, we can expect that before long (2023? 2025?) successful applications still running on .NET 4.8 will be the exception. Not anticipating the legacy migration now is likely a strategic mistake.

4 Predictions for the Future of .NET

In May 2019, Microsoft officially announced .NET 5, the future of .NET: it will be based on all the .NET Core work already achieved. Here is the schedule announced:

On one hand the future of .NET has never been so bright. On the other hand this represents a massive move for all .NET development shops, especially for those that still target .NET Framework 4.x, which won’t evolve anymore. But not everything is clear from this announcement. Such a massive move will have many collateral consequences that we can only guess at for now. Certainly many points are not yet cast in stone and are still being debated.

Hence for large .NET legacy code bases some predictions must be made in order to plan a seamless and timely migration toward the future of .NET. So let’s make some predictions: it’ll be interesting to come back in a few years and see how good or bad they were.

.NET Standard won’t evolve much

.NET Standard was introduced as a common API set that all .NET flavors must implement. .NET Standard superseded PCL (Portable Class Library). Now that several .NET frameworks will be unified upon .NET Core foundations, and that .NET Framework 4.x won’t support future versions of .NET Standard anymore, it sounds like the need for more .NET Standard APIs will decrease significantly. Actually .NET Framework 4.8 doesn’t even support the latest .NET Standard 2.1: “.NET Framework 4.8 will remain on .NET Standard 2.0 rather than implement .NET Standard 2.1”.

However .NET Standard is certainly not dead yet: it is (and will be for years to come) an essential tool to compile code into portable components that can be reused across several .NET flavors. However with this unification process the future of .NET Standard is compromised.

Visual Studio will run on .NET 5 or 6 (and in an x64 process)

It has to. Imagine the consequences if, 3 years from now (2019 Q4), the main Microsoft IDE for professional .NET development still ran on .NET Framework v4.8:

  • Engineers working on VS would lack access to all new .NET APIs, performance improvements and language improvements. They would remain locked in the past.
  • As a consequence they wouldn’t use their own tool (dogfooding) and dogfooding is a key aspect of developing tools for developers.
  • Overall the message sent wouldn’t be acceptable for the users.

On the other hand, if you know a bit about how VS works, imagine how massive this migration is going to be. For more than a decade there have been a lot of complaints from the community about Visual Studio not running in a 64-bit process. See some discussions on reddit here for example. If I remember well, this x64 request was the most-voted one when VS feedback was still handled by UserVoice. Some technical explanations have been provided by Microsoft, like those provided 10 years ago! If in 2019 Visual Studio still doesn’t run in an x64 process, this says a lot about how large and complex such a migration is.

It seems inevitable that this time the Visual Studio legacy will evolve toward what will be the future of .NET. One key benefit will be to run in an x64 process and have plenty of memory to work with very large solutions. Another implication is that all Visual Studio extensions, like our extension, must evolve too. Here at NDepend we are already preparing for it but it will take time, not because we’ll miss many APIs (we’ll mostly miss AppDomain) but because:

  • We depend on some third-parties that we’d like to get rid of to have full control over our migration, and overall code.
  • For several years we’ll have to support both future Visual Studio versions and Visual Studio 2019, 2017 and maybe 2015, which run on .NET Framework v4.x (btw we still support VS 2013/2012/2010 but this support will have to be discarded to benefit from reused .NET Standard DLLs)

We cannot know yet whether Visual Studio vNext will run on .NET 5 or whether it’ll take a few more years until we see it running on .NET 6.

Btw here are 2 posts Quickly assess your .NET code compliance with .NET Standard and An in-depth analysis of .NET Core 3.0 support for WPF and Winforms APIs that can help plan your own legacy migration.

.NET will propose a cross-platform UI Framework: WPF or a similar XAML UI Framework

On October 4, 2019 Satya Nadella revealed why Windows may not be the future of Microsoft’s business. In August 2019 Microsoft provided a .NET Cross Platform UI Framework Survey. Clearly a .NET cross-platform UI Framework is wanted: the community is asking for it. So far Microsoft closed the debate about WPF: WPF won’t be multi-platform.

Let’s also be crystal clear. This (WPF cross platform) is a very hard project. If the cost was low, this would be a very different conversation and very likely a different outcome. We have enough trouble being compatible with OpenSSL and that’s just one library.  Rich Lander – Dec 5, 2018

But given the immense benefits that a cross-platform WPF would offer, I wouldn’t be surprised to see WPF become cross-platform within the next few years. Or at least a similar XAML UI framework. Moreover WPF is now open-source, so who knows…

The Visual Studio UI is mostly based on WPF, hence one of the benefits of having a cross-platform WPF would be a single cross-platform Visual Studio: the same way Microsoft is now unifying the .NET frameworks, they could unify the Visual Studio suite into a single cross-platform product.

Xamarin Forms and Avalonia are also natural candidates to be the .NET cross-platform UI framework. But it seems those frameworks don’t receive enough love from the community; this is my subjective feeling. Also we have to keep in mind that Microsoft did a survey and that the community is massively asking for it.

Blazor is promised a bright future

If you didn’t follow the recent Blazor evolution, the promises of this technology are huge:

  • Run .NET code in all browsers (like Silverlight)
  • with no browser plugin needed (unlike Silverlight)
  • with near-native performance
  • with components compiled to a compact binary format

This is all possible thanks to the WebAssembly (Wasm) format supported by most browsers.

WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable target for compilation of high-level languages like C/C++/Rust, enabling deployment on the web for client and server applications.

Blazor was initially a personal project created by Steve Sanderson from Microsoft. It was first introduced during NDC Oslo in July 2017: the video is worth watching; also read how enthusiastic the comments are. However Blazor is not yet finalized and still has some limitations: it doesn’t yet offer a decent debugging experience and the application size to download (a few MBs) is still too large because dependencies have to be loaded too. These issues are currently being addressed (see here for debugging and here for download size: runtime code will be trimmed and cached, and usage of a CDN (Content Distribution Network) is mentioned).

The community is enthusiastic, the technology is getting mature and there is no technological or political barrier in sight: the Blazor future looks bright. Don’t miss the Blazor FAQ to learn more.

.NET Core 3.0 New APIs

.NET Core 3.0 has just been released, see here the official announcement. In this post we’re going to explain how to list and explore the new APIs introduced since .NET Core 2.2 (this API diff is also available here).

To diff the API versions download NDepend trial, start VisualNDepend.exe, and click Compare 2 versions of a code base.

  • In the Older Build add assemblies in the folder: C:\Program Files\dotnet\shared\Microsoft.NETCore.App\2.2.5
  • In the Newer Build add assemblies in the folder: C:\Program Files\dotnet\shared\Microsoft.NETCore.App\3.0.0

Then click OK. Depending on your hardware it’ll take between a dozen seconds and a minute to analyze these two versions of .NET Core.

Here are the lists of new APIs obtained:

To obtain these lists we’ve edited 5 code queries and exported their results to HTML. For example to obtain the 28 new namespaces we edited the query:
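A minimal CQLinq sketch of this kind of query (assuming the standard WasAdded() diff predicate) looks roughly like this:

```csharp
// New namespaces since the baseline, with the new types each one contains
from n in Application.Namespaces
where n.WasAdded()
select new { n, newTypes = n.ChildTypes.Where(t => t.WasAdded()) }
```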

And got this result, that also lists new types for each new namespace:

.NET Core 3.0 New Namespaces

The 4 other code queries, used to list new types, new types and their members, and new methods and new fields introduced in existing types, are:
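For instance, a rough sketch of the query listing new methods added to types that already existed could look like this:

```csharp
// New methods introduced in types that already existed in the baseline
from m in Application.Methods
where m.WasAdded() && !m.ParentType.WasAdded()
select new { m, m.ParentType }
```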




Find API Breaking Changes in your .NET Libraries and Frameworks

If you are developing a framework, the last thing you want to happen when releasing a new version of your product is to break the code of your clients because of an API breaking change. For example, you want to make sure that all public methods you had in previous versions are here in the next version, unless you tagged some of them with System.ObsoleteAttribute for a while before deprecation.

Let’s underline that we are talking here of syntactic breaking changes that can be all detected with tooling as we are about to see. On the other hand semantic breaking changes are code behavior changes that will break your client code. Those breaking changes cannot be found by a tool, but are typically caught by a solid test suite.

One key feature of NDepend is the ability to compare all results against a baseline snapshot. Code querying can explore changes since the baseline. For example this code query detects methods of an API that are no longer visible to clients:
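A simplified sketch of such a query (the actual default rule is richer) is:

```csharp
// Methods publicly visible in the baseline that are removed or no longer publicly visible
warnif count > 0
from m in codeBase.OlderVersion().Application.Methods
where m.IsPubliclyVisible
// (the default rule also filters out methods tagged with ObsoleteAttribute in the baseline)
let newM = m.NewerVersion()
where newM == null ||          // A) method removed since the baseline
      !newM.IsPubliclyVisible  // B) method no longer publicly visible
select m
```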

This query code is pretty simple and readable but let’s detail it a bit:

  • The warnif count > 0 prefix transforms this code query into a code rule
  • We iterate on methods in the baseline thanks to codeBase.OlderVersion().Application.Methods
  • We want methods that were publicly visible in the baseline and not just public. Being declared as public is not enough for a method to be publicly visible. A public method can still be declared in a class declared as internal.
  • We don’t warn for public methods that were deemed as obsolete in the baseline
  • We handle both situations: A) publicly visible methods removed since the baseline and B) publicly visible methods not publicly visible anymore.

This query is pretty similar to the source of the default rule API Breaking Changes: Methods, which is a bit more sophisticated to handle more situations, like when a public method’s return type changes. This rule also presents results in a polished way.

Similar rules are available for publicly visible types and publicly visible fields.

Some other sorts of breaking changes are detected when, for example, a publicly visible interface or base class changes: in such a situation client code that implements the interface or derives from the abstract class will be broken.

Some rules also detect when a serializable type gets broken and when an enumeration’s Flags status changes.

Real Experiments

From comparing NHibernate v5.2.x against NHibernate v4.1.x here are the breaking changes found: 36 types, 995 methods, 205 fields (including enumeration values) and 121 interfaces or abstract classes have been changed. I guess most of those breaking changes are about elements that were not intended to be seen by API consumers in the baseline version, but a detailed analysis would be needed here.

NHibernate v5.2 vs v4.1 Breaking Changes

I’ve also compared .NET Core v2.2.5 with .NET Core v3.0 Preview 8 but fortunately didn’t find any breaking change. However NDepend has a heuristic code query to find types moved from one namespace or assembly to another, and here more than 200 types were matched, like the class ArrayList moved from the assembly System.Collections.NonGeneric.dll to the assembly System.Private.CoreLib.dll. Fortunately such changes are invisible to clients consuming .NET Core as a NuGet package.

ArrayList moved from System.Collections.NonGeneric.dll to System.Private.CoreLib.dll

Find new APIs consumed and APIs not used anymore

A related and interesting topic is to review new APIs used by an application or APIs not used anymore. This is possible from the NDepend > Search Elements by Changes panel with the buttons Third Party Code Elements – Used Recently / Not Used Anymore. For example the screenshot below shows new API consumed by the application eShopOnWeb:

New APIs used by eShopOnWeb

Before releasing a new version, it is interesting to have a glance at API consumption changes. Doing so often sheds light on interesting points.

SOLID Design: The Dependency Inversion Principle (DIP)

After having covered the Open-Closed Principle (OCP), the Liskov Substitution Principle (LSP), the Single Responsibility Principle (SRP) and the Interface Segregation Principle (ISP), let’s talk about the Dependency Inversion Principle (DIP), which is the D in the SOLID acronym. The DIP definition is:

a. High-level modules should not depend on low-level modules. Both should depend on abstractions.
b. Abstractions should not depend on details (concrete implementation). Details should depend on abstractions.

The DIP was introduced in the 90s by Robert C. Martin. Here is the original article.

A Dependency is a Risk

Like all SOLID principles, DIP is about system maintainability and reusability. Inevitably some parts of the system will evolve and will be modified. We want a design that is resilient to changes. To avoid a change breaking too much code, we must:

  • first, identify the parts of the code that are change-prone.
  • second, avoid dependencies on those change-prone code portions.

The Liskov Substitution Principle (LSP) and the Interface Segregation Principle (ISP) articles explain that interfaces must be carefully thought out. Both principles are two faces of the same coin:

  • ISP is the client perspective: If an interface is too fat probably the client sees some behaviors it doesn’t care for.
  • LSP is the implementer perspective: If an interface is too fat probably a class that implements it won’t implement all its behaviors. Some behavior will end up throwing something like a NotSupportedException.

Efforts put into applying ISP and LSP result in interface stability. As a consequence these well-designed interfaces are less subject to change than the concrete classes that implement them.

Also having stable interfaces results in improved reusability. I am pretty confident that the interface IDisposable will never change. My classes can safely implement it and this interface is re-used all over the world.

In this context, the DIP states that depending on interfaces is less risky than depending on concrete implementations. DIP is about transforming this code:
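A hypothetical sketch of such code (only ClientCode() comes from the original wording; the rest is made up for illustration):

```csharp
using System.Data.SqlClient;

class ReportGenerator {
    // High-level code depending directly on a low-level implementation detail: SqlConnection.
    public void ClientCode(string connectionString) {
        using (var connection = new SqlConnection(connectionString)) {
            connection.Open();
            // ... run SQL Server specific queries
        }
    }
}
```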

into this code:
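And a matching sketch of the inverted version:

```csharp
using System.Data;
using System.Data.SqlClient;

class ReportGenerator {
    // High-level code now depends only on the IDbConnection abstraction.
    public void ClientCode(IDbConnection connection) {
        connection.Open();
        // ... run queries against whatever RDBMS stands behind the abstraction
    }
}

static class CompositionRoot {
    // The implementation detail is chosen outside of the high-level code.
    internal static void Run(string connectionString) {
        IDbConnection connection = new SqlConnection(connectionString);
        new ReportGenerator().ClientCode(connection);
    }
}
```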

DIP is about removing dependencies from high-level code (like the ClientCode() method) to low-level code, low-level code being implementation details like the SqlConnection class. For that we create interfaces like IDbConnection. Then both high-level code and low-level code depend on these interfaces. The key is that SqlConnection is not visible anymore from the ClientCode(). This way the client code won’t be impacted by implementation changes, like when replacing the SQL Server RDBMS implementation with MySql for example.

Let’s underline that this minimal code sample doesn’t do justice to the word Inversion in the DIP acronym. The inversion is about the interfaces introduced (to be consumed by high-level code) and the implementation details: implementation details depend on the interfaces, not the opposite; this is the inversion.

DIP and Dependency Injection (DI)

The acronym DI is used for Dependency Injection and since it is almost the same as the DIP acronym this provokes confusion. The I is used for Inversion or Injection, which might add to the confusion. Fortunately DI and DIP are very much related.

  • DIP states that classes that implement interfaces are not visible to the client code.
  • DI is about binding classes behind the interfaces consumed by client code.

DI means that some code, external to client code, configures which classes will be used at runtime by the client code. This is simple DI:
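For example, a minimal constructor-injection sketch (ReportService and its members are hypothetical names):

```csharp
using System.Data;
using System.Data.SqlClient;

public class ReportService {
    readonly IDbConnection _connection;

    // The dependency is injected; the class never instantiates a concrete connection itself.
    public ReportService(IDbConnection connection) { _connection = connection; }

    public void Run() {
        _connection.Open();
        // ... query and build the report
    }
}

static class Program {
    static void Main() {
        // Code external to the client decides which class is used at runtime.
        var service = new ReportService(
            new SqlConnection("Server=.;Database=Demo;Integrated Security=true"));
        service.Run();
    }
}
```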

Many .NET DI frameworks exist to offer flexibility in binding classes behind interfaces. Those frameworks are based on reflection and thus, they offer some kind of magic. The syntax looks like:
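For instance, with Microsoft.Extensions.DependencyInjection (one container among many, used here purely as an example) the binding looks like this:

```csharp
using System.Data;
using System.Data.SqlClient;
using Microsoft.Extensions.DependencyInjection;

static class Bindings {
    internal static ServiceProvider Configure() {
        var services = new ServiceCollection();
        // Bind the abstraction to a concrete class; the container resolves it through reflection.
        services.AddTransient<IDbConnection>(
            _ => new SqlConnection("Server=.;Database=Demo;Integrated Security=true"));
        return services.BuildServiceProvider();
    }
}
```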

And then comes what is called a Service Locator. The client can use the locator to create instances of the concrete type without knowing it. It is like invoking a constructor on an interface:
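Continuing the hypothetical Bindings sketch above, a service-locator style resolution looks like this:

```csharp
using System.Data;
using Microsoft.Extensions.DependencyInjection;

static class Consumer {
    internal static void Use() {
        var provider = Bindings.Configure();
        // The client asks the locator for an instance of the interface,
        // which feels like invoking a constructor on an interface.
        IDbConnection connection = provider.GetRequiredService<IDbConnection>();
        connection.Open();
    }
}
```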

Thus while DIP is about maintainable and reusable design, DI is about flexible design. Both are very much related. Let’s notice that the flexibility obtained from DI is especially useful for testing purposes. Being DIP compliant improves the testability of the code:
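For instance, a hypothetical unit test of the ReportService sketch above, using Moq and xUnit as example libraries:

```csharp
using System.Data;
using Moq;
using Xunit;

public class ReportServiceTests {
    [Fact]
    public void Run_OpensTheConnection() {
        // No database needed: a mocked IDbConnection replaces the concrete SqlConnection.
        var connection = new Mock<IDbConnection>();
        var service = new ReportService(connection.Object);

        service.Run();

        connection.Verify(c => c.Open(), Times.Once());
    }
}
```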

DIP and Inversion of Control (IoC)

The Inversion word is used both in DIP and IoC acronyms. This provokes confusion. Remember that the word Inversion in the DIP acronym is about implementation details depending on interfaces, not the opposite. The Inversion word in the IoC acronym is about calls to Library transformed into callbacks from Framework.

IoC is what differentiates a Framework from a Library. A library is typically a collection of functions and classes. On the other hand a framework also offers reusable classes but massively relies on callbacks. For example UI frameworks offer many callback points through graphical events:
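For example, a minimal Windows Forms sketch of such a callback point:

```csharp
using System;
using System.Windows.Forms;

class MainForm : Form {
    public MainForm() {
        var button = new Button { Text = "OK" };
        // The framework calls back into our code when the user clicks: inversion of control.
        button.Click += m_ButtonOnClick;
        Controls.Add(button);
    }

    void m_ButtonOnClick(object sender, EventArgs e) {
        MessageBox.Show("Button clicked");
    }
}
```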

The method m_ButtonOnClick() bound to the Button.OnClick event is a callback method. Instead of client code calling a framework method, the framework is responsible for calling back client code. This is an inversion in the control flow.

We can see that IoC is not related to DIP. However we can see Dependency Injection as a specialization of IoC: DI is an IoC technique used specifically to manage dependencies.

DIP and the Level metric

Several code metrics can be used to measure, and thus constrain, the usage of DIP. One of these metrics is the Level metric, defined as follows: an element that doesn’t use any other application element has a Level of 0; otherwise its Level is 1 plus the highest Level among the elements it uses.

From this definition we can infer that:

  • The Level metric is not defined for components involved in a dependency cycle. As a consequence null values can help track component dependency cycles.
  • The Level metric is defined for any dependency graph. Thus a Level metric can be defined at various granularities: methods, types, namespaces, assemblies.

DIP mostly states that types with Level 0 must be interfaces and enumerations (note that interfaces using other interfaces have a Level value higher than 0). If we say that a component is a group of types (like a namespace or an assembly), the DIP states that components with Level 0 must contain mostly interfaces and enumerations. With a quick code query like this one you can have a glance at type Levels and check whether most low-level types are interfaces:
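A rough CQLinq sketch of such a query:

```csharp
// Glance at type Levels: low-level types should mostly be interfaces or enumerations
from t in Application.Types
where t.Level != null
orderby t.Level ascending
select new { t, t.Level, t.IsInterface, t.IsEnumeration }
```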

The Level metric can also be used to track classes with high Level values: it is a good indication that some interfaces must be introduced to break the long chain of concrete code calls:
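For instance, a sketch to spot such classes (the threshold is arbitrary):

```csharp
// Classes with a high Level value: candidates for introducing some abstractions
from t in Application.Types
where t.IsClass && t.Level > 6
orderby t.Level descending
select new { t, t.Level }
```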

The class Program has a Level of 8 and if we look at the dependency graphs of types used from Program we can certainly see opportunities to introduce abstractions to be more DIP compliant:

DIP and the Abstractness vs. Instability Graph

Robert C. Martin not only coined the DIP but also proposed some code metrics to measure the DIP compliance. See these metrics definitions here. From these metrics an intriguing Abstractness vs. Instability diagram can be plotted. Here we plotted the 3 assemblies of the OSS eShopOnWeb application. This diagram has been obtained from an NDepend report:

  • The Abstractness metric is normalized: it takes its values in the range [0,1]. It measures the interfaces / classes ratio (1 means the assembly contains only interfaces and enumerations).
  • The Instability metric is normalized and measures the assembly’s resilience to change. In this context, being stable means that a lot of code depends on you (which is wrong for a concrete class and fine for an interface) and being unstable means the opposite: not much code depends on you (which is fine for a concrete class and wrong for an interface, a poorly used interface is potentially a waste of design efforts).

This diagram shows a balance between the two metrics and defines some green/orange/red zones:

  • A dot in the red Zone of Pain means that the assembly is mostly concrete and used a lot. This is a pain because all those concrete classes will likely undergo a lot of changes and each change will potentially impact a lot of code. An example of a class living in the Zone of Pain would be the String class. It is massively used but it is concrete: if a change occurred today in the String class the entire world would be impacted. Fortunately we can count on the String implementation to be both performant and bug-free.
  • A dot in the red Zone of Uselessness means that the assembly contains mostly interfaces and enumerations and is not used much. This makes these abstractions useless.
  • The Green zone revolves around the Main Sequence line. This line represents the right balance between both metrics. Containing mostly interfaces and being used a lot is fine. Containing mostly classes and not being used much is fine. And then come all the intermediate well-balanced values between these 2 extremes represented by the Main Sequence line. The Distance from Main Sequence metric measures this balance and can be normalized (the usual formulations are recalled below). A value close to 0 means that the dot is near the line, in the green zone, and that the DIP is respected.
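For reference, Robert C. Martin’s original formulations of these metrics are roughly:

$$A = \frac{N_{abstract}}{N_{types}} \qquad I = \frac{C_e}{C_a + C_e} \qquad D = \lvert A + I - 1 \rvert$$

where N_abstract is the number of abstract types (interfaces and abstract classes) of the assembly, N_types its total number of types, C_e the efferent coupling (outside types it uses), C_a the afferent coupling (outside types that use it), and D the Distance from the Main Sequence.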

Conclusion

Like the Open-Closed Principle (OCP), the Liskov Substitution Principle (LSP) and the Interface Segregation Principle (ISP), the DIP is a key principle to wisely harness the OOP abstraction and polymorphism concepts in order to improve the maintainability, the reusability and the testability of your code. No principle is an island (except maybe the Single Responsibility Principle (SRP)) and they must be applied hand in hand.

This article concludes this series of SOLID posts. Being aware of the SOLID principles is not enough: they must be kept in mind during every design decision. But they must also be constrained by the KISS principle, Keep It Simple Stupid, because as we explained in the post Are SOLID principles Cargo Cult? it is easy to write entangled code in the name of SOLID principles. Then one can learn from experience. With the years, identifying the right abstractions and properly partitioning the business needs into well-balanced classes becomes natural.

Are SOLID principles Cargo Cult?

My last post about SOLID Design: The Single Responsibility Principle (SRP) generated some discussion on reddit. The discussion originated from a remark considering SOLID principles as a Cargo Cult. Taking into account the definition of Cargo Cult, the metaphor is a bit provocative but it is not unfounded.

cargo cult is a belief system among members of a relatively undeveloped society in which adherents practice superstitious rituals hoping to bring modern goods supplied by a more technologically advanced society

The recent Boeing 737 Max fiasco revealed that some parts of its software had been outsourced to $9-an-hour engineers. Those engineers shouldn’t be blamed for not achieving top-notch software given the budget. Nevertheless it is clear that a lot of software written nowadays looks like this cargo cult plane. For many real-world developers, SOLID principles are superstitious rituals whose primary goal is to succeed during job interviews.

The SRP article underlines that SRP is the only SOLID principle not related to the usage of abstraction and polymorphism. SRP is about partitioning logic into code: which logic should be declared in which class. But SRP is so vague that, judging from its two definitions, it is practically useless.

Definition 1: A class should have a single responsibility and this responsibility should be entirely encapsulated by the class.

Definition 2: A class should have one reason to change.

One can justify any class design choice by tweaking somehow what is a responsibility or what is a reason to change. In other words, as someone wrote in comment: Most people who “practice” it don’t actually know what it means and use it as an excuse to do whatever the hell they were going to do anyways.

We can feel bitterness in those comments, certainly coming from seasoned developers whose job is to fix mistakes of $9 an hour engineers.

SOLID Principles vs. OOP Patterns

We must remember that SOLID principles emerged in the 80s and 90s from the work of world-class OOP experts like Robert C. Martin (Uncle Bob) and Bertrand Meyer. Software writing is often considered as an art. Terminologies such as clean code or beautiful code have been widely used. But art is a subjective activity. In this context, SOLID principles necessarily remain vague and subject to interpretation. And this is what makes the difference between a SOLID principle and an OOP pattern:

  • A SOLID Principle is subjective. It helps to guide the usage of powerful concepts of Object Oriented Programming (OOP).
  • An OOP Pattern is objective. It is a set of recipes to implement a well identified situation with the OOP concepts.

Despite a limited number of keywords and operators, the OOP toolbelt of languages such as C# or Java is very rich. With a few dozen characters it is possible to write code that puzzles experts. C# especially gets richer and richer, with a lot of syntactic sugar to express complex situations with just a few characters. This power is a double-edged sword: seasoned developers can write neat and compact code. But on the other hand it is easy to misuse this power, especially for junior developers and all those who write code just to pay their bills.

Always keep in mind the KISS principle

Someone wrote in comments: “SOLID encourages abstraction, and abstraction increases complexity. It’s not always worth it, but it’s always presented as the non-plus ultra of good approaches.”

The only reason for abstraction to exist in OOP is to simplify the implementation of complex business rules.

  • Abstracting Circle, Rectangle and Triangle with an IShape interface will dramatically simplify the implementation of a shape drawing software.
  • On the other hand, creating an interface for each class is a waste of resources: not every concept in your program deserves an abstraction.

This is why the Keep It Simple Stupid KISS principle should be always kept in mind: don’t add up extra implementation complexity on top of the business complexity.

SOLID and Static Analysis

I have been in the .NET static analysis industry since 2004. At that time I was consulting for large companies with massive legacy apps that were very costly to maintain. Books like Robert Martin’s Agile Principles, Patterns, and Practices made me realize that the source code is data. This data can be measured with code metrics. And the same way relational data can be crawled with SQL queries, code as data can be crawled with code queries. For example:
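A sketch of this kind of CQLinq query (thresholds are illustrative):

```csharp
// Complex methods that are not fully covered by tests
from m in Application.Methods
where m.CyclomaticComplexity > 10 && m.PercentageCoverage < 100
orderby m.CyclomaticComplexity descending
select new { m, m.CyclomaticComplexity, m.PercentageCoverage }
```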

This query will objectively match complex methods not fully covered by tests. There are situations where one can argue that static analysis returns false positives but there is no justification for complex methods not well tested.

Not all aspects of SOLID principles can be objectively measured and verified. However static analysis can help bring objectivity. For example:
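One sketch of such a measurable rule, in the spirit of the default Avoid types with poor cohesion rule (thresholds are illustrative):

```csharp
// Types with many methods and fields but poor cohesion between them:
// a measurable facet of SRP and of well-partitioned logic
warnif count > 0
from t in Application.Types
where t.LCOM > 0.8 && t.NbMethods > 10 && t.NbFields > 10
orderby t.LCOM descending
select new { t, t.LCOM, t.NbMethods, t.NbFields }
```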

SOLID and Testability

Regularly applying such rules will avoid taking SOLID too far to the point it becomes detrimental. However there are still all those aspects of SOLID, and code design in general, that must be left to creativity and interpretation. Experience in software development helps a lot here: over the years one refines his/her gut feeling about which design will increase flexibility and maintainability.

By definition junior developers have no experience. However anyone can relentlessly strive for 100% code coverage by tests. Being able to fully cover your code means, by definition, that your code is testable. Testability doesn’t come by chance. The properties that lead to full testability are the same properties that lead to high maintainability. Those properties include:

  • Easy-to-use APIs
  • Well-isolated domain classes
  • Careful mapping of logic to classes
  • Short classes and short methods
  • Cohesive classes
  • Abstractions and polymorphism used judiciously
  • Careful management of state mutability

Advice to add up objectivity when applying SOLID principles

Not everyone is a senior developer with a passion for well-designed code. As a consequence the Cargo Cult usage of SOLID principles is common. To improve the design, some objectivity needs to be added to the development process. Here are my 3 pieces of advice for that:

  • KISS principle first, always struggle for simplicity: if it is complicated it is not SOLID.
  • Use static analysis to automatically monitor some measurable aspects of SOLID. Gross violations of code quality rules and metrics are also SOLID principles violations.
  • Refactor your code until it becomes seamlessly 100% coverable by tests. Code that cannot be easily 100% covered by tests is not SOLID.

 

 

SOLID Design: The Single Responsibility Principle (SRP)

After having covered the Open-Closed Principle (OCP) and the Liskov Substitution Principle (LSP), let’s talk about the Single Responsibility Principle (SRP), which is the S in the SOLID acronym. The SRP definition is:

A class should have a single responsibility and this responsibility should be entirely encapsulated by the class.

This leads to the question: what is a responsibility in software design? There is no trivial answer, which is why Robert C. Martin (Uncle Bob) rewrote the SRP this way:

A class should have one reason to change.

This leads to the question: what is a reason to change?

SRP is a principle that cannot be easily inferred from its definition. Moreover the SRP leaves a lot of room for opinions and interpretations. So what is SRP about? SRP is about partitioning logic into code: which logic should be declared in which class. Something to keep in mind is that SRP is the only SOLID principle not related to the usage of abstraction and polymorphism.

The goal of this post is to propose objective and concrete guidelines to increase your classes’ compliance with SRP and, ultimately, increase the maintainability of your code.

SRP and Concerns

The ActiveRecord pattern is typically used to exhibit a classic SRP violation. An ActiveRecord class has two responsibilities:

  • First an ActiveRecord object stores in-memory data retrieved from a relational database.
  • Second the record is active in the sense that data in-memory and data in the relational database are kept mirrored. For that, the CRUD (Create Read Update Delete) operations are implemented by the ActiveRecord.

To make things concrete, an ActiveRecord class can look like this:
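Here is a hypothetical sketch of such a class (names, fields and SQL are made up):

```csharp
public class Employee {
    // Responsibility 1: hold in-memory data retrieved from the relational database.
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Salary { get; set; }

    // Responsibility 2: keep the in-memory data and the database mirrored (CRUD operations).
    public static Employee Read(int id) { /* SELECT ... FROM Employee WHERE Id = @id */ return new Employee { Id = id }; }
    public void Create() { /* INSERT INTO Employee ... */ }
    public void Update() { /* UPDATE Employee SET ... WHERE Id = @id */ }
    public void Delete() { /* DELETE FROM Employee WHERE Id = @id */ }
}
```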

If Employee were a POCO class that didn’t know about persistence, and if persistence were handled in a dedicated persistence layer, the API would be improved because:

  • Not all Employee consumers want to deal with persistence.
  • More importantly an Employee consumer really needs to know when an expensive DB roundtrip is triggered: if the Employee class is responsible for the persistence who knows if the data is persisted as soon as a setter is invoked?

Hence it is better to isolate the persistence layer accesses and make them more explicit. This is why at NDepend we promote rules like UI layer shouldn’t use directly DB types, which can be easily adapted to enforce any sort of code isolation.

Persistence is what we call a cross-cutting concern, an aspect of the implementation that tends to spread all over the code. We can expect that most domain objects are concerned with persistence. Other cross-cutting concerns we want to separate domain objects from include: validation, log, authentication, error handling, threading, caching. The need to separate domain entities from those cross-cutting concerns can be handled by an OOP pattern like the decorator pattern for example. Alternatively some Object-Relational Mapping (ORM) frameworks and some Aspect-Oriented Programming (AOP) frameworks can be used.

SRP and Reason to Change

Let’s consider this version of Employee:
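A hypothetical sketch of this version (bodies elided):

```csharp
public class Employee {
    // Behavior owned by the finance people
    public decimal ComputePay() { /* payroll rules */ return 0m; }

    // Behavior owned by the operational people
    public string ReportHours() { /* pending-work reporting rules */ return string.Empty; }
}
```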

The ComputePay() behavior is under the responsibility of the finance people and the ReportHours() behavior is under the responsibility of the operational people. Hence if a financial person needs a change to be implemented in ComputePay() we can assume this change won’t affect the ReportHours() method. Thus, according to the version of SRP that states “a class should have one reason to change”, it is wise to declare these methods in different dedicated modules. As a consequence a change in ComputePay() has no risk of affecting the behavior of ReportHours() and vice-versa. In other words we want these two parts of the code to be independent because they will evolve independently.

This is why Robert C. Martin wrote that SRP is about people: make sure that logic controlled by different people is implemented in different modules.

SRP and High-Cohesion

The SRP is about encapsulating logic and data in a class because they fit well together. Fitting well means that the class is cohesive, in the sense that most methods use most fields. Actually the cohesion of a class can be measured with the Lack of Cohesion Of Methods (LCOM) metric. See below an explanation of LCOM (extracted from this great Stuart Celarier placemat). What matters is to understand that if all methods of a class use all instance fields, the class is considered utterly cohesive and has the best LCOM score, which is 0 or close to 0.
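In a nutshell, the usual formulation (the one NDepend documents, if memory serves) is:

$$\mathrm{LCOM} = 1 - \frac{\sum_{f \in F} \lvert M_f \rvert}{\lvert M \rvert \times \lvert F \rvert}$$

where M is the set of instance methods of the class, F its set of instance fields, and M_f the subset of methods that access field f. If every method uses every field the ratio is 1 and LCOM is 0, the perfectly cohesive case.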

Typically the effect of an SRP violation is to partition a class’s methods and fields into groups with few connections. The fields needed to compute the pay of an employee are not the same as the fields needed to report pending work. This is why the LCOM metric can be used to measure adherence to SRP and take action. You can use the rule Avoid types with poor cohesion to track classes with poor cohesion between methods and fields.

SRP and Fat Code Smells

While we can hardly find an easy definition for what a responsibility is, we noticed that adhering to SRP usually results in classes with a good LCOM score. On the other hand, not adhering to SRP usually leads to the God class phenomenon: a class that knows too much and does too much. Such a god class is usually too large: violations of rules like Avoid types too big, Avoid types with too many methods, Avoid types with too many fields are good candidates for spotting god classes and refactoring toward a finer-grained design.

Guidelines to adhere to SRP

Here are a set of objective and concrete guidelines to adhere to SRP:

  • Domain classes must be isolated from Cross-Cutting Concerns: code responsible for persistence, validation, log, authentication, error handling, threading, caching…
  • When implementing your domain, favor POCO classes that do not have any dependency on an external framework. Note that a POCO class is not necessarily a fields and properties only class, but can implement logic/behavior related to its data.
  • Use your understanding of the Domain to partition code: logic related to different business functions should be kept separate to avoid interference.
  • Regularly check the Lack of Cohesion Of Methods (LCOM) score of your classes.
  • Regularly check for too large and too complex classes.

 

Identify .NET Code Structure Patterns with no Effort

The two pillars of code maintainability are automatic testing and clean code structure.

  • Testing is used to regularly challenge code correctness and detect regression early. Testing can be easily assessed with numbers like code coverage ratio and the amount of assertions tested.
  • A clean code structure prevents the phenomenon of spaghetti code, entangled code that is hard to understand and hard to maintain. However assessing the code structure cannot be achieved through numbers like for testing. Moreover the structure emerges from a myriad of details buried in many source files and thus appropriate tooling is needed.

For most engineers, the code dependency graph is the tool of choice to explore code structure. A boxes-and-arrows graph is intuitive and well adapted to visualizing a small number of dependencies. However, to visualize complex portions of code, the Dependency Structure Matrix (DSM) is better suited. See below the same set of 34 namespaces visualized with the NDepend Dependency Graph and the NDepend Dependency Matrix.

NDepend dependency graph

 

NDepend Dependency Matrix

If the concept of dependency matrix is something new to you, it is important to note that:

  • The Matrix headers’ elements represent graph boxes
  • The Matrix’s non-empty cells correspond to graph arrows. The numbers in the cells represent a measure of the coupling in terms of the number of methods and fields involved. In a symmetric matrix a pair of blue and green cells is symmetric because both cells represent the same thing: the blue cell represents A uses B and the green cell represents B is used by A.

Here is a 5 minutes introduction video if you are not familiar with the dependency matrix:

Clearly the graph is more intuitive, but apart from the two red arrows that represent two pairs of mutually dependent namespaces, this graph tells us little about the overall structure.

On the other hand the matrix algorithm naturally attempts to layer code elements, exhibits dependency cycles, and shows which elements are used a lot or not… Let’s enumerate some structural patterns that can be visualized at a glance with the dependency matrix:

Layers

One pattern that is made obvious by a DSM is layered structure (i.e acyclic structure). When the matrix is triangular, with all blue cells in the lower-left triangle and all green cells in the upper-right triangle, then it shows that the structure is perfectly layered. In other words, the structure doesn’t contain any dependency cycle.

On the right part of the snapshot, the same layered structure is represented with a graph. All arrows have the same left-to-right direction. The problem with the graph is that its layout doesn’t scale. Here, we can barely see the big picture of the structure. If the number of boxes were multiplied by 2, the graph would be completely unreadable. On the other hand, the DSM representation wouldn’t be affected; we say that the DSM scales better than the graph.

Notice that NDepend proposes 2 rules out of the box to control layering by preventing dependency cycles to appear: ND1400 Avoid namespaces mutually dependent and ND1401 Avoid namespaces dependency cycles.

Interestingly enough, most graph layout algorithms rely on the fact that a graph is acyclic. To compute the layout of a graph with cycles, these algorithms temporarily discard some dependencies to deal with a layered graph, and then append the discarded dependencies at the last step of the computation.

Cycles

If a structure contains a cycle, the cycle is displayed by a red square on the DSM. We can see that inside the red square, green and blue cells are mixed across the diagonal. There are also some black cells that represent mutual direct usage (i.e A is using B and B is using A).

The NDepend DSM comes with an Indirect Dependency option. An indirect dependency between A and B means that A is using something, that is using something, that is using something… that is using B. Below is shown the same DSM with a cycle, but in indirect mode. We can see that the red square is filled up with only black cells. It just means that given any elements A and B in the cycle, A and B are indirectly and mutually dependent.

Here is the same structure represented with a graph. The red arrow shows that several elements are mutually dependent. But the graph is not of any help to highlight all elements involved in the parent cycle.

Notice that in NDepend, we provide a button to highlight cycles in the DSM (if any). If the structure is layered, then this button has the effect of triangularizing the matrix and keeping non-empty cells as close as possible to the diagonal.

High Cohesion / Low-Coupling

The idea of high cohesion (inside a component) / low coupling (between components) is popular nowadays. But if one cannot measure and visualize dependencies, it is hard to get a concrete evaluation of cohesion and coupling. DSM is good at showing high cohesion. In the DSM below, an obvious squared aggregate around the diagonal is displayed. It means that the elements involved in the square have high cohesion: they are strongly dependent on each other. Moreover, we can see that they are layered since there is no cycle. They are certainly candidates to be grouped into a parent artifact (such as a namespace or an assembly).

On the other hand, the fact that most cells around the square are empty advocates for low coupling between elements of the square and other elements.

In the DSM below, we can see 2 components with high cohesion (upper and lower square) and a pretty low coupling between them.

While refactoring, having such an indicator can be pretty useful to know if there are opportunities to split coarse components into several more fine-grained components.

Too many responsibilities

The popular Single Responsibility Principle (SRP) states that a class shouldn’t have more than one reason to change. Another way to interpret the SRP is that a class shouldn’t use too many different other types. If we extend the idea to other levels (assemblies, namespaces and methods), certainly, if a code element uses dozens of other different code elements (at the same level), it has too many responsibilities. Often the term God class or God component is used to qualify such a piece of code.

DSM can help pinpoint code elements with too many responsibilities. Such code element is represented by columns with many blue cells and by rows with many green cells. The DSM below exposes this phenomenon.

Popular Code Elements

A popular code element is used by many other code elements. Popular code elements are unavoidable (think of the String class for example).

A popular code element is not a flaw. However it is advised that popular elements be interfaces and enumerations. This way consumers rely on abstractions and not on implementation details. The benefit is that consumers are broken less often because abstractions are less subject to change than implementations.

A popular code element is represented by columns with many green cells and by rows with many blue cells. The DSM below highlights a popular code element.

Something to notice is that when one keeps the code structure perfectly layered, popular components are naturally kept at a low level. Indeed, a popular component cannot de facto use many things: because popular components are low-level, they cannot use something at a higher level. This would create a dependency from low-level to high-level and this would break the acyclic property of the structure.

Mutual dependencies

You can see the coupling between 2 components by right clicking a non-empty cell, and select the menu Open this dependency.

If the opened cell was black as in the snapshot above (i.e. if A and B are mutually dependent) then the resulting rectangular matrix will contain both green and blue cells (and possibly black cells as well), as in the snapshot below.

In this situation, you’ll often notice a deficit of green or blue cells (3 blue cells for 1 green cell here). It is because even if 2 code elements are mutually dependent, there often exists a natural level order between them. For example, consider the System.Threading namespace and the System.String class. They are mutually dependent; they both rely on each other. But the matrix shows that Threading is much more dependent on String than the opposite (there are many more blue cells than green cells). This confirms the intuition that Threading is at a higher level than String.

An in-depth analysis of .NET Core 3.0 support for WPF and Winforms APIs

.NET Core 3.0 will be RTM soon and it supports WPF and Winforms APIs.

In my last post I explored the new .NET Core 3.0 APIs by comparing, with NDepend, the compiled bits of .NET Core 3.0 against those of .NET Core 2.2.

In this post I will compare .NET Core 3.0 Windows Forms (Winforms) and WPF APIs with .NET Framework 4.x.

I won’t keep the suspense going: .NET Core 3.0 support for Winforms and WPF APIs is almost complete; I found very few breaking changes.

I will now explain what I’ve done with NDepend to explore this API diff, and then dig into the results. If you are wondering How to port desktop applications to .NET Core 3.0 see Microsoft explanations here.

Comparing .NET Core 3.0 Winforms and WPF APIs vs. .NET Framework 4.x with NDepend

From the NDepend Start Page select the Compare 2 versions of a code base menu. Then use the Add assemblies in Folder button to add the .NET Framework assemblies from the folder C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319 and the Microsoft.WindowsDesktop.app nuget package assemblies (from the folder C:\Users\psmac\.nuget\packages\microsoft.windowsdesktop.app\3.0.0-preview-27325-3\ref\netcoreapp3.0 on my machine).

Comparing .NET Core 3.0 Winforms WPF APIs with .NET Framework

A minor difficulty was to isolate the exact set of assemblies to focus on. Here is the list of concerned assemblies I came up with:

Notice that to preserve the correspondence between APIs and assembly packaging, the attribute TypeForwardedToAttribute is massively used to delegate implementations.

Usage of TypeForwardedToAttribute

The few breaking changes

With the default NDepend rules about API breaking changes, I’ve only found 16 public types and 52 public methods missing. Here are the types:

16 types missing in WPF Winforms .NET Core 3.0 API

16 types missing out of a total of 4,095 public types, well done!

Number of public types per assemblies

The 52 public methods missing are (out of a total of 42,645 public methods):

Parent Assembly Name Parent Type Name Method name
System.Security.Principal.Windows System.Security.Principal.WindowsIdentity .ctor(String,String)
System.Security.Principal.Windows System.Security.Principal.WindowsIdentity Impersonate()
System.Security.Principal.Windows System.Security.Principal.WindowsIdentity Impersonate(IntPtr)
System.Security.Principal.Windows System.Security.Principal.IdentityReferenceCollection get_IsReadOnly()
System.Security.Permissions System.Net.EndpointPermission ToString()
System.Security.Permissions System.Security.HostProtectionException GetObjectData(SerializationInfo,StreamingContext)
System.Security.Permissions System.Security.Policy.ApplicationDirectory Clone()
System.Security.Permissions System.Security.Policy.ApplicationTrust Clone()
System.Security.Permissions System.Security.Policy.PermissionRequestEvidence Clone()
System.Security.Permissions System.Security.Policy.Site Clone()
System.Security.Permissions System.Security.Policy.StrongName Clone()
System.Security.Permissions System.Security.Policy.Url Clone()
System.Security.Permissions System.Security.Policy.Zone Clone()
System.Security.Permissions System.Security.Policy.GacInstalled Clone()
System.Security.Permissions System.Security.Policy.Hash Clone()
System.Security.Permissions System.Security.Policy.Publisher Clone()
System.Security.Cryptography.Pkcs System.Security.Cryptography.Pkcs.EnvelopedCms .ctor(SubjectIdentifierType,ContentInfo)
System.Security.Cryptography.Pkcs System.Security.Cryptography.Pkcs.EnvelopedCms .ctor(SubjectIdentifierType,ContentInfo,AlgorithmIdentifier)
System.Security.Cryptography.Pkcs System.Security.Cryptography.Pkcs.EnvelopedCms Encrypt()
System.Security.Cryptography.Pkcs System.Security.Cryptography.Pkcs.ContentInfo Finalize()
System.Security.Cryptography.Cng System.Security.Cryptography.ECDiffieHellmanCng FromXmlString(String)
System.Security.Cryptography.Cng System.Security.Cryptography.ECDiffieHellmanCng ToXmlString(Boolean)
System.Security.Cryptography.Cng System.Security.Cryptography.ECDsaCng FromXmlString(String)
System.Security.Cryptography.Cng System.Security.Cryptography.ECDsaCng ToXmlString(Boolean)
System.Security.Cryptography.Cng System.Security.Cryptography.RSACng DecryptValue(Byte[])
System.Security.Cryptography.Cng System.Security.Cryptography.RSACng EncryptValue(Byte[])
System.Security.Cryptography.Cng System.Security.Cryptography.RSACng get_KeyExchangeAlgorithm()
System.Security.Cryptography.Cng System.Security.Cryptography.RSACng get_SignatureAlgorithm()
System.Printing System.Printing.PrintQueue set_Name(String)
System.Printing System.Printing.IndexedProperties.PrintInt32Property op_Implicit(PrintInt32Property)
System.Printing System.Printing.IndexedProperties.PrintStringProperty op_Implicit(PrintStringProperty)
System.Printing System.Printing.IndexedProperties.PrintStreamProperty op_Implicit(PrintStreamProperty)
System.Printing System.Printing.IndexedProperties.PrintQueueAttributeProperty op_Implicit(PrintQueueAttributeProperty)
System.Printing System.Printing.IndexedProperties.PrintQueueStatusProperty op_Implicit(PrintQueueStatusProperty)
System.Printing System.Printing.IndexedProperties.PrintBooleanProperty op_Implicit(PrintBooleanProperty)
System.Printing System.Printing.IndexedProperties.PrintThreadPriorityProperty op_Implicit(PrintThreadPriorityProperty)
System.Printing System.Printing.IndexedProperties.PrintServerLoggingProperty op_Implicit(PrintServerLoggingProperty)
System.Printing System.Printing.IndexedProperties.PrintDriverProperty op_Implicit(PrintDriverProperty)
System.Printing System.Printing.IndexedProperties.PrintPortProperty op_Implicit(PrintPortProperty)
System.Printing System.Printing.IndexedProperties.PrintServerProperty op_Implicit(PrintServerProperty)
System.Printing System.Printing.IndexedProperties.PrintTicketProperty op_Implicit(PrintTicketProperty)
System.Printing System.Printing.IndexedProperties.PrintByteArrayProperty op_Implicit(PrintByteArrayProperty)
System.Printing System.Printing.IndexedProperties.PrintProcessorProperty op_Implicit(PrintProcessorProperty)
System.Printing System.Printing.IndexedProperties.PrintQueueProperty op_Implicit(PrintQueueProperty)
System.Printing System.Printing.IndexedProperties.PrintJobPriorityProperty op_Implicit(PrintJobPriorityProperty)
System.Printing System.Printing.IndexedProperties.PrintJobStatusProperty op_Implicit(PrintJobStatusProperty)
System.Printing System.Printing.IndexedProperties.PrintDateTimeProperty op_Implicit(PrintDateTimeProperty)
System.Printing System.Printing.IndexedProperties.PrintSystemTypeProperty op_Implicit(PrintSystemTypeProperty)
System.Printing System.Windows.Xps.XpsDocumentWriter raise__WritingProgressChanged(Object,WritingProgressChangedEventArgs)
System.Printing System.Windows.Xps.XpsDocumentWriter raise__WritingCompleted(Object,WritingCompletedEventArgs)
System.Printing System.Windows.Xps.XpsDocumentWriter raise__WritingCancelled(Object,WritingCancelledEventArgs)
System.Drawing System.Drawing.FontConverter Finalize()

Portability to .NET Core 3.0 analysis

Microsoft offers a Portability Analyzer tool to analyze the changes in desktop APIs that will break your desktop app. I’ve tested it on NDepend but I just got very coarse results. Did I miss something? At least it is mostly green 🙂

Portability Analyzer analysis on NDepend

I wrote last year a post named Quickly assess your .NET code compliance with .NET Standard; let me know in the comments if it is worth revisiting this post for desktop APIs. Btw, my guess is that desktop APIs won’t be part of .NET Standard vNext (since there is no plan to support them on all platforms) but I haven’t found any related info on the web.

Why migrate your desktop app to .NET Core 3.0?

It is great news that Microsoft embeds the good old desktop APIs in .NET Core 3.0 with such outstanding compatibility. It is worth noting that so far (February 2019) there is no plan to port Windows Forms and WPF to other platforms than Windows. So, what are the benefits of porting an existing application to .NET Core 3.0?

I found answers in this recent 30-minute Channel9 video How to Port Desktop Applications to .NET Core 3.0, at 5:12. Basically you’ll get more deployment flexibility, Core runtime and API improvements, and also better performance.

Microsoft promises not to urge anyone to port existing Winforms and WPF applications to .NET Core 3.0. However, for a Visual Studio extension shop like us, if it is decided that VS will run on .NET Core 3.0 in the future, we hope to be notified many months ahead. We discussed that on twitter with Amanda Silver in January 2019. It looks like they will take a decision this spring 2019. As a consequence, to support both past Visual Studio versions running on .NET Fx and new versions running on .NET Core 3, an extension will need to support both the .NET Fx and .NET Core 3 desktop APIs.