
Case Study: 2 Simple Principles to achieve High Code Maintainability

High Code Maintainability is the key to making both management and developers happy:

  • Maintainability lets a product evolve naturally at a sustained pace with controlled cost.
  • Maintainability lets developers add new features and improve existing ones without spending most of their time refactoring old, dusty code and fixing bugs.

After 16 years of development on our product NDepend (first release in April 2004!) we came to the conclusion that:

Highly Maintainable Code can be achieved through two simple, objective and verifiable principles: Layered Architecture and High Test Coverage Ratio

Layered Architecture prevents entangled code, the well-known spaghetti code phenomenon. Dependencies remain under control, and when it is time for the code to evolve, new classes and interfaces integrate naturally with existing ones.

High Test Coverage Ratio means that when code covered by tests gets refactored, existing tests get impacted. With little effort the developer discovers regressions and fixes them before they reach production and become bugs to fix. The more code is covered by tests, the more you’ll benefit from this shield.

When writing a tool for developers, the most satisfying part is to challenge the tool on its own code: this practice is named dogfooding. We just completely rewrote the dependency graph of NDepend, so let’s use this major refactoring as a case study. Then we’ll see how to automate the validation of these principles.

Case Study: Layered Architecture

Let’s first present the layered architecture principle and then the test coverage principle.

See below a graph of the 250+ classes, interfaces and enumerations used to implement the new dependency graph. An SVG vector dependency graph of the 2,500+ classes, methods and fields is available here.

The class GraphController is selected:

  • The blue classes are the ones directly used by GraphController
  • The light-blue classes are the ones indirectly used by GraphController (indirectly means used by a class used by a class … used by GraphController). Clearly GraphController relies on everything.
  • The red classes are the ones mutually dependent with GraphController.
The NDepend Dependency Graph used to visualize its own code

Several things can be said about how this code is structured:

  • This is not an API so we can use namespaces the way we want. Here namespaces implement the concept of components.
  • Box size is proportional to the number of lines of code. We can see that the namespace box sizes are well balanced. This is a good practice to avoid having a few monster components and tons of smaller ones.
  • The biggest component in terms of number of classes and lines of code is the implementation of the Undo/Redo system. More than 30 actions are implemented (expand/collapse, change GroupBy, select/unselect, generate a call graph…). These actions are relatively low level in the structure. While they act on the entire system they are not coupled with the controller, the UI rendering or the layout computation.
  • The two lowest components are Base and Model. Both contain little logic and are used by almost all other components.

In the future, whether we add new actions on the graph or decide to improve the layout somehow, this architecture won’t undergo drastic modifications. Thanks to this view it’ll be easy to decide in which component to add our new classes, or whether new components should be added and what they can and cannot use.

Ideally the GraphControl class shouldn’t be entangled with the GraphController class; these two classes have been developed together. See below the coupling graph between GraphController and GraphControl, obtained by double-clicking the red arrow between the two classes. It wouldn’t be difficult to introduce an interface to inject one implementation into the other, but we didn’t do it. This is the key when it comes to caring about maintainability: which move will offer the highest ROI? Not everything has to be perfect just for the sake of it. Experience shows that having only two classes entangled does not impact maintainability much. We estimated that spending our resources on satisfying the two principles has a better ROI in the long run.

Coupling Graph between GraphController class and GraphControl class


Case Study: High Test Coverage Ratio

The graph implementation is 90% covered by tests. The fact that there is a lot of UI code doesn’t mean it shouldn’t be well tested. And we didn’t spend a good part of our resources on writing tests just for the sake of it: we did it because we know from experience that it’ll pay off. Probably a few bugs will be reported, as for every 1.0 implementation, although beta test phases already caught some. But we are confident that it won’t take a lot of resources to fix them, and we can look forward to the future confidently (like properly supporting .NET 5, which will be released in November 2020).

The picture below shows all namespaces, classes and methods. Smaller rectangles are methods, and the color of each rectangle indicates how well a method is covered by tests. Clearly we tolerate some gaps in UI code, while non-UI code like the Undo/Redo action implementations is 100% covered. Here also, experience tells us how to balance our resources: everything does not have to be perfect to achieve high maintainability.

NDepend Graph Implementation 90% covered by tests

In terms of lines of code the NDepend graph is not even 5% of the entire product; it is one tool in the toolset. The worst-case scenario would be that each tool implementation regularly spits out bugs: all our resources would be spent fixing them, we couldn’t continue adding value to the product, and the business would probably die at some point. Not to mention the frustration of users facing a buggy product.

Each year we fix a few dozen bugs that each impact a few users, but doing so doesn’t take more than a tiny percentage of our overall development resources. The overall code base is 86.5% covered by tests and is entirely layered: maintenance doesn’t cost us much.

Typically at this point comes the remark: code coverage is not enough, results must be asserted by unit tests. And indeed, if nothing gets asserted nothing gets tested, even if the code is entirely covered by tests. We want tests to fail when something goes wrong. In the next post, Case Study: Complex UI Testing, I explain how millions of assertions get checked while running our test suite against the graph implementation.

Automatically Validate Layered Architecture and High Test Coverage Ratio

NDepend offers hundreds of default code rules but only 4 of them are used to validate these key points:
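To give an idea, a coverage-threshold rule can be sketched in a few lines of CQLinq. This is an illustrative sketch, not one of the actual default rules: the 90% threshold and the UI-namespace exclusion are assumptions.

    // Warn when a non-UI type falls below 90% test coverage.
    // PercentageCoverage requires coverage data to be imported first.
    warnif count > 0
    from t in Application.Types
    where t.PercentageCoverage < 90
       && !t.ParentNamespace.Name.Contains(".UI")
    orderby t.PercentageCoverage ascending
    select new { t, t.PercentageCoverage, t.NbLinesOfCode }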

The fourth rule, Avoid namespaces mutually dependent, helps a lot when layering a large super-component. In this situation the first thing to do is to make sure that no pair of components use each other. For each such pair of namespaces matched, this rule relies on a heuristic to tell which type should not use which other type, and the same at method level. A technical-debt estimation is also given in terms of the development effort it’ll cost to fix each pair. Here it says that 11 man-days (8 hours a day) would have to be spent if someone decided to layer the NHibernate code base. Unfortunately this is not possible, because it would break the thousands of client code bases bound to it. Let’s also note that an interest estimation is given as well: how much development effort it takes per year to leave the issues unfixed. Here this rule estimates that not fixing all those pairs of entangled namespaces costs the development team 5 man-days per year.

Avoid Namespaces Mutually Dependent, with advice on what to do and cost estimations
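In spirit, the detection part of this rule can be sketched in a few lines of CQLinq. This is a simplified sketch; the actual default rule adds the heuristics and the debt and interest estimations described above.

    // Report each pair of namespaces that use each other, once per pair.
    warnif count > 0
    from n in Application.Namespaces
    from m in n.NamespacesUsed
    where m.IsUsing(n)
       && string.CompareOrdinal(n.Name, m.Name) < 0
    select new { n, m }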

These rules can be validated during the build process (Azure DevOps / TFS, Jenkins, TeamCity, Bamboo, SonarQube…) and the team can know when newly written code diverges from these two maintainability goals.


Conclusion: Objective, Verifiable, Simple

What is interesting with these two simple concepts, layering and code coverage, is that they can be objectively applied, validated and measured. Last year, in 2019, I wrote a blog post series on SOLID principles and there was so much debate about how to apply them in the real world. SOLID principles are a great way to improve our understanding of Object-Oriented Programming and how encapsulation, abstraction, polymorphism, inheritance… should and should not be used. But when it comes to writing maintainable code, everyone has a different opinion.

If it is decided that the code structure should be layered, there is not much debate about which parts should be abstracted from which others. If a class A should use a class B and B is in a higher layer than A, an interface IB must somehow be created at A’s level to inject the B implementation into A without breaking the layering.
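Here is a minimal C# sketch of that move, using the hypothetical A, B and IB names from the paragraph above:

    // Lower layer: A depends only on the abstraction IB, declared at its own level.
    public interface IB { void Process(); }

    public class A {
        private readonly IB _b;
        public A(IB b) { _b = b; }          // the B implementation gets injected
        public void Run() { _b.Process(); }
    }

    // Higher layer: B references the lower layer, never the other way around.
    public class B : IB {
        public void Process() { /* actual behavior */ }
    }

    // A composition root wires the layers together: new A(new B()).Run();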

These 2 concepts emerged over the years because we had a pressing need to produce maintainable code. What I really like is that they are simple. And KISS (Keep It Simple Stupid) is a great principle in software engineering.

If a third principle should be added it would definitely be about user documentation: we offer free email support to users but we also offer tons of embedded and online documentation. Every time a question starts being asked a few times, we make sure that users can get the answer immediately, both from a tooltip (or a smart UI change) and from the online documentation. Some other ISVs decide to make money from support. Personally I don’t find this fair, because it is a clear incentive to produce rotten documentation and hence friction for the user.

How did we obtain the images in this post

Let’s show that all the images in this post can be obtained within a few clicks.

  • First let’s search for Graph Panel in the entire NDepend code base (the matched elements get zoomed automatically).
  • Then let’s reset the metric view with NDepend.UI.Graph.* namespaces to get the colored treemap.
  • Then let’s go back to the graph and only keep NDepend.UI.Graph.* namespaces matched by the search.
  • Then un-group by parent assembly to get a graph made of namespaces only.
  • Then change the layout direction from Top to Bottom to have a nicer layout.
  • Then expand all namespaces to get all classes.
  • Finally expand all classes to get all methods and fields.
Using the NDepend Graph to obtain a clear view of the implementation of the Graph


Not planning now to migrate your .NET 4.8 legacy is certainly a mistake

2020 will see the completion of the massive remodeling of the .NET platform that Microsoft initiated in November 2014 with the introduction of .NET Core 1, with the promise of an open-source, multi-platform and modernizable framework (thanks to the absence of rock-solid backward-compatibility constraints): everything that the .NET Framework isn’t. This U-turn in Microsoft’s plans for .NET is part of the new Microsoft strategy initiated by Satya Nadella, who became CEO of Microsoft in February 2014, succeeding Steve Ballmer.

.NET 5 will be released in November this year. Within 6 years Microsoft will have pulled off a complete platform shift like no other. The .NET Core brand was there to make clear that two .NET platforms were living side by side. But now we know that the .NET Framework 4.8 won’t evolve anymore and that all Microsoft efforts will be put into the continuation of .NET Core, with well-scheduled releases ahead. Let’s use the term .NET OSS to designate .NET Core, .NET 5, .NET 6 and beyond in the remainder of this post.

We can expect an early beta of .NET 5 before July 2020. Today in January 2020 the .NET 5.0 milestone is 72% achieved.

For now, the way to get prepared for .NET 5 and later is to migrate to .NET Core 3.1. Despite the branding change from .NET Core 3.1 to .NET 5, it is no mystery that .NET 5 will be mostly based on the current .NET Core platform.

The cost of migration from .NET 4.8 to .NET OSS can get pretty high, especially if the legacy relies on some deprecated APIs (like WCF, WWF, WebForms or AppDomain). Thus it may seem attractive to stick with .NET 4.8 if your application is intended to run on Windows only.

Why is it a bad idea not to anticipate the migration now?

.NET 4.8 won’t evolve, but security patches will be provided for as long as we can foresee. However, we can predict that .NET 4.8 will quickly be considered a thing of the past:

  • Developer mindset: .NET OSS and also C# will evolve. There will come a point where it’ll feel pretty awkward for .NET programmers working with 4.8 not to be able to use all the new goodies. Could you imagine programming with C# 3 nowadays?
  • Third-party libraries: the increasing gap between .NET 4.8 and .NET OSS will push open-source library authors toward .NET OSS. The cost of maintaining two code bases is too high for an OSS developer. If your .NET 4.8 application consumes some OSS libraries, not migrating it will put you in an awkward situation where you’ll have to maintain the consumed OSS code yourself! Certainly serious commercial libraries will be maintained on both platforms for a longer period of time, but not forever.
  • Performance: we can expect more and more performance improvements in .NET OSS.
  • Tooling: tools will continue to evolve, and with time fewer and fewer tools will support .NET 4.8 applications.

Recently I discussed with Jean-Baptiste Evain, who develops the OSS library Cecil. Jb is also responsible for UnityVS at Microsoft. Here at NDepend we’ve been relying on Cecil for more than a decade. Cecil processes the bytes of compiled .NET assemblies and thus will obviously benefit from Span&lt;T&gt;, which is only available on .NET Core. This concrete situation illustrates well the points mentioned above:

  • Jb is not enthusiastic about having to maintain two versions of Cecil: one relying on Span&lt;T&gt; and the .NET 4.8 one.
  • Even though these two versions will co-exist, because it is too early to discard the .NET 4.8 version of Cecil used by many serious projects, it is a matter of a few years until the .NET 4.8 version of Cecil gets deprecated.
  • Without Span&lt;T&gt;, the .NET 4.8 version of Cecil will be slower (see the sketch after this list).
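To make the Span&lt;T&gt; argument concrete, here is a small illustrative sketch (not Cecil’s actual code) of allocation-free slicing over raw assembly bytes:

    using System;

    class SpanSketch {
        // Read a length-prefixed blob out of an assembly image: with Span<T>
        // the slice is a view over the original bytes, no byte[] copy is allocated.
        static ReadOnlySpan<byte> ReadBlob(ReadOnlySpan<byte> image, int offset) {
            int length = image[offset];               // 1-byte length prefix
            return image.Slice(offset + 1, length);   // zero-copy slice
        }

        static void Main() {
            byte[] rawImage = { 3, 0x41, 0x42, 0x43 };
            Console.WriteLine(ReadBlob(rawImage, 0).Length);   // prints 3
        }
    }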

Our case

NDepend is still running on .NET 4.8. NDepend is a CI tool, a standalone UI tool, an Azure DevOps extension and a Visual Studio extension. Developing an extension is a sensitive situation because we need to align our platform with the platform of the host. VS is such a massive application that I don’t expect it to run on .NET 5 in 2021. On the other hand, VS is evolving so quickly nowadays that this possibility is not totally excluded. It is also possible that Microsoft takes an incremental approach: the main VS process (devenv.exe) remaining on .NET Fx 4.8 for a while, while child processes run on .NET OSS (VS runs with quite a few child processes!).

At this point the reasonable move for us is to anticipate the migration to .NET OSS, mostly by compiling as much code as possible against .NET Standard 2.0, which is supported by both .NET Fx 4.7.2+ and .NET Core. We also need to make sure that our WPF and Winforms code will be easy to move (which shouldn’t be a problem since most of the WPF/Winforms APIs are supported by .NET Core 3.1). We are also mulling over having our own child process(es), but all the UI part must remain in the main VS process.

We also keep in mind that for a few years it will be tricky to support both future VS versions running on .NET OSS and previous VS versions running on .NET Fx 4.8.

Conclusion

Those like us still working on a large .NET 4.8 legacy are entering a turbulence zone for the years to come. However, for all the reasons explained above, we can expect that before long (2023? 2025?) successful applications still running on .NET 4.8 will be the exception. Not anticipating the legacy migration now is likely a strategic mistake.

4 Predictions for the Future of .NET

In May 2019, Microsoft officially announced .NET 5, the future of .NET: it will be based on all the .NET Core work already achieved. Here is the schedule announced:

On one hand, the future of .NET has never been so bright. On the other hand, this represents a massive move for all .NET development shops, especially for those that still target .NET Framework 4.x, which won’t evolve anymore. But not everything is clear from this announcement. Such a massive move will have many collateral consequences that we can only guess at for now. Certainly many points are not yet set in stone and are still debated.

Hence, for large .NET legacy code bases, some predictions must be made now to plan a seamless and timely migration toward the future of .NET. So let’s make some predictions: it’ll be interesting to come back in a few years and see how good or bad they were.

.NET Standard won’t evolve much

.NET Standard was introduced as a common API set that all .NET flavors must implement. .NET Standard superseded PCL (Portable Class Libraries). Now that several .NET frameworks will be unified on the .NET Core base, and that .NET Framework 4.x won’t support future versions of .NET Standard anymore, it sounds like the need for more .NET Standard APIs will decrease significantly. Actually, .NET Framework 4.8 doesn’t even support the latest .NET Standard 2.1: “.NET Framework 4.8 will remain on .NET Standard 2.0 rather than implement .NET Standard 2.1”.

However, .NET Standard is certainly not dead yet: it is, and will be for years to come, an essential tool to compile code into portable components that can be reused across several .NET flavors. But with this unification process, the future of .NET Standard is compromised.

Visual Studio will run on .NET 5 or 6 (and in an x64 process)

It has to. Imagine the consequences if, 3 years from now (2019 Q4), the main Microsoft IDE for professional .NET development still ran on .NET Framework v4.8:

  • Engineers working on VS would lack access to all the new .NET APIs, performance improvements and language improvements. They would remain locked in the past.
  • As a consequence they wouldn’t use their own tool (dogfooding) and dogfooding is a key aspect of developing tools for developers.
  • Overall, the message sent wouldn’t be acceptable to users.

On the other hand, if you know a bit about how VS works, imagine how massive this migration is going to be. For more than a decade there have been a lot of complaints from the community about Visual Studio not running in a 64-bit process. See some discussions on reddit here for example. If I remember correctly, this x64 request was the most voted one when VS feedback was still handled by UserVoice. Some technical explanations have been provided by Microsoft, like the ones provided 10 years ago! If in 2019 Visual Studio still doesn’t run in an x64 process, this says a lot about how large and complex such a migration is.

It seems inevitable that this time the Visual Studio legacy will evolve toward what will be the future of .NET. One key benefit will be to run in an x64 process and have plenty of memory to work with very large solutions. Another implication is that all Visual Studio extensions, like our extension, must evolve too. Here at NDepend we are already preparing for it, but it will take time, not because we’ll miss many APIs (we’ll mostly miss AppDomain) but because:

  • We depend on some third-parties that we’d like to get rid of to have full control over our migration, and overall code.
  • For several years we’ll have to support both future Visual Studio versions and Visual Studio 2019, 2017 and maybe 2015, which run on .NET Framework v4.x (btw we still support VS 2013/2012/2010, but this support will have to be discarded to benefit from reused .NET Standard DLLs).

We cannot know yet whether Visual Studio vNext will run on .NET 5 or whether it’ll take a few more years until we see it running on .NET 6.

Btw, here are 2 posts, Quickly assess your .NET code compliance with .NET Standard and An in-depth analysis of .NET Core 3.0 support for WPF and Winforms APIs, that can help plan your own legacy migration.

.NET will propose a cross-platform UI Framework: WPF or a similar XAML UI Framework

On October 4, 2019 Satya Nadella revealed why Windows may not be the future of Microsoft’s business. In August 2019 Microsoft provided a .NET Cross Platform UI Framework Survey. Clearly a .NET cross-platform UI framework is wanted: the community is asking for it. So far Microsoft has closed the debate about WPF: WPF won’t be multi-platform.

Let’s also be crystal clear. This (WPF cross platform) is a very hard project. If the cost was low, this would be a very different conversation and very likely a different outcome. We have enough trouble being compatible with OpenSSL and that’s just one library.  Rich Lander – Dec 5, 2018

But given the immense benefits that WPF running cross-platform would offer, I wouldn’t be surprised to see WPF become cross-platform within the next few years, or at least a similar XAML UI framework. Moreover, WPF is now open-source, so who knows…

The Visual Studio UI is mostly based on WPF, hence one of the benefits of having WPF cross-platform would be a unique cross-platform Visual Studio: the same way Microsoft is now unifying the .NET frameworks, they could unify the Visual Studio suite into a single cross-platform product.

Xamarin Forms and Avalonia are also natural candidates to become the .NET cross-platform UI framework. But it seems those frameworks don’t receive enough love from the community; this is my subjective feeling. Also, we have to keep in mind that Microsoft did a survey and that the community is massively asking for it.

Blazor is promised a bright future

If you didn’t follow the recent Blazor evolution, the promises of this technology are huge:

  • Run .NET code in all browsers (like Silverlight)
  • with no browser plugin needed (unlike Silverlight)
  • with near-native performance
  • with components compiled to a compact binary format

This is all possible thanks to the WebAssembly (Wasm) format supported by most browsers.

WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable target for compilation of high-level languages like C/C++/Rust, enabling deployment on the web for client and server applications.
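To give an idea of the programming model, here is what a minimal Blazor component looks like (a counter sketch in Razor syntax):

    @* Counter.razor: C# handles a browser event, no JavaScript involved. *@
    <button @onclick="Increment">Clicked @count times</button>

    @code {
        private int count;
        private void Increment() => count++;
    }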

Blazor was initially a personal project created by Steve Sanderson from Microsoft. It was first introduced during NDC Oslo in July 2017: the video is worth watching; also read how enthusiastic the comments are. However, Blazor is not yet finalized and still has some limitations: it doesn’t yet offer a decent debugging experience, and the application download size (a few MBs) is still too large because dependencies have to be loaded too. Both issues are currently being addressed (see here for debugging and here for download size; runtime code will be trimmed and cached, and usage of a CDN (Content Distribution Network) is mentioned).

The community is enthusiastic, the technology is getting mature and there is no technological or political barrier in sight: the Blazor future looks bright. Don’t miss the Blazor FAQ to learn more.


Log4net vs NLog: A Comparison of How They Affect Codebases

Ah, the old “versus” Google search.  Invariably, you’re in the research stage of some decision when you type this word into a search engine.  Probably not something like Coke vs Pepsi.  Maybe “C# vs Java for enterprise projects” or “angular vs react.”  Or if you landed here, perhaps you’re looking at “log4net vs NLog.”

With a search like this, you expect a certain standard script.  The writer should describe each one anecdotally, perhaps with a history.  Then comes the matrix with a list of features and checks and exes for each one, followed by a sober list of strengths and weaknesses.  Then, with a flourish, I should finish with a soggy conclusion that it really depends on your needs, but I maybe kinda sorta like one better.

I’m not going to do any of that.

NDepend and .NET Fx v4.7.2: an extension method collision and how to solve it easily

In Oct 2017 I wrote about the potential collision problem with extension methods. At that time the .NET Framework 4.7.1 had just been released with the new System.Linq.Enumerable.Append() extension method, which collides with our own NDepend.API Append() extension method with the same signature.

The problem was solved easily because just one default rule consumed our Append() extension method: we only had to refactor that rule to call the method as a static method instead of an extension method: ExtensionMethodsEnumerable.Append(...)
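Here is a small sketch of the collision and of the workaround (the NDepend.Helpers namespace is hypothetical, used just to bring our extension class into scope):

    using System.Collections.Generic;
    using System.Linq;         // brings System.Linq.Enumerable.Append(), new in .NET Fx 4.7.1
    using NDepend.Helpers;     // hypothetical namespace exposing ExtensionMethodsEnumerable

    class Demo {
        IEnumerable<int> M(IEnumerable<int> seq) {
            // return seq.Append(42);   // ambiguous between the two Append(): error CS0121
            return ExtensionMethodsEnumerable.Append(seq, 42);   // explicit static call compiles
        }
    }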

Unfortunately, with the recent release of .NET Framework 4.7.2, the same problem just happened again, this time with the System.Linq.Enumerable.ToHashSet() extension method:

This time 22 default code rules rely on our ToHashSet() extension method. This method is used widely because it is often the cornerstone of significant performance improvements. But this means that after installing .NET Fx v4.7.2, 22 default rules break.

This time the problem is not solved easily by calling our ExtensionMethodsSet.ToHashSet&lt;TSource&gt;(this IEnumerable&lt;TSource&gt;) extension method as a static method, because in most of these 22 rules’ source code, changing the extension method call into a static method call requires a few brain cycles. Moreover, it makes the rules’ source code less readable: for example, the first snippet below needs to be transformed into the second:

We wanted a straightforward and clean way for NDepend users to solve this issue in all their default-or-custom code rules. The solution is the new extension method ToHashSetEx().

Solving the issue on an existing NDepend deployment is now as simple as replacing .ToHashSet() with .ToHashSetEx() in all textual files that contain the user code rules and code queries (the files with extensions .ndproj and .ndrules).
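Under the hood, the renamed extension can be sketched like this (an assumed implementation; what matters is that the distinct name cannot collide with System.Linq):

    using System.Collections.Generic;

    public static class ExtensionMethodsSet {
        public static HashSet<TSource> ToHashSetEx<TSource>(this IEnumerable<TSource> source) {
            return new HashSet<TSource>(source);   // same behavior as ToHashSet()
        }
    }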

We just released NDepend v2018.1.1 with this new extension method ExtensionMethodsSet.ToHashSetEx&lt;TSource&gt;(this IEnumerable&lt;TSource&gt;). Of course all default rules and generated queries now rely on ToHashSetEx(), and a smart error message is now shown to the user in such a situation:

We hesitated between ToHashSetEx() and ToHashSet2() but we are confident that this problem won’t scale (more explanation on suffixing a class or method name with Ex here).

Actually, we could have detected this particular problem earlier, in October 2017, because Microsoft had claimed that the .NET Fx would ultimately support .NET Standard 2.0, and .NET Standard 2.0 already exposed this ToHashSet() extension method. So this time we analyzed both C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\netstandard.dll and NDepend.API.dll to double-check with this code query that there is no remaining risk of extension method collision:

We find back both the Append() and ToHashSet() collisions, and since NDepend.API is not concerned with queryables, there is no more risk of collision:
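For readers without NDepend at hand, a rough standalone equivalent of this double-check can be sketched with reflection (paths are illustrative, and loading a reference assembly like netstandard.dll this way may require a metadata-only load in practice):

    using System;
    using System.Linq;
    using System.Reflection;
    using System.Runtime.CompilerServices;

    class CollisionCheck {
        static string[] ExtensionMethodNames(Assembly asm) =>
            asm.GetTypes()
               .Where(t => t.IsAbstract && t.IsSealed)   // static classes only
               .SelectMany(t => t.GetMethods(BindingFlags.Public | BindingFlags.Static))
               .Where(m => m.IsDefined(typeof(ExtensionAttribute), false))
               .Select(m => m.Name)
               .Distinct()
               .ToArray();

        static void Main() {
            var std = Assembly.LoadFrom(@"netstandard.dll");   // illustrative path
            var api = Assembly.LoadFrom(@"NDepend.API.dll");   // illustrative path
            foreach (var name in ExtensionMethodNames(std).Intersect(ExtensionMethodNames(api)))
                Console.WriteLine(name);   // would print Append and ToHashSet
        }
    }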


Quickly assess your .NET code compliance with .NET Standard

Yesterday evening I had an interesting discussion about the feasibility of migrating parts of the NDepend code to .NET Standard to ultimately run it on .NET Core. We’re not there yet, but it might make sense to run at least the code analysis on non-Windows platforms, especially for the NDepend clones CppDepend (for C++), JArchitect (for Java) and others to come.

Then I went to sleep (as every developer knows, the brain keeps coding hard while sleeping), then this morning I went for an early-morning jog and it struck me: NDepend is the perfect tool to assess some .NET code’s compliance with .NET Standard, or with any other library actually! As soon as I was at my machine, I did a proof of concept in less than an hour.

The key is that the .NET Standard 2.0 types and members are all packed in a single assembly, netstandard.dll v2.0, which can be found under C:\Program Files\dotnet\sdk\NuGetFallbackFolder\netstandard.library\2.0.3\build\netstandard2.0\ref\netstandard.dll (on my machine). A quick analysis of netstandard.dll with NDepend shows 2,317 types in 78 namespaces, with 24,303 methods and 884 fields. Let’s point out that netstandard.dll doesn’t contain any code: it is a standard, not an implementation. The 68K IL instructions represent the IL code for throw null, which is the method body of all non-abstract methods.
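In other words, every non-abstract method in netstandard.dll conceptually looks like this (hypothetical Adder class for illustration):

    public static class Adder {
        // A reference-assembly method: the signature is all that matters,
        // the placeholder body is never meant to run.
        public static int Add(int x, int y) => throw null;
    }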

.NET Standard 2.0 analyzed by NDepend

(Btw, I am sure that if you read this you have an understanding of what .NET Standard is, but if anything is still unclear, I invite you to read this great article my friend Laurent Bugnion wrote 3 days ago: A Brief History of .NET Standard)

Given that, what struck me this morning is that to analyze some .NET code’s compliance with .NET Standard, I’d just have to include netstandard.dll in the list of my application assemblies and write a code query that filters the dependencies the way I want. Of course, to put this idea to the test, I wanted to explore the NDepend code base’s compliance with .NET Standard:

NetStandard assembly included in the NDepend assemblies to analyze

The code query was pretty straightforward to write. It is written in a way that:

  • it is easy to reuse it to analyze compliance with any library other than .NET Standard,
  • it is easy to explore the compliance and the non-compliance with a library in a comprehensive way, thanks to the NDepend code query result browsing facilities,
  • it is easy to refactor the query to query more; for example, below I refactor it to assess the usage of third-party code that is not .NET Standard compliant (see the sketch after this list).
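In spirit, it amounts to something like this simplified CQLinq sketch (assumed helpers; the actual query whose result is shown below also browses members, not just types):

    // For each application type, list the third-party types it uses
    // that are NOT declared in netstandard.dll.
    from t in Application.Types
    let nonStandard = t.TypesUsed.Where(u =>
            u.IsThirdParty && u.ParentAssembly.Name != "netstandard")
    where nonStandard.Any()
    select new { t, nonStandard }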

The result of the actual query looks like this, and IMHO it is pretty interesting. For example, we can see at a glance that NDepend.API is almost fully compliant with .NET Standard, except for its usage of System.Drawing.Image (each “1 type” matched is actually the Image type) and its usage of code contracts.

NDepend code base compliance with .NET standard

For a more intuitive assessment of compliance with .NET Standard we can use the metric view, which highlights the code elements matched by the currently edited code query.

  • Unsurprisingly, NDepend.UI is not compliant at all,
  • The portions of NDepend.Core that are not compliant with .NET Standard are well delimited (and I know it is mostly because of some UI code here too, which we consider Core because it is reusable in a variety of situations).

With this information it’d be much easier to plan a major refactoring to segregate the .NET Standard compliant code from the non-compliant code, and especially to anticipate the hot spots that will be painful to refactor.

The code query to assess compliance can be refactored at will. For example, I found it interesting to see which non-compliant third-party code elements were used the most, so I refactored the query this way:

Unsurprisingly, non-.NET-Standard-compliant UI code pops up first:

.NET Standard non-compliant third-party code usage

There is no limit to refactoring this query to your own needs, like assessing the usage of non-compliant code excluding UI code, for example, or assessing the usage of code not compliant with ASP.NET Core 2 (by changing the library).

Hope you’ll find this content useful to plan your migration to .NET Core and .NET Standard!