12 Visual Studio Debugging Productivity Tips

In this post we assume the reader knows the basics of debugging with Visual Studio:

  • F5 to start running with the debugger
  • F9 to toggle a breakpoint on the current line
  • F5 to resume execution until the next breakpoint is hit
  • F11 to step into the function (if the instruction pointer points to a function call)
  • F10 to step over the function (if the instruction pointer points to a function call)
  • Shift+F11 to step out of the executed function
  • Pause execution
  • Attach to Process
  • Quick watch an element in source code with mouse hover
  • Debug windows: Locals, Watch, Immediate, Modules, Call Stack, Exception Settings

Many developers handle their debugging sessions with this powerful-enough knowledge kit. However the Visual Studio debugging tools have much more to offer. Here is a list of Visual Studio debugging productivity tips. Note that these tips and shortcuts have been validated with the Visual Studio 2019 16.6 EN-US edition with no extension installed.

1) Run to Cursor

With the shortcut Ctrl+F10 you can tell the debugger to run until the line pointed to by the cursor.

Run to Cursor Ctrl+F10

2) Run through here with a mouse click

When hovering over the source code while debugging, a green Run execution through here glyph appears. This glyph can be clicked.

Run Execution Through Here with a Mouse Click

3) Set next statement to here

The Run execution through here green glyph can be transformed into Set next statement to here by holding the Ctrl key. It is different from Run execution through here because the statements in between are not executed. Hence in the small animation below we can see in the Watch window that the reference obj remains null: the MyClass constructor in between hasn’t been executed.

Set next statement to here

4) Data breakpoint: Break when value changes

If you set a breakpoint on a non-static property setter, it will be hit when the property value changes on any object. The same behavior can be obtained for a single object thanks to the Locals (or Watch) window right-click menu Break When Value Changes.

This facility is illustrated by the animation below. The hit occurs only when obj2.Prop is changed, not when obj1.Prop is changed.

Note that a data breakpoint is bound to a live object during a debugging session. Hence it gets lost once the debugged process stops, and it cannot be reused during future debugging sessions.

Note that the menu Break When Value Changes is also available when right-clicking a field in the Locals window, but unfortunately the debugger doesn’t break on field changes. I am not sure whether it’s a bug or a feature not yet implemented.

Data breakpoint: break when value changes

5) Conditional breakpoint

A condition can be attached to a breakpoint to break only in a certain scenario. In the animation below we define a breakpoint with the condition i > 6 within the loop. Then we click Continue and can see that once the breakpoint is hit, the value of i is actually 7.

Conditional Breakpoint
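
For reference, here is a minimal sketch (hypothetical names, not the exact code of the animation) of the kind of loop used above. A breakpoint set on the loop body with the condition i > 6 skips the first seven iterations; the same loop also serves to illustrate the trace breakpoint of the next tip.

    using System;

    class ConditionalBreakpointDemo
    {
        static void Main()
        {
            for (int i = 0; i < 10; i++)
            {
                // Set a breakpoint on the next line with the condition "i > 6":
                // the debugger first breaks when i reaches 7.
                Console.WriteLine(i);
            }
        }
    }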

6) Trace breakpoint

Halting the program execution is the most common action upon a breakpoint hit. However you can choose instead to print some traces in the Output window without (or with) halting. This possibility is illustrated by the animation below, where we trace the value of i from 0 to 9 in the Output window. Notice that a trace breakpoint is shown with a diamond shape in the code editor gutter.

Note that both a condition and a trace action can be specified on a breakpoint.

Trace Breakpoint

7) Track Objects that Are Out-Of-Scope

In the Watch window objects are tracked by the name of their references in the currently executed scope. However when such a tracked reference goes out of scope, it becomes meaningless in the context of the Watch window and it gets disabled, even though the referenced object is still alive.

There are many situations where we’d like to continue tracking the state of an out-of-scope object. To do so, right click such reference in the Watch window, click the menu Make Object ID and add $1 in the items to watch (or $2 or $3… depending on how many object IDs you’ve already created).

The animation below shows how to track the state of an out-of-scope object through a property getter that returns the current date-time as a string. It shows well that when the reference obj goes out of scope in the context of Fct(), the obj item to watch gets disabled while $1.Prop still gets updated.

Tracking an object whose reference goes out-of-scope
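
As a sketch, the demo could look like the following (hypothetical names, not the exact code of the animation). Right-click obj in the Watch window, click Make Object ID, then watch $1.Prop: it keeps refreshing even after Fct() returns.

    using System;

    class MyClass
    {
        // Returns the current date-time as a string, so each Watch refresh
        // shows the object is still alive and its state still observable.
        public string Prop => DateTime.Now.ToString("HH:mm:ss.fff");
    }

    class Program
    {
        static void Fct()
        {
            var obj = new MyClass(); // make an Object ID ($1) from this reference
        }   // here obj goes out of scope, but $1.Prop remains watchable

        static void Main()
        {
            Fct();
            Console.ReadLine(); // keep the process alive so $1 can be observed
        }
    }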

8) View values returned by functions

The value returned by a function is sometimes ignored by the source code. Or sometimes this value is just not obviously accessible at debug time.

Such a returned value can be shown in the Debug > Windows > Autos window. The pseudovariable $ReturnValue can also be used in the Immediate and Watch windows to view the value returned by the last function call.

Note that the menu Debug > Windows > Autos is available only when the Visual Studio debugger is attached to a process and the program is halted by the debugger.

View the value returned by a function
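
A minimal sketch of the situation (hypothetical names): the value returned by Sum() is ignored by the source code, yet it can still be inspected.

    int Sum(int a, int b) => a + b;

    void Run()
    {
        Sum(3, 4); // the returned value is ignored by the source code
        // While stepping over the line above and stopping on the next one,
        // the Autos window shows the returned value, and evaluating
        // "$ReturnValue" in the Immediate or Watch window displays 7.
    }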

9) Reattach To Process

Since Visual Studio 2017 the very handy Reattach to process (Shift+Alt+P) facility is proposed. Once you’ve attached the debugger to a process, Visual Studio remembers it and proposes to re-attach the debugger to the same process. Same is in italics because there is a heuristic here about the process identity:

  • If the process you’ve been attached to is still alive, Reattach to process re-attaches to it.
  • Else Visual Studio attempts to find a single process with the same name and re-attaches the debugger to it.
  • If several processes are found with this name, the Attach to Process dialog is opened, showing only those processes with the same name.
  • If no process with this name can be found, the Attach to Process dialog is shown.
Reattach To Process

Reattach To Process also works with debug sessions involving multiple processes. In this situation Visual Studio attempts to find all processes it has been attached to, with the same heuristic explained above.

10) No-Side-Effect evaluation in Immediate Window and in the Watch Window

Sometimes, evaluating an expression in the Immediate or Watch window changes some state. This behavior is often undesirable: you don’t want to corrupt the state of your debugged program just because you needed to evaluate the value of an expression. This situation is known as a Heisenbug, a pun on the name of Werner Heisenberg, the physicist who first asserted the observer effect of quantum mechanics, which states that the act of observing a system inevitably alters its state.

To avoid changing any state you can suffix your expression with ,nse (No Side Effect). This possibility is illustrated by the animation below (watch the _State value changing or not in the Watch window):

No Side Effect expression evaluation
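
Here is a sketch of what such a side-effecting function can look like (hypothetical names, in the spirit of the animation):

    static int _State;

    static int SideEffectFct()
    {
        _State++;        // evaluating this function mutates _State...
        return _State;
    }

    // In the Immediate window:
    //   SideEffectFct()        evaluates AND persists the increment of _State
    //   SideEffectFct(), nse   evaluates in no-side-effect mode: _State is untouched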

Here is ,nse used in the Watch window. This sample is less trivial than the previous one because of the Refresh evaluation button in the SideEffectFct() watched item.

No Side Effect expression evaluation in the Watch Window

11) Show Threads in Source

Debugging a multithreaded application is notoriously complex. Fortunately the Show Threads in Source button can help a lot. It introduces marker icons in the editor gutter to keep track of the locations where other threads are halted. This marker can be used to show the thread IDs and possibly switch to another thread. Notice that a different marker glyph is shown if at least two threads are halted at the same location.

Show Threads In Source

More tips to debug multithreaded applications are available in this Microsoft documentation: Get started debugging multithreaded applications (C#, Visual Basic, C++)

Here is the source code of this small demo if you’d like to play with it:
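
The embedded snippet is not reproduced here; below is a minimal reconstruction (not the original code) of this kind of multithreaded demo:

    using System;
    using System.Threading;

    class Program
    {
        static void Main()
        {
            var threads = new Thread[4];
            for (int i = 0; i < threads.Length; i++)
            {
                int id = i; // capture a copy for the closure
                threads[i] = new Thread(() => Work(id));
                threads[i].Start();
            }
            foreach (var t in threads) t.Join();
        }

        static void Work(int id)
        {
            // Set a breakpoint here and use Show Threads in Source to see
            // the gutter markers for the other threads halted at this location.
            Console.WriteLine($"Thread {id} working");
            Thread.Sleep(100);
        }
    }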

12) Debug source code decompiled from IL code

Often we depend on some black-box components: assemblies for which we don’t have the source code.

However when debugging a complex behavior it is convenient to observe and even debug the logic nested in the referenced black boxes. This is why, since version 16.5, Visual Studio 2019 can generate some source code from compiled assemblies. Such source code is then debuggable. This feature is based on the OSS project ILSpy.

The decompilation menu is proposed in the assembly right-click menu in the Modules window (as shown in the animation below) and in the Source Not Found or No Symbols Loaded dialogs.

Decompiling IL code to source code cannot be perfect because some source information is lost at compilation time. Hence this feature has a few limitations, explained at the end of the official documentation: Generate source code from .NET assemblies while debugging.

Decompile IL code to source code that can be debugged

Conclusion

Visual Studio shines, but it especially shines when it comes to debugging. Here I tried to select some tips that are quite hidden yet often useful. I hope they will help improve your productivity.

10 Visual Studio Ninja Code Editor Productivity Tips

Among the multiple daily development tasks (planning, testing, refactoring, bug fixing…), code editing is arguably the most satisfying one. It can be even more satisfying and productive by mastering the following Visual Studio productivity tips. In my opinion these tricks are not optional: they should be part of every Visual Studio developer’s skill set.

1) Move one or several lines up and down

The line that contains the editing caret can be moved up and down with Alt+Up and Alt+Down.

In the same way, a selection of several lines can be moved up and down.

Move lines up and down with Alt+Up and Alt+Down

2) Rectangular Selection

Rectangular selection is very useful to quickly edit a portion of code. This can be achieved with the Alt+Shift+Arrows shortcuts.

Rectangular Selection with Alt+Shift+Arrows.

3) Multi-Line Editing

Once you master rectangular selection, you can use it to edit multiple lines at once:

Multi-Line Editing

4) Multi-Caret Editing

Often we need to repeat the same edit at multiple locations. To do so, multiple carets can be defined with Ctrl+Alt+Mouse Click:

Multiple Carets with Ctrl+Alt+Mouse Click

5) Multi-Caret Editing on Same Matches

Multiple carets can be set at the next occurrences of the selected word: first select the word, then press Shift+Alt+. for each next occurrence.

Multiple carets can be set at all occurrences of the selected word with Shift+Alt+; .

Same Match Selection

6) Expand / Contract Selection

You can expand the selection with Shift+Alt+= and contract it with Shift+Alt+-.

Expand Selection

7) Make Selection Uppercase / lowercase

You can uppercase the selection with Ctrl+Shift+U and lowercase it with Ctrl+U.

Make Selection Uppercase / lowercase

8) Outlining

The most important keyboard shortcut when it comes to outlining is Ctrl+M Ctrl+M. This shortcut expands / collapses the code portion that contains the caret.

Not all developers enjoy outlining, and if you are one of those, you can turn it off with Ctrl+M Ctrl+P (Stop Outlining).

Expand / Collapse Outlining

9) Vertical Scrollbar Map Mode

The map mode is one of the most useful Visual Studio code editor facilities. It really helps finding your way in a source file.

Vertical Scrollbar Map Mode

It can be set from Visual Studio > Tools > Options > Text Editor > All Languages > Scroll Bars > Behavior.

The width of the map can be Narrow, Medium or Wide.

Set Vertical Scrollbar Map Mode

10) Bookmarks

When developing we often have to come back and forth between multiple locations in several source files. Fortunately you can materialize this set of locations with bookmarks. Here are the bookmark shortcuts:

  • Ctrl+K, Ctrl+K to toggle a bookmark at the caret position
  • Ctrl+K, Ctrl+N to go to the next bookmark
  • Ctrl+K, Ctrl+P to go to the previous bookmark
  • Ctrl+K, Ctrl+L to clear all bookmarks

Note that Visual Studio remembers bookmarks when you close and restart it.

Toggle Bookmarks with Ctrl+K, Ctrl+K

11) Bonus: Visual Studio 2019 Clipboard Ring Preview

Ctrl+C copies data to the clipboard. Actually, Visual Studio maintains a clipboard ring that stores several pieces of data.

With Visual Studio 2019, Ctrl+Shift+V shows a preview of the various entries in the clipboard ring. This is quite useful to navigate through the copy history.

Visual Studio 2019 Clipboard Ring Preview

Conclusion: Visual Studio > Edit > Advanced

The Visual Studio menu Edit > Advanced is both a good start and a good reminder for code editor keyboard shortcuts.

There are more submenus, including Bookmarks, Outlining, IntelliSense and Multiple Carets.

Visual Studio Edit Advanced

I like the ninja-coder analogy because these tips must be repeated again and again, as kata, to be mastered in your daily coding routine.

Kata is a Japanese word literally meaning “form”, referring to a detailed choreographed pattern of martial arts movements made to be practised alone, and also within groups and in unison when training. It is practised in Japanese martial arts as a way to memorize and perfect the movements being executed.

Why not take 8 minutes now with this background theme and practice?

Be curious: there are many more Visual Studio facilities that will help you become a ninja coder, the first one being code snippets.

10 Visual Studio Solution Explorer Productivity Tips

The Visual Studio Solution Explorer panel is like home for Visual Studio users. It presents all projects, source files and items in a treeview layout.

This panel is quite sophisticated and it is likely that you don’t use all the power of this great tool. Here are some tips:

1) Automatically Track active Item in Solution Explorer

In a large Visual Studio solution with hundreds or thousands of source files, the Visual Studio Solution Explorer button Sync with Active Document is often clicked to find your way around.

Visual Studio Solution Explorer Sync with Active Document

This sync can be done automatically. For that you just need to check Tools > Options… > Projects and Solutions > General > Track Active Item in Solution Explorer.

Track Active Item in Solution Explorer

2) Improve Visual Studio Startup Performance

Visual Studio startup is faster with these 3 settings:

  • Allow parallel project initialization checked
  • Reopen documents on solution load unchecked
  • Restore Solution Explorer project hierarchy state on solution load unchecked

This tip is especially useful to accelerate Visual Studio Startup when opening a large solution.

Improve Visual Studio Startup Performance

3) Search in Visual Studio Solution Explorer

The shortcut Ctrl+; sets the focus to the Visual Studio Solution Explorer search textbox so you can start typing your search term(s). If the Solution Explorer panel is not visible, this shortcut makes it visible. When displaying search results the treeview hierarchy is kept, so you know the location of matched items.

The search setting Search within file contents is set by default, but it can be turned off to restrict the search to file names only.

Search in Visual Studio Solution Explorer

The search setting Search within external items is set by default to match files in External Dependencies folders. Turning it off prevents searching within the C/C++ External Dependencies folder. This setting is irrelevant for C#, VB.NET and F# projects.

Search within external items

4) Browse Types, Methods and Fields

In the Visual Studio Solution Explorer a source file item can be expanded to browse the types, methods and fields declared in the source file. Type, method and field items are shown in the same order as their declarations in the source file. These items can be double-clicked to open the source declaration, and right-clicked to show the related menus.

Visual Studio Solution Explorer Types, Methods and Fields Items

5) Filter Open Files or Pending Changes

Two filters are proposed:

  • Open Files filter: this filter is especially useful when a code review or a change spans many files.
  • Pending Changes filter: often a refactoring session spans several files, and this filter helps keep track of the files changed.

It is also possible to create your own filter as explained in this documentation: Extend the Solution Explorer filter

Open Files or Pending Changes Filters

6) Quickly Set the Startup Project

Most developers I know set up the startup project this way: 1) find the project in the Solution Explorer, 2) right-click the project and find, in the long list of menus, the Set as Startup Project menu.

Here is a quicker way: select the Solution item, press Alt+Enter to view the Solution Property Pages dialog, and there you can quickly choose the startup project in a combo box and even define multiple startup projects:

Quickly Set the Startup Project

7) Preview Selected Items on Project

When the checkbox Preview Selected Items is checked, selecting a source file automatically opens it in the preview tab. This feature is widely used, but what you might not know is that it also works to open the XML content of project files!

Preview Selected Item Works on Project Item

8) Scope to a Single Project

Often a development session only concerns a single project. You can restrict the Visual Studio Solution Explorer to a single project with the project right-click menu Scope to This.

Scope to a Single Project

9) Multiple Solution Explorer Views

I wish the Scope to This facility shown above were available when right-clicking several projects, to restrict the focus to a few projects, but it is not.

However it is possible to work with multiple Solution Explorer views, each one scoped to a project. To do so, right-click a project and click the menu New Solution Explorer View. Notice that extra Solution Explorer views are not persisted: when you close and re-open Visual Studio they are gone.

New Solution View

10) Switch to Folder Views

Most of the time the Visual Studio Solution Explorer displays assets structured in a logical way. For example, solution folders used to group projects don’t necessarily represent real OS file system folders. It is often useful to switch to a physical presentation of the solution folders and files. This can be achieved with the Switch Views combo box.

Solution Explorer Folder View

11) Bonus: Explore Solution Architecture

The powerful new NDepend dependency graph is like a second Solution Explorer that focuses on the solution architecture. Folders and files can be drag-and-dropped from the Visual Studio Solution Explorer to the graph.

Drag and Drop from the Visual Studio Solution Explorer to the NDepend Dependency Graph

There are many more facilities presented in the videos below:

  • It can handle, live, very large solutions made of dozens or hundreds of projects.
  • Right-clicking a Solution Explorer item like a project or a source file shows a menu to generate a graph of the item’s children, callers and callees.
  • Box size in the graph is proportional to the number of lines of code to get an intuitive view.
  • Items in the graph can be searched, expanded and collapsed the same way as in the Visual Studio Solution Explorer.
  • The graph has a navigation bar to quickly generate call graph, coupling graph, class diagram, graph of dependency cycles, graph of changes since baseline and more.

Case Study: Complex UI Testing

In the previous post Case Study: 2 Simple Principles to achieve High Code Maintainability I explained that layered code and a high test coverage ratio are two simple principles that can be objectively applied, validated and measured. When these two principles are applied they lead to high code maintainability: the management saves money in the long term and developers are happy to work in a cleaner code base with principles that are easy to follow.

Large and complex UI 90%+ covered by tests

In this previous post I used the example of the new NDepend v2020.1 graph. This new tool is a large and complex UI with dozens of actions proposed to the user and demanding performance requirements (it scales live to 100,000+ elements). This graph implementation is 90% covered by tests. The fact that there is a lot of UI code doesn’t mean it should not be well tested. We didn’t spend a good part of our resources writing tests just for the sake of it. We did it because we know by experience that it’ll pay off: probably a few bugs will be reported, as for any 1.0 implementation, although beta test phases already caught some. But we are confident that it won’t take a lot of resources to fix them. We can look forward to the future confidently (like properly supporting .NET 5, which will be released in November 2020). And indeed, 10 days after its 1.0 release no bug has been reported (nor logged) on this new graph although many users downloaded it: so far it looks rock-solid and we can focus on what’s next.

The picture below shows all namespaces, classes and methods of the graph implementation. Smaller rectangles are methods, and the color of each rectangle indicates how well a method is covered by tests. Clearly we tolerate some gaps in UI code, while non-UI code like the Undo/Redo action implementations is 100% covered. Experience told us how to balance our resources, and that everything does not have to be perfect to achieve high maintainability.

NDepend Graph Implementation 90% covered by tests

How did we achieve High Coverage Ratio on UI Code?

It is easy: we have a simple MVC (Model View Controller) design. Some controller classes contain the logic for all actions the user can do, and those classes pilot the UI. Concretely, in our scenario actions are: load/save, change group-by, change layout direction, zoom, generate a call graph for this method, change filters…

Then we wrote a test suite that first starts the UI and then invokes all actions. Each complex peculiarity of each action gets fully tested; hence complex actions get invoked several times by tests, but differently each time, to make sure all scenarios get tested.
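
As a sketch, such a test looks roughly like this (all names are hypothetical, not the actual NDepend test code):

    using NUnit.Framework;

    [Test]
    public void ExerciseGraphActions()
    {
        // Hypothetical test host that boots the UI and returns its controller.
        var controller = GraphTestHost.StartUi();

        controller.Load("sampleGraph.ndgraph");
        controller.ChangeGroupBy(GroupBy.Namespace);
        controller.ChangeLayoutDirection(LayoutDirection.TopToBottom);
        controller.Zoom(1.5);
        controller.GenerateCallGraph("MyNamespace.MyClass.MyMethod()");

        // Complex actions get invoked several times, differently each time.
        controller.Undo();
        controller.Redo();
    }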

The video below shows the UI under testing: more than 40 actions get tested in less than a minute. It would take more than an hour to do all this work manually, and any change in code could potentially ruin the validity of manual tests.

In such a complex UI there are many classes that are not directly related to the UI. For example, the cluster of classes that describe the underlying model is tested separately.

As usual, a side benefit of writing tests is better design: the code gets structured in a way that makes it easy to invoke through tests. Concretely, some abstractions are introduced (that wouldn’t make sense without tests), some classes and some methods get split, some logic gets refined, and as a result developers are happy to live in a code base where the logic is smoothly implemented.

High Coverage Ratio is not Enough: Assertions to the rescue

Typically at this point comes the remark: but code coverage is not enough, results must be asserted. And indeed, if nothing gets asserted nothing gets tested, even if the code is entirely covered by tests. We want tests to fail if something goes wrong.

Of course our tests contain many assertions. For example, load / save actions are invoked and asserted this way:
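
The assertion code itself is shown as a screenshot in the original post; its shape is roughly the following (hypothetical names):

    // Save the graph state, reload it, and assert the round-trip is lossless.
    var stateBefore = controller.ExportState();
    controller.Save(tempFilePath);
    controller.Load(tempFilePath);
    var stateAfter = controller.ExportState();
    Assert.IsTrue(stateBefore.IsEquivalentTo(stateAfter));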

But these assertions are not enough. By definition, UI code contains tons of visual peculiarities represented by states that can potentially be corrupted. As a consequence our UI code is stuffed with thousands of assertions: everything that can be asserted gets asserted.

  • A Rectangle with width/height in certain range
  • The state of a node or an edge when another element gets selected (is it a caller, a callee…?).
  • The current application state when a new graph is demanded by the controller.
  • The graph UI contains many asynchronous computations to avoid UI freezing. This leads to many assertions checking that mutable states are not corrupted by concurrent accesses.

All those asserted states would be hardly reachable from test code. However they get naturally accessed by the UI code itself, so that is the right place to assert that they are not corrupted.

By the way, we still use the good old System.Diagnostics.Debug.Assert(…) for that. It has several advantages:

  • It is simple.
  • It is understood by tools like Roslyn/ReSharper/CodeRush analyzers.
  • An assertion that fails cannot be missed, both when running automatic tests and when running manual tests on the Debug build.
  • Debug assertions are removed by the compiler in Release mode: assertions are not executed in production and users get better performance. The idea is to not consider users as testers: code released in production is supposed to be rock-solid. Assertions are like scaffolding that gets removed when a building gets delivered. If there is still a bug we’ll discover it from user feedback, from production logs or from our own manual tests.
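
To illustrate, here is the kind of guard we mean, a minimal sketch with hypothetical names (Node, MaxWidth and _allNodes are made up for the example):

    using System.Diagnostics;

    void OnNodeSelected(Node node)
    {
        // Everything that can be asserted gets asserted: these guards run
        // during automatic and manual Debug-mode tests, and vanish in Release.
        Debug.Assert(node != null);
        Debug.Assert(node.Rectangle.Width > 0 && node.Rectangle.Width < MaxWidth);
        Debug.Assert(_allNodes.Contains(node), "a selected node must belong to the graph");

        // ...actual selection logic...
    }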

Debug.Assert(…) is enough for us, and it is understandable that some other teams want a more sophisticated assertion framework. The key is to take the habit of asserting everything that can be asserted when writing code (UI code or not). Each assertion is a guard that helps make the code rock-solid. Each assertion also improves the code readability. At code-review time we’ve all been wondering: can this integer be zero? can this string be empty? can this reference be null? Fortunately, C# 8 non-nullable reference types discard the last question, but so many questions remain open without assertions.

Design by Contracts

This idea of stuffing code with assertions is actually an important software correctness methodology named DbC, Design by Contract, which is really worth knowing. Contracts mean much more than the usual approach with exceptions:

  • Explicitly throwing an exception says: zero is tolerated, it is not a bug, but you won’t get the result you’d like, so be prepared to catch some exceptions.
  • Writing a contract says: don’t even think of passing a zero value. The real type of the argument is not Int32, it is [Int32 minus 0]. Ideally such a violation could be caught by compilers and analyzers (and is indeed sometimes caught, as we saw in the screenshot above).
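
A small sketch of the difference (hypothetical code):

    using System;
    using System.Diagnostics;

    // The usual approach: zero is tolerated, it is not a bug, but the caller
    // must be prepared to catch an exception.
    int DivideDefensive(int value, int divisor)
    {
        if (divisor == 0) throw new ArgumentException("divisor must not be zero");
        return value / divisor;
    }

    // The contract approach: passing zero is a bug in the caller, period.
    // The real type of divisor is [Int32 minus 0].
    int DivideWithContract(int value, int divisor)
    {
        Debug.Assert(divisor != 0);
        return value / divisor;
    }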

Conclusion

Any complex UI can be automatically tested as long as:

  • It is well designed with some controllers that pilot the UI and that can be invoked from tests.
  • UI code gets stuffed with assertions to make sure that no state becomes corrupted at runtime.

In short, assertions embedded in the tested code matter as much as assertions embedded in tests. If an assertion gets violated there is a problem, no matter the assertion’s location, and it must not be missed nor ignored. This powerful idea doesn’t apply only to UI code and is known as DbC, Design by Contract.

Actually, in this post I added a third principle to achieve high code maintainability and high code correctness: layered code + high coverage ratio by tests + contracts.

Case Study: 2 Simple Principles to achieve High Code Maintainability

High Code Maintainability is the key to make both the management and the developers happy:

  • Maintainability lets a product evolve naturally at a sustained pace with controlled cost.
  • Maintainability lets developers add new features and improve existing ones without spending most of their time refactoring old dusty code and fixing bugs.

After 16 years of development on our product NDepend (first release in April 2004!) we came to the conclusion that:

Highly Maintainable Code can be achieved through two simple, objective and verifiable principles: Layered Architecture and High Test Coverage Ratio

Layered Architecture prevents entangled code, the well-known spaghetti code phenomenon. Dependencies get mastered, and when it is time for the code to evolve, new classes and interfaces naturally integrate with existing ones.

High Test Coverage Ratio means that when code covered by tests gets refactored, existing tests get impacted. Without much effort the developer discovers regression problems and fixes them before they go to production and become bugs to fix. The more code is covered by tests, the more you’ll benefit from this shield.

When writing a tool for developers, the most satisfying part is to challenge the tool on its own code: this practice is named dogfooding. We just completely rewrote the dependency graph of NDepend, so let’s use this important refactoring as a case study. Then we’ll see how to automate the validation of these principles.

Case Study: Layered Architecture

Let’s first present the layered architecture principle and then the test coverage principle.

See below a graph of the 250+ classes, interfaces and enumerations used to implement the new dependency graph. An SVG vector dependency graph of 2,500+ classes, methods and fields is available here.

The class GraphController is selected:

  • The blue classes are the ones directly used by GraphController
  • The light-blue classes are the ones indirectly used by GraphController (indirectly means used by a class used by a class … used by GraphController). Clearly GraphController relies on everything.
  • The red classes are the ones mutually dependent with GraphController.

The NDepend Dependency Graph used to visualize its own code

Several things can be said on how this code is structured:

  • This is not an API so we can use namespaces the way we want. Here namespaces implement the concept of components.
  • Box size is proportional to the number of lines of code. We can see that the overall namespace box sizes are well balanced. This is a good practice to avoid having a few monster components and tons of smaller components.
  • The biggest component in terms of number of classes and lines of code is the implementation of the Undo/Redo system. More than 30 actions are implemented (expand/collapse, change GroupBy, select/unselect, generate a call graph…). These actions are relatively low-level in the structure. While they act on the entire system, they are not coupled with the controller, the UI rendering or the layout computation.
  • The two lowest components are Base and Model. Both contain little logic and are used by almost all other components.

In the future, whether we add new actions on the graph or decide to improve the layout somehow, this architecture won’t undergo drastic modifications. Thanks to this view it’ll be easy to decide in which component to add our new classes, or whether new components should be added, and what they can and cannot use.

Ideally the GraphControl class shouldn’t be entangled with the GraphController class, but these two classes have been developed together. See below the coupling graph between GraphController and GraphControl, obtained by double-clicking the red arrow between the two classes. It wouldn’t be difficult to introduce an interface to inject one implementation into the other, but we didn’t do it. This is the key when it comes to caring for maintainability: which move will offer the highest ROI? Not everything has to be perfect just for the sake of it. Experience shows that having only two classes entangled does not impact maintainability much. We estimated that spending our resources to satisfy the two principles has a better ROI in the long run.

Coupling Graph between GraphController class and GraphControl class

Case Study: High Test Coverage Ratio

The graph implementation is 90% covered by tests. The fact that there is a lot of UI code doesn’t mean it should not be well tested. We didn’t spend a good part of our resources writing tests just for the sake of it. We did it because we know by experience that it’ll pay off: probably a few bugs will be reported, as for every 1.0 implementation, although beta test phases already caught some. But we are confident that it won’t take a lot of resources to fix them. We can look forward to the future confidently (like properly supporting .NET 5, which will be released in November 2020).

The picture below shows all namespaces, classes and methods. Smaller rectangles are methods, and the color of each rectangle indicates how well a method is covered by tests. Clearly we tolerate some gaps in UI code, while non-UI code like the Undo/Redo action implementations is 100% covered. Here also, experience tells us how to balance our resources and that everything does not have to be perfect to achieve high maintainability.

NDepend Graph Implementation 90% covered by tests

In terms of lines of code, the NDepend graph is not even 5% of the entire product; it is one tool in the toolset. The worst-case scenario would be that each tool implementation regularly spits out some bugs: all our resources would be spent fixing them, we couldn’t continue adding value to the product, and the business would probably die at some point. Not even mentioning the frustration of users of a buggy product.

Each year we fix a few dozen bugs that each impact a few users, but that doesn’t take more than a tiny percentage of our overall development resources. The overall code base is 86.5% covered by tests and is entirely layered: maintenance doesn’t cost us much.

Typically at this point comes the remark: but code coverage is not enough, results must be asserted by unit tests. And indeed, if nothing gets asserted nothing gets tested, even if the code is entirely covered by tests. We want tests to fail when something goes wrong. In the next post, Case Study: Complex UI Testing, I explain how millions of assertions get checked while running our test suite against the graph implementation.

Automatically Validate Layered Architecture and High Test Coverage Ratio

NDepend offers hundreds of default code rules but only 4 of them are used to validate these key points:

The fourth rule, Avoid namespaces mutually dependent, helps a lot to layer a large super-component. In this situation the first thing to do is to make sure there is no pair of components that use each other. For each such pair of namespaces matched, this rule has a heuristic and tells which type should not use which other type, and the same at method level. A technical-debt estimation is also given in terms of the development effort it’ll cost to fix each pair. Here it says that 11 man-days (8 hours a day) should be spent if someone decided to layer the NHibernate code base. Unfortunately this is not possible because it would break the thousands of client code bases bound to it. Also, let’s note that an interest estimation is given in terms of: how much development effort does it take per year if I leave issues unfixed. Here this rule estimates that not fixing all those pairs of entangled namespaces costs 5 man-days a year to the development team.

Avoid Namespaces Mutually Dependent, with advice on what to do and cost estimations

These rules can be validated during the build process (Azure DevOps / TFS, Jenkins, TeamCity, Bamboo, SonarQube…) and the team can know when newly written code diverges from these two maintainability goals.

Conclusion: Objective, Verifiable, Simple

What is interesting with these two simple concepts, layering and code coverage, is that they can be objectively applied, validated and measured. Last year, in 2019, I wrote a blog post series on SOLID principles and there was so much debate about how to apply them in the real world. SOLID principles are a great way to improve our understanding of Object-Oriented Programming and how encapsulation, abstraction, polymorphism, inheritance… should be used and not used. But when it comes to writing maintainable code, everyone has a different opinion.

If it is decided that the code structure should be layered, there is not much debate about which part should be abstracted from the others. If a class A should use a class B and B is in a higher layer than A, somehow an interface IB must be created at A’s level to inject the B implementation into A without breaking the layering.
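
A minimal sketch of this move (names are illustrative):

    // IB is declared in A's (lower) layer, so A never references the higher layer.
    interface IB
    {
        void Do();
    }

    class A
    {
        readonly IB _b;
        public A(IB b) { _b = b; } // the B implementation is injected from above
        public void Work() => _b.Do();
    }

    // B lives in a higher layer and implements the lower-layer interface.
    class B : IB
    {
        public void Do() { /* higher-layer logic */ }
    }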

These two concepts emerged over the years because we had a vital need to produce maintainable code. What I really like is that they are simple. And KISS (Keep It Simple, Stupid) is a great principle in software engineering.

If a third principle should be added it would definitely be about user documentation: we offer free email support to users, but we also offer tons of embedded and online documentation. Every time a question starts to be asked a few times, we make sure that users can get the answer immediately, both from a tooltip (or a smart UI change) and from the online documentation. Some other ISVs decide to make money with support. Personally I don’t find this fair, because it is a clear incentive to produce rotted documentation and hence friction for the user.

How did we obtain the images in this post

Let’s show that all those images in this post have been obtained within a few clicks.

  • First let’s search for Graph Panel in the entire NDepend code base (they get zoomed automatically).
  • Then let’s reset the metric view with NDepend.UI.Graph.* namespaces to get the colored treemap.
  • Then let’s go back to the graph and only keep NDepend.UI.Graph.* namespaces matched by the search.
  • Then un-group by parent assembly to get a graph made of namespaces only.
  • Then change the layout direction from Top to Bottom to have a nicer layout.
  • Then expand all namespaces to get all classes.
  • Finally expand all classes to get all methods and fields.

Using the NDepend Graph to obtain a clear view of the implementation of the Graph

Advice to Become a Remote Programmer

With the current COVID-19 worldwide outbreak many programmers are already forced to work remotely, and we can expect that most of us will soon be home.

In the IT industry, and especially in the development industry, we are lucky: the bulk of our work can be done remotely. I guess many of us will enjoy changing their daily routine, with fewer social obligations and more time spent within the IDE.

I have been working remotely for 15 years, so here is some advice I’d give.

Care for your Workspace

Most passionate programmers already have a workspace at home. I won’t go through the invest in great hardware advice because, as a programmer, you have certainly thought about all this. However it is certainly time to improve your workspace.

The number one move for me has been to work in a room other than the bedroom. Your brain needs to identify clearly where to code and where to rest. Mulling over programming problems is common outside the work office, but working at home brings that to a different level. Unfortunately working outside of the bedroom is not always practicable, especially taking into account that kids will likely have to stay at home if governments choose to close schools, as they already did in several countries. If you can free up working time in a room different from the one where the kids are staying, it is time to invest in noise-cancelling headphones that can be combined with a pair of earplugs. Custom silicone earplugs (made for my ears by a professional audioprosthetist) combined with my Bose QuietComfort lead to great noise reduction.

When working at home, reducing distraction is a major challenge. If one of your windows has a nicer view, or at least a clear sky view, it is worth working near it and turning your head 90 degrees for micro-pauses. If possible, work in a room where there is no TV nor game console. However the biggest sources of distraction are chats, meetings, social networks, mails, news, YouTube… Of course some time must be dedicated to all those, but I advise being radical: turn off the WiFi when it is not time to go online. This is inspired by some of my most productive coding sessions, which took place in planes. Not being able to access GitHub or Stack Overflow can be problematic, so you can be less radical by logging off from all social media and using the browser in incognito mode: the key is to voluntarily stay away from temptation. The more you resist, the easier it gets to resist.

Having a secondary, informal workspace can help, especially if you must stay home. Working on a couch for one or two hours with my laptop often leads to quite productive coding sessions. But avoid working in your bed: your brain will associate the bed with work and you’ll have a hard time falling asleep.

Structure your daily routine

Identify the time of the day when you are the most productive and structure your daily routine around it. You must agree with your company on when you’ll be available online for virtual chats and meetings. If they are not flexible on the social work schedule, insist on getting some chunks of uninterrupted time. Working at home is your chance to avoid being interrupted during your peak coding hours, and you might well experience at home some of your most productive time.

There are some times of the day that deserve particular attention:

  • Starting the working day: here is a tip recently shared by Scott Hanselman. More generally I’d advise doing anything non-digital for 20 to 40 minutes before getting started (exercise, walk, time with family, breakfast, mindfulness meditation…). Also keep in mind that getting started is the hardest part: be especially careful about what you do during the first minutes of your working day, as this will determine your productivity for the next hours.
  • Lunchtime: if possible this is the time to go outside and meet people. You’ll also benefit from daylight, which is an important component of good sleep at night.
  • After lunch: a 10-15 minute nap can help a lot to counter post-digestion tiredness and to start the afternoon fresh and focused.
  • Log-off time: unless you are the kind of programmer who gets hyper-productive during the night, it is important to decide ahead when you’ll log off. Coding is quite an addictive activity that can ruin your evenings, your sleep cycle and your social life.

Communicate

You are certainly already using a remote code hosting platform like GitHub. But there are many coding situations where a short face-to-face video chat is more efficient than hours of written back and forth. This necessity needs to be acknowledged by all team members, and you must agree ahead of time on which situations deserve a synchronous interruption.

Also, everyone is different and you should be aware of (and take into account) the communication mediums favored by each of your colleagues.

Trust plays a big role in a team of remote developers. It is a good habit to advocate for yourself by explaining what you’ve done and asking for feedback. If for some reason you estimate that you aren’t progressing as quickly as expected, discuss it with others. However, talking too much about yourself can quickly become bothersome to others: find the right balance and take some extra care to listen to others.

Ask for code reviews and review your peers’ code, especially if reviewing code is not part of your team’s habits. Keep in mind that the code itself is a great way to communicate with other developers.

Care for Yourself

By losing your daily routine at work it is easy to become a couch potato, especially if you are living alone. This is why you should take care of yourself by abiding by a number of no-brainer routines:

  • Keep your shower and shave time.
  • Get dressed. Even better, dedicate some comfortable clothes for work.
  • Exercise at the same time every day. It is a great mindset trick to consider exercise an integral part of your work, so as to never miss a session.
  • Take care of what you are eating; no need to ramble on how important quality food is.
  • If you can go out, go out at least an hour every day. Else make sure to get some daylight from your window.
  • Get started with mindfulness meditation. The goal is to step back and watch yourself working most of the time. This way, when your mind slides toward a social network or procrastination, you know it is time to take a break.
  • Ideally you should anticipate when you’ll take breaks.
  • Make sure you sleep enough, at least 7 hours a day, preferably 8. I know some developers that sleep less to work more, but they are ruining their health and their long-term productivity. This is not a judgment but a scientific prediction.
  • If you are working alone at home for weeks you’ll quickly feel lonely. Be aware of this threat and do everything you can to fight it: go out and meet your friends and relatives if possible, else do some video chats with them. Listening to music while coding can help (personally this kind of music brings me quickly into the zone).
  • Before having children I enjoyed working during the weekend with no email interruption, and doing weekend stuff during the week to avoid the crowd. Nowadays my weekends are entirely dedicated to family and social time, and even during busy periods I don’t sacrifice them. It is really up to you to decide whether to work at night and/or during weekends, but keep in mind that the standard work-hours scheme exists for a reason. The key to all great achievements is to take some rest at the right pace.

Conclusion

We’re facing weird times and we can only expect worse for the weeks to come. Many professional programmers will experience remote work for the first time. The good news is that mathematics tells us that the COVID-19 worldwide outbreak will curve down by this spring. Take care.

Don’t rely on someone else to protect your software

This morning I stumbled on the post Decompilation of C# code made easy with Visual Studio on the Visual Studio blog. Basically VS will soon be able to not only decompile third-party code but also generate some sort of PDB information that will make the decompiled code debuggable. The promise is no more “No Symbols Loaded” or “Source Not Found” within a VS debugging session, and personally I find this awesome.

However most of this post’s comments are like “how do I protect my code from being decompiled then?!” and “Microsoft does not care about its customers’ intellectual property.” These comments are absurd. Since its inception in 2002, compiled .NET code has been decompilable and readable crystal-clear with popular tools like .NET Reflector, ILSpy, dotPeek… This is a direct consequence of having IL/byte code and a CLR with a JIT compiler. I wonder to what extent those who wrote these comments are aware of that.

Protect your Intellectual Property

The first step is to make sure that your EULA forbids decompiling your code, with something like: Licensee may not reverse-engineer, decompile, disassemble, modify, or translate the Product, or make any attempt to discover the source code of the Product.

The second step is to obfuscate your compiled code. Since 2002, those who want to protect their compiled code from decompilation just have to obfuscate it. There are mature free tools like ConfuserEx, as well as paid tools, available. This is what we have done in our .NET shop since 2007 with success.

Also, with .NET Native and AOT (Ahead-of-Time compilation), one can add a whole new layer of complexity by skipping the IL code and compiling directly to machine code (e.g. X64 instructions).

But keep in mind that obfuscators and AOT only protect the intellectual property to some extent. Your code’s best-kept secrets are still executable in both scenarios, which means they are still there. Someone skilled, ready to spend a large amount of time reverse-engineering your code, can still get access to your intellectual property.

The ultimate way to protect your intellectual property is to provide your services as online SaaS. This way nobody will ever have access to your code. For example, the whole SEO industry is based on guessing what Google and other web search engines are doing. These algorithms are protected because nobody except Google employees has access to them. In many scenarios SaaS also means forcing your clients to share their sensitive data, since the data is processed on your servers, and in many scenarios this is not acceptable. By spying on your data, Google and Facebook know you better than you do, but it seems that only a fraction of humanity disagrees with that. Ok, I digressed…

Protect your software from hackers

Keep in mind that obfuscators and AOT don’t protect your code from hackers. It is still easy for any solid hacker to crack your license-checking layer and provide a free version of your software online as warez. The only possible protection from hackers that have access to your code is integrity checks. An integrity check is made of two parts:

  1. A layer that detects if some bytes of your compiled code have been tweaked (typically with a custom or standard hash function)
  2. A subtle malfunction of your software that prevents using it if the compiled code has been tweaked.

The whole point of an integrity check is to consume the hacker’s time. This is why the word subtle is in bold: the malfunction is not a dumb exception that provokes a fail-fast; the malfunction must be something that appears minutes after the integrity check failed and finally makes the software totally unusable (like a massive memory leak, clearing some data, freezing some UIs, firing a timer to close the user session after a random number of minutes…).
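
Here is a toy sketch of the two parts (for illustration only; a real check would be better hidden, use values captured at build time, and trigger a subtler malfunction):

    using System;
    using System.IO;
    using System.Reflection;
    using System.Security.Cryptography;
    using System.Threading;

    static class Integrity
    {
        const byte ExpectedFirstByte = 0x42; // placeholder: captured at build time

        // Part 1: detect that some bytes of the compiled code have been tweaked.
        static bool CheckPassed()
        {
            byte[] bytes = File.ReadAllBytes(Assembly.GetEntryAssembly().Location);
            using (var sha = SHA256.Create())
                return sha.ComputeHash(bytes)[0] == ExpectedFirstByte;
        }

        // Part 2: on failure, no fail-fast; arm a malfunction that fires later.
        internal static Timer ArmIfTampered()
        {
            if (CheckPassed()) return null;
            var delay = TimeSpan.FromMinutes(new Random().Next(5, 30));
            // Keep a reference to the returned timer so it isn't garbage-collected.
            return new Timer(_ => Environment.Exit(0), null, delay, Timeout.InfiniteTimeSpan);
        }
    }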

By multiplying the integrity checks and the corresponding subtle malfunctions, one can only hope to discourage a talented hacker from wasting days, weeks or months cracking the software. But be aware that these guys are primarily driven by challenges… The good news is that with .NET there are tons of possible ways to write subtle integrity checks.

Conclusion

Protecting the intellectual property and the software itself is a difficult task. Don’t complain to Microsoft or anyone else that they should offer an out-of-the-box tool for that. If such a mainstream protection tool existed, it would be a challenge for the most talented hackers and it wouldn’t resist long anyway.

The question you should ask is: is it worth spending resources to protect my assets instead of spending these resources making my paying clients even happier? This is a difficult trade-off that must be carefully thought out. But always keep in mind that you must not rely on someone else to protect your software.

Mythical man month: 10 lines per developer day

The mythical book The Mythical Man-Month states that no matter the programming language chosen, a professional developer writes on average 10 lines of code (LoC) per day.

After 14 years of full-time development on the tool NDepend I’d like to elaborate a bit here.

Let’s start with the definition of a logical line of code. Basically, a logical LoC is a PDB sequence point, except for sequence points corresponding to the opening and closing method braces. So here is a method with 5 logical lines of code, for example:
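
The original illustration is a screenshot; a method with 5 logical LoC looks like this sketch (the braces don’t count):

    using System;

    class Demo
    {
        int Compute(int a, int b)
        {                               // opening brace: not a logical LoC
            int sum = a + b;            // 1
            int product = a * b;        // 2
            int max = Math.Max(a, b);   // 3
            Console.WriteLine(max);     // 4
            return sum + product + max; // 5
        }                               // closing brace: not a logical LoC
    }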

I already hear readers complaining that LoC has nothing to do with productivity. Bill Gates once said, “Measuring software productivity by lines of code is like measuring progress on an airplane by how much it weighs.”

And indeed, measured over a few days or a few weeks, LoC has nothing to do with productivity. As a full-time developer, some days I write 200 LoC in a row, some days I spend 8 hours fixing a pesky bug without even adding a LoC. Some days I clean dead code and remove some LoC. Some other days I refactor existing code without, all in all, adding a single LoC. Some days I create a large and complex UI control and the editor automatically generates 300 additional LoC. Some days are dedicated solely to performance enhancement or writing tests…

What is interesting is the average number of LoC obtained over the long term. And if I do the simple math, our average is around 80 LoC per day. Let’s be clear that we hold to high code quality standards, both in terms of code structure and formatting, and in terms of testing and code coverage ratio (see the last picture of this post that shows the NDepend code coverage map). For a code quality tool for developers, being strict on code quality means dogfooding ☺.

So this average score of 80 LoC produced per day doesn’t sacrifice code quality, and it is a sustainable rhythm. Things get interesting with LoC after calibration: counting LoC becomes an accurate estimation tool. After coding and measuring dozens of features achieved in this particular context of development, the size of any feature can be estimated accurately in terms of LoC. Hence, with simple math, the time it’ll take to deliver a feature to production can be accurately estimated. To illustrate this fact, here is a decorated treemap view of the NDepend code base, where K means 1,000 LoC. This view is obtained from the NDepend metric view panel with handmade coloring to illustrate my point. The small rectangles are methods grouped by parent classes, parent namespaces and parent assemblies. A rectangle’s area is proportional to the corresponding method’s #LoC.

treemap code-metric

Thanks to this map, I can compare the size in terms of LoC of most components. Coupling this information with the fact that the average coding score is 80 LoC per day, and looking back at the cost in time of each component, we have an accurate method to tune our way of coding and estimate future schedules.

Of course not all components are equal. Most of them are the result of a long evolutive coding process. For example, the code model has undergone much more refactoring since the beginning than, say, the dependency matrix, which was delivered out-of-the-box after a few months of development.

This picture reveals something else interesting. We can see that all these years spent polishing the tool to meet high professional standards in terms of ergonomy and performance actually consumed quite a few LoC. Obviously, building a performant code query engine based on C# LINQ that is now the backbone of the product took years. This feature alone now weighs 34K LoC. More surprisingly, just having clean Project Properties UI management and model takes (model + UI) = (4K + 7K) = 11K LoC, while a flagship feature such as the interactive Dependency Graph only consumes 8K LoC, not as much as the Project Properties implementation. Of course the interactive Dependency Graph capitalizes a lot on the existing infrastructure developed for other features, including the Dependency Model. But as a matter of fact, it took the same amount of effort to develop the Dependency Graph as to develop a polished Project Properties model and UI.

All this confirms an essential lesson for everyone in charge of an ISV. It is lightweight and easy to develop a nice and flashy prototype application that’ll bring enthusiast users. What is really costly is to transform it into something usable, stable, clean and fast, with all possible ergonomy candy to make the life of the user easier. And it is all these non-functional requirements that make the difference between a product used by a few dozen enthusiast users only, and a product used by the masses.

To finish, it is also interesting to visualize the code base through the prism of code coverage ratio. The NDepend code base being 86% covered, by comparing both pictures we can easily see which parts are almost 100% covered and which parts need more testing effort.

Not planning now to migrate your .NET 4.8 legacy is certainly a mistake

2020 will see the completion of the massive remodeling of the .NET platform initiated by Microsoft in November 2014 with the introduction of .NET Core 1, with the promise of an open-source, multi-platform and modernizable framework (thanks to no rock-solid backward compatibility constraint), everything that the .NET Framework isn’t. This U-turn in the Microsoft plans for .NET is part of the new Microsoft strategy initiated by Satya Nadella, who became CEO of Microsoft in February 2014, succeeding Steve Ballmer.

.NET 5 will be released in November this year. Within 6 years Microsoft will have pulled off a complete platform shift like no other. The .NET Core brand was there to make clear that two .NET platforms were living side by side. But now we know that the .NET Framework 4.8 won’t evolve anymore and that all Microsoft efforts will be put into the continuation of .NET Core, with well-scheduled releases ahead. Let’s use the term .NET OSS to designate [.NET Core, .NET 5, .NET 6 …[ in the remainder of this post.

We can expect an early beta of .NET 5 before July 2020. Today, in January 2020, the .NET 5.0 milestone is 72% achieved.

For now, the way to get prepared for .NET 5 and later is to migrate to .NET Core 3.1. Despite the branding change from .NET Core 3.1 to .NET 5, it is no mystery that .NET 5 will be mostly based on the current .NET Core platform.

The cost of migration from .NET 4.8 to .NET OSS can get pretty high, especially if the legacy relies on some deprecated APIs (like WCF, WF, Web Forms or AppDomain). Thus it may seem attractive to stick with .NET 4.8 if your application is intended to run on Windows only.

So why is it a bad idea to not anticipate the migration now?

.NET 4.8 won’t evolve, but security patches will be provided for as long as we can foresee. However, we can predict that .NET 4.8 will quickly be considered a thing of the past:

  • Developer mindset: .NET OSS and also C# will keep evolving. There will come a point where it’ll feel pretty awkward for .NET programmers working with 4.8 not to be able to use all the new goodies. Could you imagine programming with C# 3 nowadays?
  • Third-party libraries: The increasing .NET 4.8 / .NET OSS gap will push open-source library authors toward .NET OSS. The cost of maintaining two code bases will be too high for an OSS developer. If your .NET 4.8 application consumes some OSS libraries, not migrating will put you in the awkward situation where you’ll have to maintain the consumed OSS code yourself! Serious commercial libraries will certainly be maintained on both platforms for a longer period of time, but not forever.
  • Performance: We can expect more and more performance improvements with .NET OSS.
  • Tooling: Tools will continue to evolve and, with time, fewer and fewer tools will support .NET 4.8 applications.

Recently I discussed with Jean-Baptiste Evain, who develops the OSS library Cecil. Jb is also responsible for UnityVS at Microsoft. Here at NDepend we’ve been relying on Cecil for more than a decade. Cecil processes the raw bytes of compiled .NET assemblies and will thus obviously benefit from Span<T>, whose fast runtime implementation is only available on .NET Core. This concrete situation illustrates well the points mentioned above:

  • Jb is not enthusiastic about having to maintain two versions of Cecil: one relying on Span<T> and the .NET 4.8 one.
  • Even though these two versions will co-exist for a while, because it is too early to discard the .NET 4.8 version of Cecil used by many serious projects, it is a matter of a few years until the .NET 4.8 version gets deprecated.
  • Without Span<T>, the .NET 4.8 version of Cecil will be slower (see the sketch after this list).
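To see concretely why Span<T> matters for a library like Cecil that parses assembly bytes, here is a minimal sketch (not Cecil code, just an illustration) contrasting allocation-free slicing with the classic copy-based approach:

```csharp
using System;
using System.Buffers.Binary;

// Minimal illustration (not Cecil code) of why Span<T> helps when parsing binary data:
// slicing a span reads in place, while the pre-Span approach copies a temporary array.
class SpanDemo
{
    // Allocation-free: the slice is just a view over the original buffer.
    static uint ReadUInt32WithSpan(ReadOnlySpan<byte> buffer, int offset) =>
        BinaryPrimitives.ReadUInt32LittleEndian(buffer.Slice(offset, 4));

    // Pre-Span style: each read allocates and fills a temporary array.
    static uint ReadUInt32WithCopy(byte[] buffer, int offset)
    {
        var tmp = new byte[4];
        Array.Copy(buffer, offset, tmp, 0, 4);
        return BitConverter.ToUInt32(tmp, 0);
    }

    static void Main()
    {
        byte[] assemblyBytes = { 0x4D, 0x5A, 0x90, 0x00, 0x03, 0x00, 0x00, 0x00 };
        Console.WriteLine(ReadUInt32WithSpan(assemblyBytes, 4)); // 3
        Console.WriteLine(ReadUInt32WithCopy(assemblyBytes, 4)); // 3
    }
}
```

Multiply such micro-savings by the millions of reads needed to parse a large assembly and the performance gap becomes significant.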

Our case

NDepend still runs on .NET 4.8. NDepend is a CI tool, a standalone UI tool, an Azure DevOps extension and a Visual Studio extension. Developing an extension is a sensitive situation because we need to align our platform with the platform of the host. VS is such a massive application that I don’t expect it to run on .NET 5 in 2021. On the other hand, VS is evolving so quickly nowadays that this possibility cannot be totally excluded. It is also possible that Microsoft takes an incremental approach, where the main VS process (devenv.exe) remains on .NET Fx 4.8 for a while, while child processes run on .NET OSS (VS runs with quite a few child processes!).

At this point the reasonable move for us is to anticipate the migration to .NET OSS, mostly by compiling as much code as possible against .NET Standard 2.0, which is supported by both .NET Fx 4.7.2+ and .NET Core. We also need to make sure that our WPF and Winforms code will be easily movable (which shouldn’t be a problem, since most of the WPF/Winforms APIs are supported by .NET Core 3.1). We are also mulling over having our own child process(es), but all the UI part must remain in the main VS process.
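As an illustration of this strategy, a project can multi-target .NET Framework and .NET Core from a single code base (via <TargetFrameworks>net472;netcoreapp3.1</TargetFrameworks> in an SDK-style .csproj) and fence the few non-portable calls behind conditional compilation. Here is a hedged sketch, assuming a hypothetical isolation helper that still relies on AppDomain on .NET Fx:

```csharp
using System;

// Sketch of fencing a non-portable API behind conditional compilation.
// AnalysisIsolation is a hypothetical helper; NETFRAMEWORK is the standard
// symbol defined by SDK-style builds when targeting .NET Framework.
public static class AnalysisIsolation
{
    public static void Run(Action analysis)
    {
#if NETFRAMEWORK
        // .NET Fx 4.x: AppDomain is available for in-process isolation.
        var domain = AppDomain.CreateDomain("AnalysisDomain");
        try { analysis(); /* simplified: real code would marshal into the domain */ }
        finally { AppDomain.Unload(domain); }
#else
        // .NET Core: AppDomain creation is unsupported; a child process (or
        // AssemblyLoadContext for load isolation) takes over this role.
        analysis();
#endif
    }
}
```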

We also keep in mind that, for a few years, it will be tricky to support both future VS versions running on .NET OSS and previous VS versions running on .NET Fx 4.8.

Conclusion

Those of us still working on a large .NET 4.8 legacy are entering a zone of turbulence for the years to come. However, for all the reasons explained above, we can expect that before long (2023? 2025?) successful applications still running on .NET 4.8 will be the exception. Not anticipating the migration of your legacy now is likely a strategic mistake.

4 Predictions for the Future of .NET

In May 2019, Microsoft officially announced .NET 5, the future of .NET: it will be based on all the .NET Core work already achieved. Here is the announced schedule:

On one hand, the future of .NET has never been so bright. On the other hand, this represents a massive move for all .NET development shops, especially for those that still target .NET Framework 4.x, which won’t evolve anymore. But not everything is clear from this announcement. Such a massive move will have many collateral consequences that we can only guess at for now. Certainly many points are not yet cast in stone and are still debated.

Hence, for large .NET legacy code bases, some predictions must be made in order to plan a seamless and in-time migration toward the future of .NET. So let’s make some predictions: it’ll be interesting to come back in a few years and see how good or bad they were.

.NET Standard won’t evolve much

.NET Standard was introduced as a common API set that all .NET flavors must implement. .NET Standard superseded PCL (Portable Class Library). Now that the various .NET frameworks will be unified on top of the .NET Core base, and that .NET Framework 4.x won’t support future versions of .NET Standard anymore, it sounds like the need for more .NET Standard APIs will decrease significantly. Actually .NET Framework 4.8 doesn’t even support the latest .NET Standard 2.1: “.NET Framework 4.8 will remain on .NET Standard 2.0 rather than implement .NET Standard 2.1”.

However .NET Standard is certainly not dead yet: it is, and will be for years to come, an essential tool to compile code into portable components that can be reused across several .NET flavors. Still, with this unification process, the long-term future of .NET Standard is compromised.

Visual Studio will run on .NET 5 or 6 (and in an x64 process)

It has to. Imagine the consequences if, 3 years from now (2019 Q4), the main Microsoft IDE for professional .NET development still ran on .NET Framework v4.8:

  • Engineers working on VS would lack access to all the new .NET APIs, performance improvements and language improvements. They would remain locked in the past.
  • As a consequence they wouldn’t use their own tool (dogfooding), and dogfooding is a key aspect of developing tools for developers.
  • Overall, the message sent wouldn’t be acceptable to users.

On the other hand, if you know a bit about how VS works, imagine how massive this migration is going to be. For more than a decade there have been a lot of complaints from the community about Visual Studio not running in a 64-bit process; see some discussions on reddit here for example. If I remember correctly, this x64 request was the most voted one when VS feedback was still handled by UserVoice. Some technical explanations have been provided by Microsoft, like those provided 10 years ago! If in 2019 Visual Studio still doesn’t run in an x64 process, this says a lot about how large and complex such a migration is.

It seems inevitable that this time the Visual Studio legacy will evolve toward what will be the future of .NET. One key benefit will be to run in an x64 process and have plenty of memory to work with very large solutions. Another implication is that all Visual Studio extensions, like ours, must evolve too. Here at NDepend we are already preparing for it, but it will take time, not because we’ll miss many APIs (we’ll mostly miss AppDomain) but because:

  • We depend on some third parties that we’d like to get rid of, in order to have full control over our migration and, overall, our code.
  • For several years we’ll have to support both future Visual Studio versions and Visual Studio 2019, 2017 and maybe 2015, which run on .NET Framework v4.x (by the way, we still support VS 2013/2012/2010, but this support will have to be discarded to benefit from reused .NET Standard DLLs).

We cannot know yet if Visual Studio vNext will run on .NET 5, or if it’ll take a few more years until we see it running on .NET 6.

By the way, here are two posts, Quickly assess your .NET code compliance with .NET Standard and An in-depth analysis of .NET Core 3.0 support for WPF and Winforms APIs, that can help plan your own legacy migration.

.NET will propose a cross-platform UI Framework: WPF or a similar XAML UI Framework

On October 4, 2019 Satya Nadella revealed why Windows may not be the future of Microsoft’s business. In August 2019 Microsoft ran a .NET Cross Platform UI Framework Survey. Clearly a .NET cross-platform UI framework is wanted: the community is asking for it. So far Microsoft has closed the debate about WPF: WPF won’t be multi-platform.

Let’s also be crystal clear. This (WPF cross-platform) is a very hard project. If the cost was low, this would be a very different conversation and very likely a different outcome. We have enough trouble being compatible with OpenSSL and that’s just one library. – Rich Lander, Dec 5, 2018

But given the immense benefits that a cross-platform WPF would offer, I wouldn’t be surprised to see WPF become cross-platform within the next few years, or at least to see a similar XAML-based UI framework emerge. Moreover WPF is now open-source, so who knows…

The Visual Studio UI is mostly based on WPF, hence one of the benefits of having WPF cross-platform would be a single cross-platform Visual Studio: the same way Microsoft is now unifying the .NET frameworks, they could unify the Visual Studio suite into a single cross-platform product.

Xamarin.Forms and Avalonia are also natural candidates to become the .NET cross-platform UI framework. But it seems those frameworks don’t receive enough love from the community; this is my subjective feeling. Also, we have to keep in mind that Microsoft ran a survey, and the community is massively asking for a cross-platform WPF-like framework.

Blazor is promised a bright future

If you haven’t followed the recent Blazor evolution, the promises of this technology are huge:

  • Run .NET code in all browsers (like Silverlight)
  • with no browser plugin needed (unlike Silverlight)
  • with near-native performance
  • with components compiled to a compact binary format

This is all possible thanks to the WebAssembly (Wasm) format supported by most browsers.

WebAssembly (abbreviated Wasm) is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable target for compilation of high-level languages like C/C++/Rust, enabling deployment on the web for client and server applications.
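To give a concrete feel of the programming model, here is the counter component from the default Blazor project template (Razor syntax, i.e. markup plus C#): a browser UI event handled by .NET code, with no JavaScript involved:

```razor
@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    private int currentCount = 0;

    // Runs as .NET code in the browser (via Wasm); no JavaScript required.
    private void IncrementCount() => currentCount++;
}
```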

Blazor was initially a personal project created by Steve Sanderson from Microsoft. It was first introduced during NDC Oslo in July 2017: the video is worth watching; also read how enthusiastic the comments are. However Blazor is not yet finalized and still has some limitations: it doesn’t yet offer a decent debugging experience, and the application download size (a few MBs) is still too large because dependencies have to be loaded too. Both limitations are currently being addressed (see here for debugging and here for download size: runtime code will be trimmed and cached, and the usage of a CDN (Content Delivery Network) is mentioned).

The community is enthusiastic, the technology is getting mature, and there is no technological or political barrier in sight: Blazor’s future looks bright. Don’t miss the Blazor FAQ to learn more.