
Case Study: Complex UI Testing

In the previous post, Case Study: 2 Simple Principles to achieve High Code Maintainability, I explained that layered code and a high test-coverage ratio are two simple principles that can be objectively applied, validated, and measured. When these two principles are applied they lead to high code maintainability: management saves money in the long term, and developers are happy to work in a cleaner code base with principles that are easy to follow.

Large and complex UI 90%+ covered by tests

In that post I used the example of the new NDepend v2020.1 graph. This new tool is a large and complex UI with dozens of actions offered to the user and demanding performance requirements (it scales live to 100,000+ elements). This graph implementation is 90% covered by tests. The fact that a UI contains a lot of code is no reason to leave it poorly tested. We didn't spend a good part of our resources writing tests just for the sake of it; we did it because we know from experience that it pays off. A few bugs will probably be reported, as with any 1.0 implementation, although the beta-test phases already caught some, but we are confident it won't take many resources to fix them, and we can look to the future confidently (like properly supporting .NET 5, to be released in November 2020). And indeed, 10 days after the 1.0 release, no bug has been reported (nor logged) on this new graph although many users downloaded it: so far it looks rock-solid and we can focus on what's next.

The picture below shows all namespaces, classes and methods of the graph implementation. Smaller rectangles are methods, and the color of each rectangle indicates how well the method is covered by tests. Clearly we tolerate some gaps in UI code, while non-UI code like the Undo/Redo action implementations is 100% covered. Experience has taught us how to balance our resources: not everything has to be perfect to achieve high maintainability.

NDepend Graph Implementation 90% covered by tests

How Did We Achieve a High Coverage Ratio on UI Code?

It is easy: we have a simple MVC (Model-View-Controller) design. A few controller classes contain the logic for all actions the user can do, and those classes pilot the UI. Concretely, in our scenario the actions are: load/save, change group-by, change layout direction, zoom, generate a call graph for this method, change filters…
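A minimal sketch of that shape (type and member names are illustrative, not NDepend's actual code):

    public enum GroupBy { Namespace, Class }
    public enum LayoutDirection { LeftToRight, TopToBottom }

    // Each user action is a plain method on a controller that pilots the UI,
    // so the same entry points serve both UI event handlers and tests.
    public sealed class GraphController
    {
        public void Load(string filePath) { /* load a persisted graph */ }
        public void Save(string filePath) { /* persist the current graph */ }
        public void ChangeGroupBy(GroupBy groupBy) { /* re-group nodes */ }
        public void ChangeLayoutDirection(LayoutDirection direction) { /* re-run layout */ }
        public void Zoom(double factor) { /* scale the view */ }
        // ...one method per user action...
    }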

Then we wrote a test suite that first starts the UI and then invokes all actions. Each complex peculiarity of each action gets fully tested; hence complex actions get invoked several times by tests, differently each time, to make sure all scenarios are covered.
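The suite then looks something like this (an NUnit-style sketch; the TestHost helper is hypothetical):

    using NUnit.Framework;

    [TestFixture]
    public class GraphActionsTests
    {
        [Test]
        public void ExerciseAllActions()
        {
            // Hypothetical helper that boots the UI for the test run.
            GraphController controller = TestHost.StartGraphUI();

            controller.ChangeGroupBy(GroupBy.Class);
            controller.ChangeGroupBy(GroupBy.Namespace);
            controller.ChangeLayoutDirection(LayoutDirection.TopToBottom);
            controller.Zoom(2.0);
            controller.Save(@"C:\temp\graph.tmp");
            controller.Load(@"C:\temp\graph.tmp");
            // ...each of the 40+ actions gets invoked, complex ones several
            // times with different inputs to cover all scenarios...
        }
    }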

The video below shows the UI under test: more than 40 actions get tested in less than a minute. It would take more than an hour to do all this work manually, and any change in the code could potentially invalidate the manual tests.

In such a complex UI there are many classes that are not directly related to the UI. For example, the cluster of classes that describe the underlying model is tested separately.

As usual, a side benefit of writing tests is better design: the code gets structured in a way that makes it easy to invoke through tests. Concretely, some abstractions are introduced (that wouldn't make sense without tests), some classes and methods get split, some logic gets refined, and as a result developers are happy to live in a code base where the logic is smoothly implemented.

A High Coverage Ratio is not Enough: Assertions to the Rescue

Typically at this point comes the remark: but code coverage is not enough, results must be asserted. And indeed, if nothing gets asserted, nothing gets tested, even if the code is entirely covered by tests. We want tests to fail if something can go wrong.

Of course our tests contain many assertions. For example, the load/save actions are invoked and asserted this way:
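The shape of those assertions is roughly the following (a sketch; CurrentState and IsEquivalentTo are hypothetical members standing in for the real state snapshot and comparison):

    [Test]
    public void SaveThenLoadRoundTrips()
    {
        GraphController controller = TestHost.StartGraphUI(); // hypothetical helper
        string tempFilePath = System.IO.Path.GetTempFileName();

        // Save the graph, snapshot the state, reload it, and assert that the
        // reloaded state is equivalent to the state that was saved.
        controller.Save(tempFilePath);
        GraphState saved = controller.CurrentState;
        controller.Load(tempFilePath);
        Assert.IsTrue(controller.CurrentState.IsEquivalentTo(saved));
    }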

But these assertions are not enough. By definition, UI code contains tons of visual peculiarities represented by states that can potentially get corrupted. As a consequence our UI code is stuffed with thousands of assertions (see the sketch after this list): everything that can be asserted gets asserted.

  • A Rectangle whose width/height must stay within a certain range.
  • The state of a node or an edge when another element gets selected (is it a caller, a callee…?).
  • The current application state when a new graph is requested from the controller.
  • The graph UI performs many asynchronous computations to avoid freezing the UI. This leads to many assertions checking that mutable state is not corrupted by concurrent accesses.
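Here is a sketch of what those embedded guards look like (types, fields and thresholds are illustrative, not NDepend's actual code):

    using System.Diagnostics;

    // Fragment of UI code stuffed with guards; Debug.Assert calls are
    // compiled away in Release builds, so they cost nothing in production.
    sealed class NodeView
    {
        const double MaxWidth = 10_000;
        internal double Width;
        internal bool AsyncLayoutRunning;

        internal void OnSelected()
        {
            Debug.Assert(Width > 0 && Width < MaxWidth, "Node width out of range");
            Debug.Assert(!AsyncLayoutRunning, "Mutable state touched while an async layout runs");
            // ...highlight callers and callees of the selected node...
        }
    }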

All those asserted states would be hard to reach from test code. However, they get naturally accessed by the UI code itself, so that is the right place to assert that they are not corrupted.

By the way, we still use the good old System.Diagnostics.Debug.Assert(…) for that. It has several advantages:

  • It is simple.
  • It is understood by tools like the Roslyn, ReSharper, and CodeRush analyzers.
  • An assertion that fails cannot be missed, whether running automated tests or running manual tests against the Debug build.
  • Debug assertions are removed by the compiler in Release mode: assertions are not executed in production and users get better performance. The idea is not to treat users as testers: code released to production is supposed to be rock-solid. Assertions are like scaffolding that gets removed when the building is delivered. If a bug remains, we'll discover it from user feedback, from production logs, or from our own manual tests.

Debug.Assert(…) is enough for us, and it is understandable that other teams want a more sophisticated assertion framework. The key is to get into the habit of asserting everything that can be asserted when writing code (UI code or not). Each assertion is a guard that helps make the code rock-solid, and each assertion also improves code readability. At code-review time we've all wondered: can this integer be zero? Can this string be empty? Can this reference be null? Fortunately, C# 8 non-nullable reference types discard the last question, but many questions remain open without assertions.

Design by Contract

This idea of stuffing code with assertions is actually an important software-correctness methodology named DbC, Design by Contract, and it is well worth knowing. Contracts mean much more than the usual approach with exceptions:

  • Explicitly throwing an exception says: zero is tolerated, it is not a bug, but you won't get the result you'd like, so be prepared to catch some exceptions.
  • Writing a contract says: don't even think of passing a zero value. The real type of the argument is not Int32, it is [Int32 minus 0]. Ideally such a violation would be caught by compilers and analyzers (and indeed it sometimes is, as we saw in the screenshot above).
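A minimal sketch contrasting the two approaches on a zero divisor (illustrative code):

    using System;
    using System.Diagnostics;

    static class Divider
    {
        // Exception style: zero is a tolerated input, not a bug; the caller
        // must be prepared to catch.
        public static int DivideWithException(int dividend, int divisor)
        {
            if (divisor == 0)
                throw new ArgumentOutOfRangeException(nameof(divisor), "must not be zero");
            return dividend / divisor;
        }

        // Contract style: passing zero is a bug in the caller; the effective
        // argument type is [Int32 minus 0].
        public static int DivideWithContract(int dividend, int divisor)
        {
            Debug.Assert(divisor != 0, "Contract violation: divisor must not be zero");
            return dividend / divisor;
        }
    }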

Conclusion

Any complex UI can be automatically tested as long as:

  • It is well designed, with controllers that pilot the UI and that can be invoked from tests.
  • The UI code is stuffed with assertions to make sure no state gets corrupted at runtime.

In short, assertions embedded in the code under test matter as much as assertions embedded in the tests. If an assertion gets violated there is a problem, no matter where the assertion is located, and it must not be missed nor ignored. This powerful idea doesn't apply only to UI code, and it is known as DbC, Design by Contract.

Actually, in this post I added a third principle for achieving high code maintainability and high code correctness: layered code + high coverage ratio by tests + contracts.

The continuous adaptation of Visual Studio extensions

One could think that developing an extension for a mature, two-decades-old product like Visual Studio would be headache-free.

Not really. Visual Studio is a big, big beast used daily by millions of professional developers. Version after version, Microsoft spends an astounding amount of effort improving it and making sure it meets the latest standards in dev platforms, IDE features, usability, and performance.

This implies a lot of deep changes, and all VS extensions must adapt to these changes… or die. Most of this adaptation effort comes from the support of new platforms (.NET Core, Xamarin, the upcoming .NET 5…), but here I'd like to focus on recent changes in VS integration itself.

Deferred extensions loading

On Monday, December 11th, 2017, all VS partners received an email announcing VS package deferred loading, also referred to as asynchronous extension initialization. In this mode, VS extensions are loaded after VS startup and initialization; as a consequence, VS startup is not slowed down by extensions.

Implementing this scenario required serious changes, described here, but deferred loading became mandatory only with VS 2019.1 (16.1), released in spring 2019, around 18 months after the initial announcement. VS extension ISVs got plenty of time (and plenty of high-quality support from Microsoft engineers) to implement and test this async loading scheme.
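For reference, a minimal sketch of an async-loaded package using the standard VS SDK AsyncPackage pattern (the GUID and names are placeholders):

    using System;
    using System.Runtime.InteropServices;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.VisualStudio.Shell;

    // AllowsBackgroundLoading opts the package into deferred loading, so VS
    // startup is not blocked by extension initialization.
    [PackageRegistration(UseManagedResourcesOnly = true, AllowsBackgroundLoading = true)]
    [Guid("11111111-2222-3333-4444-555555555555")] // placeholder GUID
    public sealed class MyExtensionPackage : AsyncPackage
    {
        protected override async Task InitializeAsync(
            CancellationToken cancellationToken, IProgress<ServiceProgressData> progress)
        {
            // Free-threaded initialization runs here; switch to the UI
            // thread only for the work that really needs it.
            await JoinableTaskFactory.SwitchToMainThreadAsync(cancellationToken);
            // ...register commands, menus, tool windows...
        }
    }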

VS2019 Extensions Menu

In February 2019, while developing NDepend support for VS 2019, we downloaded the latest VS2019 preview, installed our NDepend extension, and immediately noticed there was no more NDepend global menu. After investigation we found our NDepend main menu, along with other extensions' menus, under a new Extensions menu, with no possibility to move extension menus back into the main bar.

Visual Studio 2019 Extensions Menu

This change impacts millions of extension users and hundreds of partner ISVs, yet it was never publicly announced ahead of time nor debated. As an immediate consequence someone asked to get rid of this new Extensions menu, and this generated a long list of complaints.

I guess this major (and blunt) change was provoked initially by the VS2019 compact menu look, but I am not sure: even on a 1920-pixel-wide monitor (DPI 100%) there is still plenty of horizontal space to show extension menus.

Visual Studio 2019 Compact Menu

Sometime in April 2019 someone anonymously posted on the discussion a link to a new extension that puts extensions' menus back in the main bar.

Getting back Extensions in Main Menu

I guess the VS APIs used to achieve this magic are not well documented, so we can wonder whether this initiative comes from Microsoft or not. At least users can now get their extension menus back in the main bar. But the Extensions in Main Menus extension has been downloaded only 800 times in around 3 months, which means only a tiny subset of VS extension users are aware of it and actually using it.

Personally, I still hope Microsoft will reconsider this move and embed a similar option in a future VS version, letting users choose which features (out-of-the-box VS features or extension features) deserve no-brainer access in the main bar.

VS2019 Optimize rendering for screens with different pixel densities

We published NDepend v2019.2.0 on March 14th, 2019, with proud support for VS2019 and .NET Core 3. Two weeks later a user reported severe UI bugs we had never observed. The repro steps were easy:

  • install .NET Fx 4.8,
  • enable the new VS2019 option Optimize rendering for screens with different pixel densities,
  • run VS2019 with the NDepend extension in a multi-monitor environment with a different DPI scale on each monitor.

Optimize rendering for screens with different pixel densities

We then discussed with VS engineers. They pointed us to this documentation and kindly offered quite a bit of tailored advice: Per-Monitor Awareness support for Visual Studio extenders.

Actually this change runs pretty deep for the (tons of) WPF and WinForms controls rendered in the Visual Studio UI. It took us time, but on July 3rd, 2019, we released NDepend v2019.2.5 with full support for the Optimize rendering for screens with different pixel densities option. This required some quite tricky fixes, and we wish we had had a few months to adapt to this major breaking change, as we did for the deferred loading explained above. As far as I know, extension ISVs were not informed in advance. On the other hand, I'd like to underline the awesome support we got from Visual Studio engineers (as always, actually): thanks again to all of them.
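To give an idea of the kind of fix involved, here is a sketch of the DPI-scope pattern using the raw Win32 API (the VS SDK documentation above describes managed helpers for this; the code below is an illustration, not our actual fix):

    using System;
    using System.Runtime.InteropServices;

    // Temporarily render a legacy control under the System-aware DPI context,
    // then restore the previous per-monitor context.
    static class DpiScope
    {
        // Win32 pseudo-handle DPI_AWARENESS_CONTEXT_SYSTEM_AWARE == -2.
        static readonly IntPtr SystemAware = new IntPtr(-2);

        [DllImport("user32.dll")]
        static extern IntPtr SetThreadDpiAwarenessContext(IntPtr dpiContext);

        public static void RunSystemAware(Action render)
        {
            IntPtr previous = SetThreadDpiAwarenessContext(SystemAware);
            try { render(); }
            finally { SetThreadDpiAwarenessContext(previous); }
        }
    }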

I noticed that ReSharper pops up a dialog at VS2019 startup time (I'm not sure whether they had everything fixed at the time of writing).

ReSharper Message related to Optimize rendering for screens with different pixel densities

I also noticed that up to VS2019 16.1.3 there were still some bugs in the VS UI; see the Project Properties and Diagnostics Tool windows below. I just tried with 16.1.5 and it seems fixed.

VS 2019 16.1.3 UI issues (now fixed)

My point is that this great improvement (Optimize rendering for screens with different pixel densities) should have been highlighted ahead of time, during the VS2019 preview period, to avoid the wide range of UI bugs users experienced during the early months of VS2019 RTM.

What can we expect next?

With the .NET 5 public announcement on May 6th, 2019, the .NET community now knows that sooner or later most .NET Core and .NET Fx applications will have to be migrated to benefit from the latest innovations and improvements.

Hopefully, migrating .NET Core apps will be quite straightforward, since .NET 5 is .NET Core vNext. But migrating .NET Fx apps will be (more or less) painful. Although the WPF and WinForms desktop APIs are already supported by .NET Core 3, some other major APIs are not, and won't be, supported (ASP.NET Web Forms, WCF, Windows Workflow, .NET Remoting, AppDomain…).

For migration plans that would be too complex, here is the Microsoft recommendation: What do you do with your older applications that you are not spending much engineering time on? We recommend leaving these on .NET Framework. (…) .NET Framework will continue to be supported and will receive minor updates. Even here at Microsoft, many large products will remain on .NET Framework. There are absolutely no changes to support and that will not change in the future. .NET Framework 4.8 is the latest version of .NET Framework and will continue to be distributed with future releases of Windows. If it is installed on a supported version of Windows, .NET Framework 4.8 will continue to be supported too.

Visual Studio 2019 still runs in a 32-bit process on the .NET Fx 4 CLR, with a massive WPF-based UI and some COM usage.

Can we expect VS2019 to run on .NET 5 sometime in 2021? Or in a later version? Or never?

Only the future will tell, but I hope VS extension ISVs will be advised ahead of time so they can anticipate the migration and continue to offer a seamless integration experience to users.


You Should Favor Software Products That Include Support in the Price

Quite often we talk about architectural concerns on this blog, with topics like application layering or the merits of design patterns.  But today I’m going to switch gears a little and talk about your wallet.

Oh, don’t worry.  We’re not going too far afield.  I’m going to talk about how your developer tools purchases affect your wallet.

When it comes to product support, you probably don't give it a ton of thought.  At least, you're probably more focused on features, what problems the product solves for you, what the price is, and so on.  The support arrangement probably matters somewhat to you but doesn't rise to top of mind.

But today, I’m going to ask you to think about it a little bit.  And I’m going to suggest that you give a quick consideration during future purchases as to how support works.  Simply ask yourself, “does this vendor ask you to pay extra for support?”

If the answer is yes, that’s a smell.

Continue reading You Should Favor Software Products That Include Support in the Price


What DevOps Means for Static Analysis

For most of my career, software development has, in a very specific way, resembled mailing a letter.  You write the thing, and then you go through the standard mail-piece rigmarole.  This involves putting it into an envelope, addressing the envelope, putting a stamp on it, and then walking it over to the mailbox.  From there, you stuff it into the mailbox.

At this point, you might as well have dropped the thing into some kind of rip in space-time for all you understand what comes next.  Off it goes into the ether, and you hope that it arrives at its destination through some kind of logistical magic.  So it has generally gone with software.
Continue reading What DevOps Means for Static Analysis


Adding Static Analysis to Your Team’s DNA

Stop me if this sounds familiar.  (Well, not literally.  I realize that asynchronous publication makes it hard for you to actually stop me as I type.  Indulge me the figure of speech.)  You work on a codebase for a long time, all the while having the foreboding sense of growing messiness.  One day, perhaps when you have a bit of extra time, you download a static analyzer to tell you “how bad.”

Then you have an experience like a holiday-time binge eater getting on a scale on January 1st.  As the tool crunches its results, you wince in anticipation.  Next, you get the results, get depressed, and then get busy correcting them.  Unlike shedding those holiday pounds, you can often fix the most egregious errors in your codebase in a matter of days.  So you make those fixes, pat yourself on the back, and forget all about the static analyzer, perhaps letting your trial expire or leaving it to sit on the shelf.

If you’re wondering how I got in your head, consider that I see this pattern in client shops frequently.  They regard static analysis as a one-time cleanup effort, to be implemented as a small project every now and then.  Then, they resolve to carry the learning forward to avoid making similar mistakes.  But, in a vacuum, they rarely do.
Continue reading Adding Static Analysis to Your Team’s DNA


Detecting Performance Bottlenecks with NDepend

In the past, I’ve talked about the nature of static code analysis.  Specifically, static analysis involves analyzing programs’ source code without actually executing them.  Contrast this with runtime analysis, which offers observations of runtime behavior, via introspection or other means. This creates an interesting dynamic regarding the idea of detecting performance bottlenecks with static analysis.  This is because performance is inherently a runtime concern.  Static analysis tends to do its best, most direct work with source code considerations.  It requires a more indirect route to predict runtime issues.

For example, consider something simple.
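The method in question looks something like this (a stand-in reconstruction; all that matters for the point at hand is that theService gets dereferenced with no null check):

    public interface IService { void Execute(); }

    public class Client
    {
        private readonly IService theService;

        public Client(IService theService) { this.theService = theService; }

        public void DoSomething()
        {
            // Potential NullReferenceException: theService is dereferenced
            // without a null check.
            theService.Execute();
        }
    }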

With a static analyzer, we can easily look at this method and say, “you’re dereferencing ‘theService’ without a null check.”  However, it gets a lot harder to talk definitively about runtime behavior.  Will this method ever generate an exception?  We can’t know that with only the information present.  Maybe the only call to this in the entire codebase happens right after instantiating a service.  Maybe no one ever calls it.

Today, I’d like to talk about using NDepend to sniff out possible performance issues.  But my use of possible carries significant weight because definitive gets difficult.  You can use NDepend to inform reasoning about your code’s performance, but you should do so with an eye to probabilities.

That said, how can you use NDepend to identify possible performance woes in your code?  Let’s take a look at some ideas.

Continue reading Detecting Performance Bottlenecks with NDepend


How to Scale Your Static Analysis Tooling

If you wander the halls of a large company with a large software development organization, you will find plenty of examples of practice and process at scale.  When you see this sort of thing, it has generally come about in one of two ways.  First, the company piloted a new practice with a team or two and then scaled it from there.  Or, second, the development organization started the practice when it was small and grew it as the department grew.

But what about “rolled it out all at once?”  Nah, (mercifully) not so much.  “Let’s take this thing we’ve never tried before, deploy it in an expensive roll out, and assume all will go well.”  Does that sound like the kind of plan executives with career concerns sign off on?  Would you sign off on it?  Even the pointiest haired of managers would feel gun shy.

When it comes to scaling a static analysis practice, you will find no exception.  Invariably, organizations grow the practice as they grow, or they pilot it and then scale it up.  And that begs the question of, “how?” when it comes to scaling static analysis.

Two main areas of concern come to mind: technical and human.  You probably think I’ll spend most of the post talking technical, don’t you?  Nope.  First of all, too many tools, setups, and variations exist for me to even scratch the surface.  But secondly, and more importantly, a key person that I’ll mention below will take the lead on this for you.

Instead, I’ll focus on the human element.  Or, more specifically, I will focus on the process for scaling your static analysis — a process involving humans.

Continue reading How to Scale Your Static Analysis Tooling


How to Prioritize Bugs on Your To-Do List

People frequently ask me questions about code quality.  People also frequently ask me questions about efficiency and productivity.  But it seems we rarely wind up talking about the two together.  How can you most efficiently improve quality via the fixing of bugs?  Or, more specifically, how should you prioritize bugs?

Let me be clear about something up front.  I’m not going to offer you some kind of grand unified scheme of bug prioritization.  If I tried, the attempt would come off as utterly quixotic.  Because software shops, roles, and offerings vary so widely, I cannot address every possible situation.

Instead, I will offer a few different philosophies of prioritization, leaving the execution mechanics up to you.  These should cover most common scenarios that software developers and project managers will encounter.
Continue reading How to Prioritize Bugs on Your To-Do List


The Relationship between Static Analysis and Continuous Testing

As an adult, I have learned that I have an introvert type personality.  I do alright socially, don’t mind public speaking, and do not (I don’t think) present as an awkward person.  So, learning about this characterization surprised me somewhat, but only until I fully understood.

I won’t delve into the finer points of human psychology here, but suffice it to say that introverts prefer to process and grok questions before responding.  This describes me to a tee.  However, working as a consultant and giving frequent advice clashes with this and has forced me to develop somewhat of a knack for answering extemporaneously.  Still, you might ask me just the right question to cause me to cock my head, blink at you, and frown.

I received just such a question the other day.  The question, more or less, was, “if we have continuous testing, do we really need static analysis?”  And, just like that, I was stumped.  This didn’t square, and I wanted time to think on that.  Luckily, I’ve had a bit of time.  (This is why I love blogging.) Continue reading The Relationship between Static Analysis and Continuous Testing


Easy to Miss Code Smells

The concept of a code smell is, perhaps, one of the most evocative in our profession.  The name itself has a levity factor to it, conjuring a mental image of one’s coworkers writing code so bad that it actually emits a foul odor.  But the metaphor has a certain utility as well in the “where there’s smoke, there may be fire” sense.

In case you’re not familiar, a code smell is an observable feature of the code (the smoke) that often belies a deeper existing problem (the fire).  When you say that a code smell exists, what you’re communicating is “you may be justified here, but I’m skeptical – in my experience this is probably a design flaw.”

Of course, accusing code of having a smell is only slightly less incendiary to the author than accusing code of being flat out bad.  Them’s fightin’ words, as they say.  But, for all the arguments and all of the righteous indignation that code smell accusations have generated over the years, their usefulness is undeniable.

No doubt you’ve heard of some of the most common and easiest-to-visualize code smells.  The God Class, Primitive Obsession, and Inappropriate Intimacy all come to mind.  These indicate, respectively, a class in your code base doing way too much, a tendency to use primitive types when you should take advantage of classes, and a module or class that breaks encapsulation by knowing too many details about another.  The combination of their visual memorability and their wisdom has prodded us over the years to break things down, to create cohesive objects, and to preserve encapsulation.

I would argue, however, that there are many more code smells out there than the big, iconic ones that get a lot of attention.  I’d like today to discuss a few that I don’t think are as commonly known.  I’ll make the case for why, once you’ve mastered avoiding the well-known ones, you should watch for these as well.

Continue reading Easy to Miss Code Smells