Static analysis of .NET Core 2.0 applications

NDepend v2017.3 has just been released with major improvements. One of the most requested features, now available, is support for analyzing .NET Core 2.0 and .NET Standard 2.0 projects. .NET Core and its main flavor, ASP.NET Core, represent a major evolution for the .NET platform. Let’s have a look at how NDepend analyzes .NET Core code.

Resolving .NET Core third party assemblies

In this post I’ll analyze the OSS application ASP.NET Core / EntityFramework MusicStore hosted on GitHub. From the Visual Studio solution file, NDepend resolves the application assembly MusicStore.dll, as well as two test assemblies that we won’t analyze here. In the screenshot below, we can see that:

  • NDepend recognizes the .NET profile, .NET Core 2.0, for this application.
  • It resolves several folders on the machine that are related to .NET Core, especially NuGet package folders.
  • It resolves all 77 third-party assemblies referenced by MusicStore.dll. This is important since many code rules and other NDepend features take into account what the application code is using.

It is worth noting that the .NET Core platform assemblies are highly granular: a simple website like MusicStore references no fewer than 77 assemblies. This is because the .NET Core framework is implemented through a few NuGet packages that each contain many assemblies. The idea is to ship the application with only the assemblies it needs, in order to reduce the memory footprint.

.NET Core 2.0 third party assemblies granularity

NDepend v2017.3 has a new heuristic to resolve .NET Core assemblies. This heuristic is based on .deps.json files that contain the names of the NuGet packages referenced. Here we can see that 3 NuGet packages are referenced by MusicStore. From these package names, the heuristic will resolve third-party assemblies (in the NuGet store) referenced by the application assemblies (MusicStore.dll in our case).

NuGet packages referenced in .deps.json file
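For reference, here is roughly what the relevant part of a .deps.json file looks like. This is an illustrative fragment only, not the actual MusicStore file; the package names and versions are invented:

    {
      "targets": {
        ".NETCoreApp,Version=v2.0": {
          "MusicStore/1.0.0": {
            "dependencies": {
              "Microsoft.AspNetCore.All": "2.0.0",
              "Microsoft.EntityFrameworkCore.InMemory": "2.0.0"
            },
            "runtime": { "MusicStore.dll": {} }
          }
        }
      }
    }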

Analyzing .NET Standard assemblies

Let’s be clear that NDepend v2017.3 can also analyze .NET Standard assemblies. Interestingly enough, since .NET Standard 2.0, .NET Standard assemblies reference a single assembly named netstandard.dll, found in C:\Users\[user]\.nuget\packages\NETStandard.Library\2.0.0\build\netstandard2.0\ref\netstandard.dll.

By decompiling this assembly, we can see that it doesn’t contain any implementation, but it does contain all types that are part of .NET Standard 2.0. This makes sense if we remember that .NET Standard is not an implementation, but is a set of APIs implemented by various .NET profiles, including .NET Core 2.0, the .NET Framework v4.6.1, Mono 5.4 and more.
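To make this concrete, here is roughly what a decompiled type from netstandard.dll looks like. This is an illustrative sketch, not actual decompiler output: reference assemblies expose the full API surface, but every method body is a placeholder.

    namespace System
    {
        public sealed class String
        {
            // No implementation anywhere in netstandard.dll: bodies are
            // just "throw null;" placeholders for the API surface.
            public static bool IsNullOrEmpty(String value) { throw null; }
            public int Length { get { throw null; } }
        }
    }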

Browsing how the application is using .NET Core

Let’s come back to the MusicStore application that references 77 assemblies. This assembly granularity makes it impractical to browse dependencies with the dependency graph, since this generates dozens of items. We can see that NDepend suggests viewing this graph as a dependency matrix instead.

NDepend Dependency Graph on an ASP.NET Core 2.0 project

The NDepend dependency matrix scales seamlessly to a large number of items. The numbers in the cells also provide a good hint about the coupling they represent. For example, here we can see that 22 members of the assembly Microsoft.EntityFrameworkCore.dll are used by 32 methods of the assembly MusicStore.dll, and a menu lets us dig into this coupling.

NDepend Dependency Matrix on an ASP.NET Core 2.0 project

Clicking the menu item Open this dependency shows a new dependency matrix where only the members involved are kept (the 32 elements in columns use the 22 elements in rows). This way you can easily dig into which part of the application is using what.

NDepend Dependency Matrix on an ASP.NET Core 2.0 project

All NDepend features now work when analyzing .NET Core

We saw how to browse the structure of a .NET Core application, but let’s underline that all NDepend features now work when analyzing .NET Core applications. On the Dashboard we can see code quality metrics related to Quality Gates, Code Rules, Issues and Technical Debt.

NDepend Dashboard on an ASP.NET Core 2.0 project

Also, most of the default code rules have been improved to avoid reporting false positives on .NET Core projects.

NDepend code rules on an ASP.NET Core 2.0 project

We hope you’ll enjoy using all your favorite NDepend features on your .NET Core projects!

Migrating from HTTP to HTTPS in an IIS / ASP.NET environment

Google is urging more and more webmasters to move their sites to HTTPS for security reasons. We made this move last week for our IIS / ASP.NET website https://www.NDepend.com and we learned a few tricks along the way. Once you’ve been through it once it becomes pretty straightforward, but getting the big picture and handling every detail well is not trivial. So I hope this post will be useful.

HTTPS and Google Analytics Referrals

One reason for moving to HTTPS is that Google Analytics referrals don’t work when the user comes from an HTTPS website. And since most of your referrer websites are likely already HTTPS, if you stay on HTTP, your Google Analytics becomes blind.

Notice that once you’ve moved to HTTPS, you still won’t be able to track referrers that come from an HTTP URL, which is annoying since most of the time you don’t have edit access to those URLs.

Getting the Certificate

You can get free certificates from LetsEncrypt.com, but they expire after 3 months. The renewal process can certainly be automated, but instead we ordered a 2-year certificate from gandi.net for only 20 EUR for the two years. At that price you get the bare minimum and won’t obtain a certificate with the Green Address Bar, which costs around 240 EUR / year.

When ordering the certificate, a CSR (Certificate Signing Request) will be requested. The CSR can be obtained from IIS, as explained here for example, through the menu Generate Certificate Request. A few questions about who you are will be asked, the most important being the Common Name, which will typically be www.yourdomain.com (or, better, a wildcard, as in *.yourdomain.com). If the Common Name doesn’t match the website domain, the user will get a warning at browsing time, so this is a sensitive step.

Installing the Certificate in IIS

Once you’ve ordered the certificate, the certificate shop will provide you with a .crt or .cer file. This is the certificate. But IIS doesn’t deal with the .crt or .cer formats; it asks for a .pfx file! This is misleading, and the number-one explanation on the web is this one on the Michael Richardson blog. Basically you’ll use the IIS menu Complete Certificate Request (which follows the initial Generate Certificate Request). Then restart IIS or the server to make sure the certificate is taken into account.

Binding the Certificate to the website 443 Port in IIS

At that point the certificate is installed on the server. It now needs to be bound to your website’s port 443. First make sure that port 443 is open on your server, then use the IIS Bindings menu on your website. A binding entry has to be added, as shown in the picture below.

Once it’s added, just restart the website. Normally, you can now access your website through HTTPS URLs. If not, you may have to tweak the DNS pointers somehow, but I can’t comment on that since we didn’t run into any problem there.

At that point, both HTTPS and HTTP are browsable. HTTP requests need to be redirected to HTTPS to complete the migration.

301 redirection with Web.Config and IIS UrlRewriter

HTTP to HTTPS redirection can be achieved by modifying the Web.Config file of your ASP.NET website, to tell the IIS URL rewriter how to redirect. After a few attempts based on googling, here is what our redirection rules look like.
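The exact file is not reproduced here, but assembled from the rule fragments quoted in the list below, the rules look roughly like this (a hedged reconstruction, not the verbatim NDepend.com Web.Config):

    <!-- Hedged reconstruction assembled from the rule fragments discussed below. -->
    <system.webServer>
      <rewrite>
        <rules>
          <rule name="WWW redirect" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
              <add input="{HTTP_HOST}" pattern="^www.*" negate="true" />
              <add input="{HTTP_HOST}" pattern="localhost" negate="true" />
            </conditions>
            <action type="Redirect" url="https://www.ndepend.com/{R:1}" redirectType="Permanent" />
          </rule>
          <rule name="HTTP to HTTPS redirect" stopProcessing="true">
            <match url="(.*)" />
            <conditions>
              <add input="{HTTPS}" pattern="off" ignoreCase="true" />
              <add input="{URL}" pattern="(.*)XYZ" negate="true" ignoreCase="true" />
              <add input="{HTTP_HOST}" matchType="Pattern" pattern="^localhost(:\d+)?$" negate="true" />
            </conditions>
            <action type="Redirect" url="https://{HTTP_HOST}{REQUEST_URI}" redirectType="Permanent" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>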

If you believe this can be improved, please let me know. At least it works 🙂

  • <add input="{HTTPS}" pattern="off" ignoreCase="true" /> is the main redirection rule that redirects HTTP requests to HTTPS (this is the 301 redirection). You’ll find many sites on the web to test that your 301 redirection works fine.
  • Make sure to double-check that URLs with GET params are redirected correctly. On our side, url="https://{HTTP_HOST}{REQUEST_URI}" processes GET params seamlessly.
  • <add input="{URL}" pattern="(.*)XYZ" negate="true" ignoreCase="true"/> is important to avoid the HTTP to HTTPS redirection for a page named XYZ. Typically, if you have special pages with POST requests, they might be broken by the HTTPS redirection, so the redirection needs to be discarded for those.
  • <add input="{HTTP_HOST}" matchType="Pattern" pattern="^localhost(:\d+)?$" negate="true" /> avoids the HTTPS redirection when testing on localhost.
  • <add input="{HTTP_HOST}" pattern="^www.*" negate="true"/> just transforms ndepend.com requests into www.ndepend.com requests,
  • and <add input="{HTTP_HOST}" pattern="localhost" negate="true"/> avoids this WWW redirection on localhost.

Eliminate Mixed Content

At this point you are almost done. Yet depending on the topology of your website(s) and resources, it is possible that some pages generate a mixed-content warning. Mixed content means that some resources (like images or scripts) of an HTTPS web page are served through HTTP. When mixed content is detected, most browsers show users a warning about the page not being fully secure.

You’ll find tools to search for mixed content on your website, but you can also crawl the site yourself and use the Chrome console to get details about any mixed content found.

Update Google SiteMap and Analytics

Finally, make sure that your Google sitemap now references HTTPS URLs, and update your Google Analytics property for HTTPS.

I hope this content saves you a few headaches. I am certainly not an SSL nor an IIS expert, so once again, if some part of this tutorial can be improved, feel free to comment!

Our experience with using third-party libraries

NDepend is a tool that helps .NET developers write beautiful code. The project was started in April 2004 and is now used by more than 6,000 companies worldwide.

In more than a decade, many decisions were made, each with important consequences for how the code base evolved, sometimes good, sometimes less so. Relentless dog-fooding (i.e. using NDepend to analyze and improve the NDepend code base) helped us a lot to obtain more maintainable code and fewer bugs, and to improve the tool’s usability and features.

When it comes to working on and maintaining a large code base for several years, some of the most important decisions made are those about relying (or not) on third-party libraries. Choosing whether or not to rely on a library is a double-edged-sword decision that can, over time, bring a lot of value or cause a lot of friction. Ultimately, users won’t distinguish between your bugs and third-party libraries’ bugs; their problems become your problems. Consider this:

  • Sometimes the team just needs a fraction of a library and it may be more cost-effective, and more maintainable with time, to develop your own.
  • Sometimes the licensing of a free or commercial library will prevent you from achieving your goals.
  • Sometimes the library looks bright and shiny but becomes an obsolete project a few months later, and precious resources will have to be spent maintaining someone else’s code, or migrating toward a trendy new library.
  • Sometimes the library code and/or authors are so fascinating that you’ll be proud to contribute and be part of it.

Of course we all hope for the last case, and we had the chance to experience this a few times for real. Here are some libraries we’ve had great success with:

Mono.Cecil (OSS)

Mono.Cecil is an open source project developed by Jean-Baptiste Evain that can read and write data nested in .NET assemblies, including compiled IL code. NDepend has relied on Cecil for almost a decade to read IL code and other data. Both the library and the support provided are outstanding. The performance of the library is near optimal and all bugs reported were fixed in a matter of days or even hours. For our usage, the library is actually close to bug free. If only all libraries had this level of excellence…

DevExpress WinForm (Commercial)

NDepend has also relied on DevExpress WinForm for almost a decade to improve the UI look and feel. NDepend is a Visual Studio extension, and DevExpress WinForm makes for smooth visual integration with Visual Studio. Concretely, thanks to this library we achieved the exact same Visual Studio theming and skinning, docking controls a la Visual Studio, menus, bars and special controls like the BreadCrumb control to explore directories. We have never been disappointed with DevExpress WinForm. The bugs we reported were fixed quickly, it supports high DPI ratios and it is rock solid in production.

Microsoft Automatic Graph Layout MSAGL (OSS)

NDepend has relied on MSAGL for several years to draw all sorts of graphs of dependencies between code elements, including call graphs, class inheritance graphs, coupling graphs, path and cycle graphs… This library used to be commercial but is nowadays OSS. Here too, the bugs we reported were fixed quickly; it supports high DPI ratios and it is perfectly stable in production.

NRefactory (OSS)

NDepend has had a C# Code Query LINQ editor since 2012, a few years before Roslyn became RTM. We wanted to offer users a great editing experience, with code completion and documentation tooltips on mouse hovering. At that time NRefactory was the best choice, and it proved over the years to be stable in production. Nowadays Roslyn would certainly be a better choice, but given our initial investment and the fact that NRefactory still does the job well, we haven’t (yet) felt the need to move to Roslyn.

Here are a few things we prefer to keep in-house:

Licensing

While there are libraries for licensing, this is a vital, sensitive topic that requires a lot of flexibility over time, and we preferred not to delegate it. This came at the cost of plenty of development and support, and a significant level of acquired expertise. Even taking into account that these efforts could have been spent on product features, we still don’t regret our choice.

The licensing layer is a cornerstone of our relationship with our user community, and it cannot suffer any compromise. As a side remark, I have observed several times that the cost of developing a solid licensing layer postponed promising projects from going commercial for a while.

Persistence

Like most applications, NDepend persists and restores a lot of data, especially the large amount of data in analysis results. We could have used relational or object databases, but for a UI tool embedded in VS, the worst thing would be to slow down the UI and make the user wait. We decided that only optimal performance is acceptable for our users, and optimal performance in persistence means custom streams of bytes. The consequence of this decision is less flexibility: each time our data schema evolves, we need to put in extra effort to preserve backward compatibility.
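As an illustration of the idea only (a minimal sketch, not NDepend’s actual format), a custom binary persistence layer with an explicit format version looks like this:

    using System.IO;

    class AnalysisResultStore
    {
        // A hand-rolled binary format: fast to read and write, but every
        // schema change must be handled through the version number below.
        private const int CurrentVersion = 2;

        internal static void Save(Stream stream, int[] metrics)
        {
            using (var writer = new BinaryWriter(stream))
            {
                writer.Write(CurrentVersion);
                writer.Write(metrics.Length);
                foreach (var metric in metrics) writer.Write(metric);
            }
        }

        internal static int[] Load(Stream stream)
        {
            using (var reader = new BinaryReader(stream))
            {
                // Branch on the version here to keep reading files
                // written by older releases (backward compatibility).
                int version = reader.ReadInt32();
                var metrics = new int[reader.ReadInt32()];
                for (int i = 0; i < metrics.Length; i++)
                    metrics[i] = reader.ReadInt32();
                return metrics;
            }
        }
    }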

Let me underline that most of the time it is not a good idea to develop a custom persistence layer, because of the amount of work and expertise required. But taking into account our particular needs and goals, I think we made the right decision.

Production Logs

I explained our production logs system here. We consider it an essential component in making NDepend a super-stable product. Here also, we could have used a third-party library. We capitalize on our own logging system because, year after year, we customized it with a plethora of production information that was required to fix our very own problems. We kept the system lightweight and flexible, and it still helps us improve the overall stability and correctness of our products.

Dependency Matrix and Treemap UI Components

These UI components were developed years ago and are still up to date. Both then and now, I believe there is no good third-party alternative that meets all our requirements in terms of layout and performance. A few times we received offers to buy those components, but we are not a component provider and have no plans to become one.

In this post I unveiled a few core choices we made over the years. We hope this information will be useful for other teams.

A Visual Studio script that saves time and pain

After years of pain, I finally found a clean and definitive way to get rid of the dreadful issue: Could not copy assembly, the process cannot access the file because it is being used by another process!

Locked assembly

I have no idea how many .NET developers are coping with this issue, but on our side it used to be a daily annoyance, and there are many situations where an assembly file gets locked:

  • sometimes VS just loads the assembly, the PDB file or the .xml documentation file in-process, for an unknown reason (usually after a debug session)
  • sometimes the culprit seems to be vshost.exe
  • sometimes the culprit is the test runner process
  • sometimes, after a smoke test session, one just forgets to close all processes that hold assemblies…
  • …or sometimes it is just a zombie process that should have stopped but just didn’t!
  • and when developing a VS extension, all of these situations happen more often

When the culprit process isn’t obvious, I start Process Explorer to find and kill it! And when the culprit process is my current VS instance, it means I need to restart it, interrupting my current work for a significant time, and then losing my mental focus!!


The clean and definitive solution to this problem is the script below, to copy into a .bat file. This script must be invoked from the VS project pre-build events. It just moves the assembly file, the .pdb file and the .xml documentation file to a throwaway location (no problem if the .pdb or .xml file is missing).
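Here is a reconstruction of such a script (hedged: the original script is not reproduced in this extract, and the convention of a single argument holding the assembly path is an assumption):

    @echo off
    REM PreBuildEvents.bat -- hedged reconstruction, not the original script.
    REM %1 is assumed to be the full path of the assembly about to be built,
    REM e.g. "$(TargetPath)". A locked file cannot be deleted or overwritten,
    REM but Windows does allow moving/renaming it, so we move the previous
    REM outputs out of the way. If a previous .locked file is itself locked,
    REM the move fails silently, which is harmless for the build.
    if exist "%~1"        move /Y "%~1"        "%~1.locked"        >nul 2>&1
    if exist "%~dpn1.pdb" move /Y "%~dpn1.pdb" "%~dpn1.pdb.locked" >nul 2>&1
    if exist "%~dpn1.xml" move /Y "%~dpn1.xml" "%~dpn1.xml.locked" >nul 2>&1
    exit /b 0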

The key is that when a process locks a file, Windows still allows the file to be moved or renamed. I found the original idea in this stackoverflow answer, which itself took it from a Keyvan Nayyeri blog post that seems to have been removed since.

To invoke this script, just add the line below to all your VS projects’ pre-build event command lines. If you wish the script name and location to be different, just adapt $(SolutionDir)\BuildProcess\PreBuildEvents.bat.
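Assuming the script takes the target assembly path as its single parameter, as in the sketch above, a single line like this should do:

    call "$(SolutionDir)\BuildProcess\PreBuildEvents.bat" "$(TargetPath)"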

PreBuildEvent

Notice that this script does the job no matter the actual configuration (Debug or Release), and no matter whether the compilation is started from VS or from any other script.

-Patrick

NDepend updated to Version 6.2

NDepend version 6.2 has just been released. We have addressed over 20 bugs, including a blocker related to the Visual Studio 2015 Update 1 Git controls.

More specifically, the new Visual Studio 2015 Update 1 Git controls in the Visual Studio status bar were interacting with the NDepend Visual Studio extension’s status bar control, and as a consequence were provoking VS UI freezes. Fortunately, the Visual Studio team had warned partners (VSIP) a few weeks earlier that they were adding controls to the status bar. The issue came from a synchronous usage of the WPF dispatcher to implement the NDepend progress & status circle. Invoking the dispatcher asynchronously fixed the issue.
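In code, the difference boils down to the following (an illustrative sketch, not the actual NDepend source; UpdateProgressCircle is a hypothetical method that redraws the control):

    using System;
    using System.Windows.Threading;

    static class StatusCircle
    {
        static void Report(Dispatcher dispatcher, int percent)
        {
            // Synchronous: blocks the calling thread until the UI thread is
            // free; this is the usage that interacted badly with the new
            // Git status-bar controls.
            //dispatcher.Invoke(new Action(() => UpdateProgressCircle(percent)));

            // Asynchronous: queues the update and returns immediately (the fix).
            dispatcher.BeginInvoke(new Action(() => UpdateProgressCircle(percent)));
        }

        static void UpdateProgressCircle(int percent) { /* redraw the control */ }
    }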

GitStatusBar

We also stumbled upon an unusual issue due to an unfixed Windows bug. When working with a DataGridView with many rows (1000+), we can face an unmanaged StackOverflowException that crashes the process. The Windows bug is explained here http://stackoverflow.com/a/14716720/27194 and, as far as we know, it is not fixed. The problem occurs only when the Windows process TabTip.exe (“Touch Keyboard and Handwriting Panel Service”) runs, and the stackoverflow link explains that the only way to prevent it is to disable this touch keyboard service. We’re going the hard way: when NDepend starts, it now tries to kill this process. Most of the time this works, even if the Windows user is not an administrator. If this rough fix causes you any inconvenience, please let us know.
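In essence, the workaround looks like this (a simplified sketch of the idea, not the actual NDepend code):

    using System;
    using System.Diagnostics;

    static class TouchKeyboardWorkaround
    {
        // Kill TabTip.exe ("Touch Keyboard and Handwriting Panel Service")
        // at startup, to avoid the unmanaged DataGridView StackOverflowException.
        internal static void TryKillTabTip()
        {
            foreach (var process in Process.GetProcessesByName("TabTip"))
            {
                try { process.Kill(); }
                catch (Exception) { /* insufficient privilege: nothing more to do */ }
            }
        }
    }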

Apart from these two fixes, many other bugs were fixed and some improvements were added (see the complete list here). The bugs fixed also include some incorrect results that happened because the way Roslyn emits IL has changed significantly in some situations, and NDepend relies heavily on IL code analysis.

Enjoy!

NDepend Case Study: Increasing Development Efficiency in the Medical Laboratory Sector

Developing applications for use in the health care industry is stressful because the margin of error is almost non-existent. Whether your tool is for treatment, research, or analysis, it needs to be dependable and accurate. The more complex the application is, the higher the chance for errors and delays in development. Dependable companies abide by rigorous methodologies to develop their code before deploying it to clients. In this NDepend case study, we learn why a company in this sector chose NDepend, and why it became an integral part of their development process.

Stago works in the medical lab industry, producing lab analysis tools that focus on haemostasis and coagulation. Having worked hard for over 60 years and valuing long-term investments, they have made a name for themselves in the industry. A few years ago, they wanted to make their software development process more efficient. In addition, they wanted to easily enforce their own best practices and code quality standards across their teams. The goal was to catch issues earlier in the development cycle, to cut the costs and time spent on quality assurance post-development.

“We selected NDepend after reviewing all the other options on the market and it quickly became the backbone of our development effort.”
– Fabien Prestavoine, Software Architect at Stago

We are very grateful to Stago for sharing their success story with us. Stories such as this one are one of the main driving forces behind creating one of the most comprehensive and powerful .NET analysis tools on the market. Since implementing NDepend, Stago has:

  • Easily met all delivery deadlines
  • Cut both cost and time spent on quality assurance
  • Delivered a consistently dependable product
  • Improved communication between their developers and architects

To read more about how NDepend helped Stago streamline their development process, click here to download a PDF of the complete case study.

Or check out this slideshow:

Toward Bug Free Software: Lines of Defense

Hurrah!! Last week we released NDepend v6 RTM. Once again we relied on a 2-month private beta-testing period and a one-month Release Candidate period to do our best to release a polished and stable product.

I’d like to talk about our lines of defense for fixing as many bugs as possible. Except for the few pieces of software in the world that can afford mathematical proofs that they are bug-free (like avionics and some medical software), all other software, including NDepend, relies on an empirical approach to chasing and fixing bugs. An empirical approach is an evidence-based approach that relies on direct observation and experimentation to acquire new knowledge. An empirical approach will never lead to a bug-free product, but it can help a lot in keeping the number of bugs low, and make bugs happen in situations rare enough that they won’t impact most users’ experience.

I could write many blog posts about each line of defense; after more than a decade of applying them, there is so much to say. But I want this post to stay concise.

Production Crash Logs

Crashes are due to unhandled exceptions, and unhandled exceptions are due to unexpected situations at runtime. These typically include:

  • null reference access,
  • division by zero,
  • disposed object accessed,
  • invalid cast,
  • wrong method call parameters…

A bug doesn’t necessarily lead to a crash, but a crash necessarily means that there is a bug. Certainly the most important line of defense against bugs is to log all production crashes and relentlessly fix them all. The .NET Framework offers several unhandled-exception access points, including:
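Here is a sketch of the usual hooks (illustrative only; which access points apply depends on the kind of process, and LogCrash is a hypothetical method that writes the crash log discussed below):

    using System;
    using System.Threading.Tasks;
    using System.Windows.Forms;

    static class CrashLogBootstrap
    {
        static void HookUnhandledExceptions()
        {
            // Unhandled exceptions thrown on any thread of the AppDomain.
            AppDomain.CurrentDomain.UnhandledException +=
                (sender, e) => LogCrash(e.ExceptionObject as Exception);

            // Unhandled exceptions thrown on the WinForms UI thread.
            Application.ThreadException +=
                (sender, e) => LogCrash(e.Exception);

            // Faulted tasks whose exception was never observed.
            TaskScheduler.UnobservedTaskException +=
                (sender, e) => LogCrash(e.Exception);
        }

        static void LogCrash(Exception ex) { /* write stack trace, environment data… */ }
    }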

In some environments like Visual Studio hosting, these access points don’t work and you’ll have to write code to catch all exceptions in all possible handlers of your program.

Of course, some users are disconnected from the internet, or behind a proxy, and you won’t get their production crashes. Our statistics show that this concerns at most 20% of users, so being aware of 80% of all production crashes is certainly enough to get a good measure of what’s going wrong in production. And because Windows and .NET are highly sophisticated technologies that are constantly evolving, you can expect plenty of issues that never occurred on your team’s machines! For example, the NDepend v6 Release Candidate period showed us that users running NDepend v6 RC on a fresh Windows 8.1 or Windows 10 install experienced crashes because of a Win32 method that our code P/Invokes. Oddly enough, this P/Invoked method behaves differently when .NET v3.5 is not installed on the machine!

The key to successful production crash logs is to get as much useful data as possible per log. For example, here is a production crash we had a few weeks ago. When the same issue leads to several crash logs, we can start doing data mining on them: do they all occur with the same stack trace? With the same high-DPI resolution? On the same Windows platform version? On the same machine only? Notice also the stack frames, enriched with the IL offset retrieved through the StackFrame.GetILOffset() method. Many times in the past, this alone led us to the root cause.

ProductionCrash
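Retrieving these IL offsets is straightforward (an illustrative sketch of the idea):

    using System;
    using System.Diagnostics;
    using System.Text;

    static class CrashLog
    {
        // Append each stack frame along with the IL offset obtained
        // through StackFrame.GetILOffset().
        static string FormatStackTrace(Exception ex)
        {
            var builder = new StringBuilder();
            foreach (var frame in new StackTrace(ex, fNeedFileInfo: true).GetFrames())
            {
                builder.Append(frame.GetMethod())
                       .AppendFormat("  IL_{0:X4}", frame.GetILOffset())
                       .AppendLine();
            }
            return builder.ToString();
        }
    }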

 

You’ll notice that we only log crashes; we don’t have other forms of runtime logs, like logging every major event that happens (button clicked, panel opened, analysis started…). Our experience with logging events is that ultimately we logged either too many or too few of them. In both cases, the information that could help fix a particular issue is lost or hard to find. We found that having verbose crash logs was enough. Sometimes we can ask a user a question, like which action they performed just before the crash, but in the vast majority of real-world cases this information is implicitly contained in the stack trace. For the same reason, we use neither remote debugging nor Windows dump files. In our context, custom and verbose production crash logs are enough.

Code Covered by Automatic Tests and Code Contracts

Not only are production crash logs an important line of defense, but they also demand just a few days of dedicated work to set up.

Having automatic tests is the second line of defense. Contrary to production crash logs, not only does it require a lot of work, but it even changes (forever) the way you write code. After a decade of writing automatic tests, a lot of conclusions can be drawn. In my attempt to remain concise in this post, let’s summarize the most relevant ones in a few points:

  • The number of tests is absolutely meaningless.  When it comes to unit/automatic tests, the king measure is the percentage of code covered by tests.
  • A high percentage of code covered by tests is not enough; everything that can be checked must be checked. In almost all literature related to unit testing, you’ll read that checks are assertions in unit-test code. Few actually realize that assertions in the code itself are at least as important as assertions in unit-test code. There is a scientific terminology for assertions in code: code contracts. The important thing about code contracts is that they must fail both at manual-test time (see later) and at automatic-test time.
  • In NDepend, we use more than 20K simple Debug.Assert() calls in code. These are our contracts (see the sketch after this list). Debug.Assert() calls are removed from production code. This is OK for us since we want maximum performance, and having sophisticated assertions evaluated at runtime can significantly slow down an application. Hence we decided to sacrifice an important line of defense in the name of performance. By using MS Code Contracts, which can let assertions fail in production, we could increase the number and accuracy of production crash logs. This is a choice you must make depending on your application. Let’s note that NDepend.API actually supports MS Code Contracts, for their great ability to provide active documentation to users.
  • How much coverage is enough? My answer is 100%.
    • Typically, developers don’t want to lose time writing tests to cover, say, property getters and setters. My point is twofold: typically you can write higher-level tests that will cover these, and if these getters and setters contain assertions, that’s even better.
    • Typically, developers claim that 10% of a class is difficult to test, and that it takes as much effort to test this 10% as to test the remaining 90%. Once again, they don’t want to lose time! My point is that this 10% of code is by definition not easily testable; as a consequence, this code is both complex and not well designed, and hence certainly highly bug-prone. So basically, the most bug-prone portion of the code ends up not being covered by automatic tests!! This is nonsense, but it is the reality in most dev shops.
    • Typically, developers say that not everything is coverable by tests, and I agree. Code calling blocking methods like MessageBox.Show() is just not coverable; this is why such calls must be mocked. Some other UI code can be especially tricky to test. The approach we use is to design our UIs in such a way that the underlying code can be triggered by unit tests (some would say automated by tests), and then to rely mostly on assertions in the UI code itself to catch any potential regression. Of course, when possible, assertions in tests are welcome, and of course such UI code is highly decoupled from the non-UI logic, which has its own set of tests. Doing so has proven to work for our dev shop.
    • I’ll add that when a class or a group of classes is 100% covered by tests, experience shows that the innocent-looking fact that a coverage hole suddenly appears often means there is a new problem, either in the code, in the test code, or in both. More often than not, we have discovered regression bugs this way that were not caught by assertions. This is why we use tooling (namely NDepend) to check that all code that used to be 100% covered remains 100% covered.
    • Last but not least, when a bug is fixed, if the fixed code portion is already covered by tests, it is easy to write assertions specific to the fix to avoid any regression in the future. And when most classes are 100% covered, more often than not it is a matter of minutes, or even seconds, to write such assertions.
  • If your application is successful enough, the code base will grow over the years. Ultimately, the biggest benefit you can expect from writing coverage-oriented automatic tests is that the number of regression issues will remain under control, because it won’t be proportional to the growing size of the code base. Keep in mind that only code covered by tests whose results are asserted somehow is protected by this line of defense.
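To make the contracts point above concrete, here is the kind of assertion-as-contract we mean (an illustrative sketch, not actual NDepend code):

    using System.Diagnostics;

    class ReportBuilder
    {
        private string[] m_Sections;  // hypothetical state

        internal string GetSection(int index)
        {
            // Contracts as plain assertions: they fail loudly during manual-test
            // and automatic-test sessions, and are removed from the Release build.
            Debug.Assert(m_Sections != null, "GetSection() called before initialization");
            Debug.Assert(index >= 0 && index < m_Sections.Length, "index out of range");
            return m_Sections[index];
        }
    }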

Let’s illustrate this section with NDepend’s 82.6% code coverage, visualized with the NDepend metric view. We abide by our own rules.

Coverage

Static Analysis and Code Review

I see static analysis as being like unit tests, except that instead of exercising the code dynamically, static analysis exercises properties that can be inferred from the code. In the previous section, for example, I wrote that if a class is 100% covered by tests, it must remain 100% covered. And I even underlined that if a hole suddenly pops up in this perfect coverage, more often than not, understanding the root cause of the hole will lead to a bug fix. This illustrates how static analysis is actually a line of defense against bugs.

In this analogy between static analysis and unit tests, a test is actually a code rule. NDepend makes it easy to write custom code rules: it is just a matter of writing a C# LINQ query based on a fluent API, for example:
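Something in this spirit, for instance (an illustrative rule written against NDepend’s CQLinq syntax; the exact rule shown in the original post is not reproduced here):

    // <Name>Complex methods should be fully covered by tests</Name>
    warnif count > 0
    from m in Application.Methods
    where m.CyclomaticComplexity > 10 &&
          m.PercentageCoverage < 100
    select new { m, m.CyclomaticComplexity, m.PercentageCoverage }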

Code rules involving code coverage and diff after setting a baseline are especially suited to hunting regression bugs. But static analysis can handle many other properties of the code, and it is not only related to bugs but also to code maintainability.

  • Code metrics: for example, methods with too many loops, if, else, switch, case… end up being incomprehensible, hence unmaintainable. Counting these through the Cyclomatic Complexity code metric is a great way to assess when a method becomes too complex.
  • Dependencies: if the classes of your program are entangled, the effects of any change in the code become unpredictable. Static analysis can help assess when classes and components are entangled.
  • Immutability: types that are used concurrently by several threads should be immutable, or else you’ll have to protect state read/write accesses with complex lock strategies that will end up being unmaintainable. Static analysis can make sure that some classes remain immutable.
  • Dead code: dead code is code that can be removed safely, because it is no longer invoked at runtime. Not only can it be removed, it must be removed, because this extra code adds unnecessary complexity to the program. Static analysis can find most of the dead code in your program (yet not all of it).
  • API breaking changes: if you present an API to your clients, it is very easy to remove a public member without noticing and thus break your clients’ code. Static analysis can compare two states of a program and warn about this pitfall.
  • API usage: some APIs are intended to be used carefully. For example, a class that holds disposable fields must in general itself be disposable, except when the disposable fields’ lifetimes are not aligned with the class instances’ lifetime, which then sounds like a design problem.

The list of code properties that can be exercised by static analysis is endless, and the ones quoted above refer to NDepend’s capabilities. Other tools like ReSharper or CodeRush offer other sorts of static analysis to warn about potential micro-issues, like a foreach variable accessed from a closure, which can lead to major problems.

Static analysis is not only about directly finding bugs, but also about finding bug-prone situations that can decrease code understanding and maintainability.

Concerning code review, I don’t have much to say. It is static analysis, except that the logic is checked by a human instead of by automatic rules. Thus it is highly imperfect and time-consuming, yet we still practice it because experience shows that it helps find issues that can hardly be found otherwise. The key to code review is to do it on bug-prone code, which includes refactored code and new code that hasn’t reached production yet.

Manual Tests and Beta Testing

No matter how good a team is at the previously explained lines of defense, if it fails at manual tests and beta testing, the end product will be buggy and ultimately unusable.

Because not all bugs lead to obvious crashes, tests done by humans are essential. For example, only a potential user can notice an incoherent numerical result in a UI.

Manual testing is like code review: highly imperfect and time-consuming, and a team cannot capitalize on it. Yet experience shows that it helps find issues that can hardly be found otherwise.

We mentioned code contracts previously; they work hand in hand with manual tests. When, during a manual test session, I have the chance to break an assertion, this actually makes me happy 🙂 because I know this is a great opportunity to fix a bug before it reaches the next release.

Manual testing actually includes user feedback. Users are paying for a product, and one main goal is to offer them a bug-free product. Nevertheless, de facto, users are also testers, and listening carefully to a user’s bug report and relentlessly struggling to fix it is an essential line of defense. Of course this does not only apply to bugs, but also to feature improvements, new feature suggestions, documentation gaps, and much more, but those are other topics.