
The proper usages of Exceptions in C#

The C# exception basics are generally well understood. However, exceptions are often used as a way to sweep error-handling duty under the carpet. As I did in The proper usages of the keyword ‘static’ in C#, I’d like to share here the right usages of exceptions I came up with after 25 years of OO programming.

There are various situations that might require the usage of exceptions:

  • A) Data gathering: when the program faces an unexpected state in the execution environment that prevents it from gathering some data: a missing file, a missing registry key, restricted security permissions…
  • B) Data validation: when the program faces wrong user input or environment data: an invalid password, an invalid mail address, a non-well-formed JSON string, an unexpected web request…
  • C) Bug: when the program faces the consequences of a bug. Such a situation can only be solved with a new version of the application that fixes the bug.
  • D) Severe outage: typically a failure to connect to the database, or network unavailability.

The rule of thumb is that an application with no bug and no severe outage should not throw exceptions. This translates to: the exception bubble-up mechanism should only be used to log bugs and severe outages.

It is important to implement a way to be notified of C) and D) exceptions that happen in production. This way you automatically gather data about bugs you might not be aware of. To do so, let these exceptions bubble up and catch them in your entry-point method. Alternatively, you can use a runtime handler like AppDomain.UnhandledException. From there, log the crash to your production crash server. These bugs often happen in environments you haven’t tested yet. Hence, it is important to gather relevant environment data such as OS version, CLR version, user permissions, DPI settings, product install path…
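A minimal sketch of such a last-chance handler (the CrashReporting name and the report fields are illustrative; a real implementation would post the report to a crash server rather than write to stderr):

```csharp
using System;

static class CrashReporting
{
    // Hypothetical crash-report builder: gathers environment data that helps
    // reproduce bugs happening in environments you haven't tested yet.
    public static string BuildReport(Exception ex) =>
        $"Unhandled: {ex.GetType().Name}: {ex.Message}" + Environment.NewLine +
        $"OS: {Environment.OSVersion}" + Environment.NewLine +
        $"CLR: {Environment.Version}" + Environment.NewLine +
        $"64-bit process: {Environment.Is64BitProcess}";

    // Last-chance handler: only C) bugs and D) severe outages should reach it.
    public static void Install() =>
        AppDomain.CurrentDomain.UnhandledException += (s, e) =>
            Console.Error.WriteLine(BuildReport((Exception)e.ExceptionObject));
}
```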

I will now focus on A) and B) cases.

Wrapping Exceptions Not Caused by a Bug: The TryXXX Pattern

A program takes inputs and produces outputs. All inputs must be A) gathered and B) validated before processing. An exception can occur in both cases.

From the principle an application with no bug and no severe outage should not throw exceptions, such A) and B) exceptions shouldn’t bubble up. In other words, catch exceptions as early as possible and declare the try{...}catch{...} scopes in a method that can return a failure exit code. Typically such a method is named TryXXX().

I figured out this simple TryXXX() pattern during an MVP session on the Microsoft campus a long time ago. The session presented the then-new .NET 1.1 BCL method Int32.TryParse(string, out Int32):bool. At that time Microsoft engineers introduced a whole range of TryParse() methods. Developers were complaining that it was not practical (nor performant) to write a try/catch scope around each call to Int32.Parse(string):Int32, every time a primitive value had to be parsed from a string.
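The difference in call-site ergonomics is easy to see (a small illustration, not BCL code):

```csharp
using System;

static class ParseDemo
{
    // Without TryParse: a try/catch scope is needed around each parse attempt.
    public static int ParseOrDefault(string s)
    {
        try { return int.Parse(s); }
        catch (FormatException) { return -1; }
    }

    // With TryParse: the invalid-input case becomes an ordinary boolean outcome.
    public static int TryParseOrDefault(string s) =>
        int.TryParse(s, out var value) ? value : -1;
}
```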

And I agree with these complaints. It is awkward to clutter your business logic with try/catch. You should consider input gathering and validation failures as just another business requirement to account for.

The TryXXX Pattern in Action

Writing textual content to a file is typically an operation that can fail. The OS user might not have the permissions, the file might be read-only, the volume in the file path might not exist… Hence File.WriteAllText() can throw all sorts of exceptions. In the NDepend code base we wrote a rule to force all calls to File.WriteAllText() to go through our own TryWriteAllText() implementation.
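Here is a sketch of what such a method can look like, consistent with the remarks that follow (the exact failure message and names are approximations, not NDepend’s actual code):

```csharp
using System;
using System.Diagnostics;
using System.IO;

static class FileHelper
{
    // TryXXX pattern: a boolean outcome plus a human-readable failure reason.
    public static bool TryWriteAllText(string filePath, string content,
                                       out string failureReason)
    {
        // Null inputs would mean a bug in our own callers (case C), see below.
        Debug.Assert(filePath != null);
        Debug.Assert(content != null);
        try
        {
            File.WriteAllText(filePath, content);
            failureReason = null;
            return true;
        }
        catch (Exception ex)
        {
            // Binary outcome, but no detail is swallowed: the reason
            // is kept to be shown to the user.
            failureReason = $"Can't write file {filePath}. {ex.GetType().Name}: {ex.Message}";
            return false;
        }
    }
}
```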

Remarks on the TryWriteAllText() implementation

  • We don’t yet use the C# 8 nullable reference types that can prevent the inputs from being null at compile time. Nevertheless, the ideas explained below apply to any sort of argument validation.
  • It is often advised to avoid catching all kinds of exceptions in a catch block, as we do here with catch(Exception). To me this advice means that you shouldn’t swallow exceptions in the root or entry-point method. Instead, those exceptions must bubble up to the crash-in-production log handler.
  • File.WriteAllText() can throw all sorts of exceptions: UnauthorizedAccessException, IOException, PathTooLongException… But with catch(Exception) we drastically simplify all these cases into a binary outcome: OK) the file has been written properly, or KO) something went wrong and prevented the file write operation.
  • Despite the simplicity of the binary outcome, we don’t swallow any detail of what happened. We log the exception kind and message in the failureReason string that will be shown to the user so she/he can take action.
  • This method could be refined by returning an enum FileWritten { Yes, No } instead of using a boolean (read Code Smell – Primitive Obsession and Refactoring Recipes).
  • Since our users are developers we don’t localize our human-readable string: shouldn’t every software developer understand English?

Method Argument Validation: Assertions vs. Explicit Exception Throws

In the implementation of TryWriteAllText() we don’t validate parameters and eventually throw an exception. This is because this method is not an NDepend.API method: only members of our team can call TryWriteAllText(), so we have complete control over the calls. If filePath or content is null, it means de facto that we have a bug somewhere in the stack of callers. Thus we fall back to the C) bug exception case explained above: a NullReferenceException will bubble up to our AppDomain.UnhandledException handler, which will notify our production bug-tracking server.

Notice that we validate our parameters with Debug.Assert(paramValidation), mostly for two reasons:

  1. Code documentation: with such an assertion, when reviewing code we don’t need to account for the case of inputs being null. It is clear that we don’t expect null values here.
  2. To get an assertion that fails at test time: both automatic tests and smoke tests. A smoke test is when the developer exercises his/her code manually. Because a null input means a bug, it is better to be informed as early as possible. To me such an assertion is conceptually like an Assert.IsNotNull() in a test. I don’t mind where the assertion is, in test code or in application code. It is here to validate a state, and if it fails at smoke or automatic test time, it means there is a bug to fix. Of course, relying on Debug.Assert() is not suitable for all scenarios, like for example a web server. But you get the point: use your preferred assertion framework, assert everything that can be asserted in your test code and application code, and make sure to be notified at both smoke and automatic test time when an assertion fails.
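As an illustration of these two reasons (a hypothetical method, not from the NDepend code base), here is an assertion that fails at test time in DEBUG builds and vanishes from RELEASE builds:

```csharp
using System;
using System.Diagnostics;

static class Geometry
{
    public static double Hypotenuse(double a, double b)
    {
        // Documents the contract and fails at smoke/automatic-test time
        // (DEBUG builds only): a negative length would mean a bug in a
        // caller, not bad user input.
        Debug.Assert(a >= 0 && b >= 0, "lengths must be non-negative");
        return Math.Sqrt(a * a + b * b);
    }
}
```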

Instead of using Debug.Assert(paramValidation) we could have used if(!paramValidation){throw XYZ}. Such an exception clearly falls into the C) exception-used-for-bug case. Doing so has the advantage of failing earlier at production time. As a consequence, explicitly throwing exceptions produces clearer exception crash reports. On the other hand, our Debug.Assert() assertions are removed from production code, so the exception crash reports we get in our logs need to be investigated to understand what happened. But the if(!paramValidation){throw XYZ} approach also has some cons:

  • it clutters the code with extra {throw XYZ} scopes;
  • a test is needed for each {throw XYZ} scope;
  • there is a small performance hit at runtime when executing thousands or millions of paramValidation checks per second.
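For comparison, the same hypothetical contract written with the explicit-throw approach. It fails fast in production too, but exhibits the three cons above:

```csharp
using System;

static class GeometryThrowing
{
    public static double Hypotenuse(double a, double b)
    {
        // Extra {throw XYZ} scopes that clutter the method, each needing
        // its own test, and each executed on every single call.
        if (a < 0) throw new ArgumentOutOfRangeException(nameof(a));
        if (b < 0) throw new ArgumentOutOfRangeException(nameof(b));
        return Math.Sqrt(a * a + b * b);
    }
}
```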

Personally, I favor the assertion approach over the explicit exception-throw approach. To me, the automatic test suite plus assertions in code is like a scaffold to exercise code. Such a scaffold is not meant to be delivered with production code. We want our production code to be as fast and as slim as possible. Thus, we accept the cost of investigating what happened from exception crash reports. By adding the IL offset of each method call in the logged stack trace, this is usually a fairly simple task.

Cascading the TryXXX Pattern

Thanks to the TryWriteAllText() method, any failure when writing a file is now processed with all the respect due to a business-logic requirement. All processing that depends directly or indirectly on a file-writing operation can potentially fail. It means that all callers of this method must adopt the TryXXX() pattern, until a method can decide what to do upon failure:

  • inform the user with a meaningful message if the program cannot recover, then close the program gracefully;
  • inform the user that she/he should retry;
  • or maybe just silently log the failure if the program can recover despite it.
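A compressed sketch of such a cascade (method names hypothetical): the leaf Try method wraps the risky call, intermediate callers propagate the boolean outcome, and the top of the cascade decides what to tell the user.

```csharp
using System;
using System.IO;

static class ReportSaver
{
    // Leaf Try method wrapping the operation that can actually fail.
    public static bool TryWriteAllText(string path, string content, out string failureReason)
    {
        try { File.WriteAllText(path, content); failureReason = null; return true; }
        catch (Exception ex) { failureReason = ex.Message; return false; }
    }

    // Intermediate caller: cannot decide what to do, so it propagates the outcome.
    public static bool TrySaveReport(string dir, string report, out string failureReason) =>
        TryWriteAllText(Path.Combine(dir, "report.txt"), report, out failureReason);

    // Top of the cascade: this is where we decide what to do upon failure.
    public static string SaveReportCommand(string dir, string report) =>
        TrySaveReport(dir, report, out var reason)
            ? "Report saved."
            : $"The report could not be saved: {reason} Please check the location and retry.";
}
```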

See below all the processing in our code that can fail if a file cannot be written. Leaf methods decide what to do on failure:

Illustrating the Try Pattern Contagion
A Case Study of the Try-Cascading-Strategy

Meaningful error message

Sometimes error messages can be so frustrating!

A fortunate consequence of the TryXXX() call cascading is that you get a chance to refine the error message. For example, the method AttachNewProjectForm.TryBuildAndSaveProjectFile() in the graph above can display a meaningful message to the user, explaining which file could not be written and why.

Now the user knows all the details and can recover without contacting product support. Less support, less friction with users: those are the goals of proper error management!

Nevertheless, keep in mind that a meaningful error description can also be a security threat. When testing a web app you want to provide all the details to the test engineer (if a DB connection string is invalid, for example). But in production you may have to fall back to a “Something went wrong on our end, please try again” message.

To summarize

In this post I advise keeping exceptions only for hopeless situations, usually due to bugs or severe production outages. In such situations the only good attitude is to fail bluntly, inform the user, be informed, and fix it ASAP. A bug or a severe outage means that you no longer control what is happening. Some client data can then get corrupted. The consequences can be much more painful than having a client complain that your application crashes.

On the other hand, the usual failures that can happen at runtime must be treated like a business requirement: they must be fully anticipated and fully tested. I don’t want a method like AttachNewProjectForm.TryBuildAndSaveProjectFile() to catch(IOException), because an IOException is an irrelevant implementation detail that clutters my code at this point. Even worse, with such a strategy plenty of methods would have to catch(IOException), leading to a lot of clutter and code duplication.

It is often claimed that the advantage of exceptions is that they cannot be ignored. And indeed, a developer can still call ProjectHelper.TrySaveBlankProjectToFile(string filePath, IProject project, out string failureReason):bool and ignore that this method can return false with a failure description. But if you are working with such a developer, you have an HR problem. And it is still possible to write an analyzer to make sure that the result of a TryXXX() call doesn’t get ignored.

What about custom exception classes?

If you are publishing a library and don’t want to present your clients with TryXXX() methods, you can still provide your own custom exception classes and properly document the API methods that throw them. Note that it is important not to catch all kinds of exceptions within the library, because the client code needs to be notified of system exceptions like OutOfMemoryException.
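A sketch of that approach (ProjectLoadException and ProjectLoader are hypothetical names): the library catches only the exceptions it can describe, wraps them in a documented custom type, and lets system exceptions flow through to the client.

```csharp
using System;
using System.IO;

// Custom, documented exception type that the library's clients can catch.
public sealed class ProjectLoadException : Exception
{
    public ProjectLoadException(string message, Exception inner)
        : base(message, inner) { }
}

public static class ProjectLoader
{
    // Throws ProjectLoadException if the project file can't be read.
    // System exceptions (OutOfMemoryException…) are deliberately NOT caught.
    public static string Load(string filePath)
    {
        try { return File.ReadAllText(filePath); }
        catch (IOException ex) // only the failures we can describe to the client
        {
            throw new ProjectLoadException($"Can't load project '{filePath}'.", ex);
        }
    }
}
```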


In this article we questioned strategies about how and when exceptions should be used. For that, we distinguished usual failure situations from bug and production-outage scenarios.

With the try-cascading strategy exposed here, the team can embrace a holistic approach to all possible program paths. A solid exception-handling implementation requires a lot of work and maintenance. On the other hand, a try-cascading implementation ends up being lighter and simpler. Other benefits are:

  • Increased code clarity: all program paths are well anticipated and properly tested.
  • Improved code reviews: a TryXXX() method can fail by design, and one must define what happens upon failure.
  • Increased code testability: everything can be deemed tested only when even the failure cases are fully tested.
  • Reduced friction with users: the failure messages shown to users are carefully written, refined and tested.

Interestingly enough, the call graph shown above was generated by exporting the methods matched by this code query.

My dad was an early programmer in the ’70s, so I was fortunate to switch from playing with Lego to programming my own micro-games while I was still a kid. Since then I have never stopped programming.

I graduated in Mathematics and Software Engineering. After a decade of C++ programming and consultancy, I got interested in the brand-new .NET platform in 2002. I had the chance to write the best-selling book (in French) on .NET and C#, published by O'Reilly (more than 15,000 copies), and I also taught some academic and professional courses on the platform and C#.

Over the years, I developed a passion for understanding the structure and evolution of large, complex real-world applications, and for talking with the talented developers behind them. As a consequence, I got interested in static code analysis and started the NDepend project.

Today, with more than 8,000 client companies, including many of the Fortune 500, NDepend offers deeper insight into and understanding of code bases to a wide range of professional users around the world.

I live with my wife and our twin babies Léna and Paul on the beautiful island of Mauritius in the Indian Ocean.
