
Adding Static Analysis to Your Team’s DNA

Stop me if this sounds familiar.  (Well, not literally.  I realize that asynchronous publication makes it hard for you to actually stop me as I type.  Indulge me the figure of speech.)  You work on a codebase for a long time, all the while having the foreboding sense of growing messiness.  One day, perhaps when you have a bit of extra time, you download a static analyzer to tell you “how bad.”

Then you have an experience like a holiday-time binge eater getting on a scale on January 1st.  As the tool crunches its results, you wince in anticipation.  Next, you get the results, get depressed, and then get busy correcting them.  Unlike shedding those holiday pounds, you can often fix the most egregious errors in your codebase in a matter of days.  So you make those fixes, pat yourself on the back, and forget all about the static analyzer, perhaps letting your trial expire or leaving it to sit on the shelf.

If you’re wondering how I got in your head, consider that I see this pattern in client shops frequently.  They regard static analysis as a one-time cleanup effort, to be implemented as a small project every now and then.  Then they resolve to carry the learning forward to avoid making similar mistakes.  But, in a vacuum, they rarely do.
Continue reading Adding Static Analysis to Your Team’s DNA

Detecting Performance Bottlenecks with NDepend

In the past, I’ve talked about the nature of static code analysis.  Specifically, static analysis involves analyzing programs’ source code without actually executing them.  Contrast this with runtime analysis, which offers observations of runtime behavior via introspection or other means.  This creates an interesting dynamic regarding the idea of detecting performance bottlenecks with static analysis, because performance is inherently a runtime concern.  Static analysis tends to do its best, most direct work with source code considerations.  It requires a more indirect route to predict runtime issues.

For example, consider something simple.
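
The method below is a reconstruction for illustration (the type and member names are invented), but it matches the discussion that follows:

public interface ICustomerService
{
    Customer GetById(int customerId);
}

public class Customer
{
    public void MarkProcessed() { /* ... */ }
}

public class CustomerProcessor
{
    private ICustomerService theService;  // may or may not be set by the time we run

    public void ProcessCustomer(int customerId)
    {
        // Dereferences theService with no null check.  Whether this ever
        // throws at runtime depends entirely on how callers use the class.
        theService.GetById(customerId).MarkProcessed();
    }
}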

With a static analyzer, we can easily look at this method and say, “you’re dereferencing ‘theService’ without a null check.”  However, it gets a lot harder to talk definitively about runtime behavior.  Will this method ever generate an exception?  We can’t know that with only the information present.  Maybe the only call to this in the entire codebase happens right after instantiating a service.  Maybe no one ever calls it.

Today, I’d like to talk about using NDepend to sniff out possible performance issues.  But my use of “possible” carries significant weight, because “definitive” gets difficult.  You can use NDepend to inform reasoning about your code’s performance, but you should do so with an eye to probabilities.

That said, how can you use NDepend to identify possible performance woes in your code?  Let’s take a look at some ideas.

Continue reading Detecting Performance Bottlenecks with NDepend


How to Scale Your Static Analysis Tooling

If you wander the halls of a large company with a large software development organization, you will find plenty of examples of practice and process at scale.  When you see this sort of thing, it has generally come about in one of two ways.  First, the company piloted a new practice with a team or two and then scaled it from there.  Or, second, the development organization started the practice when it was small and grew it as the department grew.

But what about “rolled it out all at once?”  Nah, (mercifully) not so much.  “Let’s take this thing we’ve never tried before, deploy it in an expensive rollout, and assume all will go well.”  Does that sound like the kind of plan executives with career concerns sign off on?  Would you sign off on it?  Even the pointiest-haired of managers would feel gun-shy.

When it comes to scaling a static analysis practice, you will find no exception.  Invariably, organizations grow the practice as they grow, or they pilot it and then scale it up.  And that raises the question of “how?” when it comes to scaling static analysis.

Two main areas of concern come to mind: technical and human.  You probably think I’ll spend most of the post talking technical, don’t you?  Nope.  First of all, too many tools, setups, and variations exist for me even to scratch the surface.  But secondly, and more importantly, a key person that I’ll mention below will take the lead for you on this.

Instead, I’ll focus on the human element.  Or, more specifically, I will focus on the process for scaling your static analysis — a process involving humans.

Continue reading How to Scale Your Static Analysis Tooling


How to Prioritize Bugs on Your To-Do List

People frequently ask me questions about code quality.  People also frequently ask me questions about efficiency and productivity.  But it seems we rarely wind up talking about the two together.  How can you most efficiently improve quality via the fixing of bugs?  Or, more specifically, how should you prioritize bugs?

Let me be clear about something up front.  I’m not going to offer you some kind of grand unified scheme of bug prioritization.  If I tried, the attempt would come off as utterly quixotic.  Because software shops, roles, and offerings vary so widely, I cannot address every possible situation.

Instead, I will offer a few different philosophies of prioritization, leaving the execution mechanics up to you.  These should cover most common scenarios that software developers and project managers will encounter.
Continue reading How to Prioritize Bugs on Your To-Do List


The Relationship between Static Analysis and Continuous Testing

As an adult, I have learned that I have an introvert-type personality.  I do alright socially, don’t mind public speaking, and do not (I don’t think) present as an awkward person.  So, learning about this characterization surprised me somewhat, but only until I fully understood it.

I won’t delve into the finer points of human psychology here, but suffice it to say that introverts prefer to process and grok questions before responding.  This describes me to a tee.  However, working as a consultant and giving frequent advice clashes with this and has forced me to develop somewhat of a knack for answering extemporaneously.  Still, you might ask me just the right question to cause me to cock my head, blink at you, and frown.

I received just such a question the other day.  The question, more or less, was, “if we have continuous testing, do we really need static analysis?”  And, just like that, I was stumped.  This didn’t square, and I wanted time to think on that.  Luckily, I’ve had a bit of time.  (This is why I love blogging.) Continue reading The Relationship between Static Analysis and Continuous Testing


Easy to Miss Code Smells

The concept of a code smell is, perhaps, one of the most evocative in our profession.  The name itself has a levity factor to it, conjuring a mental image of one’s coworkers writing code so bad that it actually emits a foul odor.  But the metaphor has a certain utility as well in the “where there’s smoke, there may be fire” sense.

In case you’re not familiar, a code smell is an observable feature of the code (the smoke) that often indicates a deeper existing problem (the fire).  When you say that a code smell exists, what you’re communicating is “you may be justified here, but I’m skeptical; in my experience, this is probably a design flaw.”

Of course, accusing code of having a smell is only slightly less incendiary to the author than accusing code of being flat out bad.  Them’s fightin’ words, as they say.  But, for all the arguments and all of the righteous indignation that code smell accusations have generated over the years, their usefulness is undeniable.

No doubt you’ve heard of some of the most common and easiest to visualize code smells.  The God Class, Primitive Obsession, and Inappropriate Intimacy all come to mind.  These indicate, respectively, a class in your code base doing way too much, a tendency to use primitive types when you should take advantage of classes, and a module or class that breaks encapsulation by knowing too many details about another.  The combination of their visual memorability and their wisdom has prodded us over the years to break things down, to create cohesive objects, and to preserve encapsulation.
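
To make one of these concrete, here is a quick sketch of Primitive Obsession and its remedy (the phone number example is my own, invented for illustration):

using System;

// Smell: the domain concept "phone number" hides inside a raw string,
// so every caller must re-validate it (or forget to).
public class SmsNotifier
{
    public void Notify(string phoneNumber, string message) { /* ... */ }
}

// Remedy: a small type names the concept and validates it exactly once.
public class PhoneNumber
{
    public string Value { get; }

    public PhoneNumber(string value)
    {
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentException("A phone number is required.", nameof(value));
        Value = value;
    }
}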

I would argue, however, that there are many more code smells out there than the big, iconic ones that get a lot of attention.  I’d like today to discuss a few that I don’t think are as commonly known.  I’ll make the case for why, once you’ve mastered avoiding the well-known ones, you should watch for these as well.

Continue reading Easy to Miss Code Smells

5 Habits that Help Code Quality

When I’m called in to do a strategic assessment of a codebase, it’s never the result of everything being awesome.  That is, no one calls me up and says, “we’re ahead of schedule, under budget, and knocking it out of the park, so can you come in and tell us what you think of our code?”  Rather, I get calls when something isn’t going according to plan and the business people involved want to get some insight into what underlying causes there are in the code and in the team’s approach.

When the business gets involved this way, there is invariably a fiscal operational concern, either overtly or lurking just beneath the surface.  I’ll roll this up to the general consideration of “total cost of ownership” for the codebase.  The business is thus asking, “why are things proving to be more expensive than we thought?”

Typically, I come in and size up the situation, quantify it objectively, and then use analogies and examples to make clear what’s happening.  After I do this, pretty much without exception, the decision-makers to whom I’m speaking want to know what small things they can do, internally, to course correct.  This makes sense when you think about it.  If your doctor told you that your health outlook wasn’t great, you’d cross your fingers and say, “but I can fix it by changing my diet and exercise a little, right?”  You wouldn’t throw yourself on the table and say, “cut me open and make sure whatever you do is expensive!”

I am thus frequently asked, by both developers and by management, “what are the little things we can do to improve and maintain code quality?”  As such, this seems like excellent fodder for a blog post.  Here are my tips, based on years of observation of what correlates with healthy codebases and what correlates with distressed ones.

Continue reading 5 Habits that Help Code Quality

The Better Code Book – Our MVPs of 2015

We firmly believe spaghetti belongs on the dinner table and not in code. Our mission when starting NDepend was to create a tool to make best coding practices easier to maintain and improve. Writing has always been part of our message (see Patrick Smacchia’s work on CodeBetter.com) and we are proud to present our favorite pieces of writing from around the web in the last year, collected in what we are calling the Better Code Book.

We wanted to focus not only on how developers and architects use NDepend to improve their code, but also on how to use static analysis in a broader, management sense.  We are extremely grateful to our contributors on this project.  Let us introduce them:

Bjørn Einar Bjartnes is a developer at the Norwegian Broadcasting Corporation.  He currently works as a backend developer on the API team, serving web, mobile, and TV clients, among others, with metadata about programs and video streams.  He holds an MSc in Engineering Cybernetics and has a background in the petroleum industry, which has probably shaped his view on systems design.  Also, Bjørn is active in the local F# Meetup and a proud member of the lambda club, playing with all things useless related to computers.  You can also follow him on Twitter: @bjartnes

Jack Robinson is a twenty-something student in his final year of a degree in Software Engineering at Victoria University of Wellington. Currently an Intern Developer at Xero, he enjoys writing clean code, playing a board game or two with his friends, or just sitting down and watching a good film. You can read not just his musings on computer science, but also reviews on films and more at his website jackrobinson.co.nz

Prasad Narravula is a programmer, architect, consultant, and problem-solving leader.  He helps teams with agile development essentials: feedback loops to fail fast, enabling (engineering) practices, iterative and incremental design, starting at the right place, discovery, and learning.  When time permits, he writes at ObjectCraftworks.com.

Erik Dietrich, founder of DaedTech LLC, is a programmer, architect, development coach, writer, Pluralsight author, and technologist. You can read his writing and find out more about him at http://www.daedtech.com/ and you can follow him on Twitter @daedtech.

Anthony Sciamanna is a software developer from Philadelphia, PA who has worked in the industry for nearly 20 years. He specializes in leading and coaching development teams, improving development practices for cross-functional teams, Test-Driven Development (TDD), unit testing, pair programming, and other Agile / eXtreme Programming (XP) practices. He can be contacted via his website: anthonysciamanna.com

Tomasz Jaskula is a software craftsman and the founder and organizer of Paris user groups for F# and Domain-Driven Design.  He focuses on creating software that delivers true business value, aligns with the business’s strategic initiatives, and bears solutions with clearly identifiable competitive advantage.  He is currently working for a big French bank, building reactive applications in F# and C#.  In his free time, he runs a startup project applying machine learning with F# to the recruitment field, speaks at conferences and user groups, and writes blogs and articles for a French magazine for coders called “Programmez !”  You can visit his site at jaskula.fr

Continue reading The Better Code Book – Our MVPs of 2015

A Software Architect’s Best Friend

To this day, I have a persistent nightmare about my time in college.  It’s always pretty similar.  I wake up and I have a final exam later in the day, but I’m completely unprepared.  And I don’t mean that I’m unprepared in the sense that I didn’t re-read chapter 4 enough times.  I mean that I realize I haven’t attended a single lecture, done a single homework assignment, or really even fully understood what, exactly, the course covers.  For all intents and purposes, I’m walking in to take the final exam for a class I haven’t taken.  The dream creates a feeling that I am deeply and fundamentally unprepared for life.

I’ve never encountered anything since college that has stuck with me quite as profoundly; I don’t dream of being unprepared for board meetings or talks or anything like that.  Perhaps this is because my brain was still developing when I was 20 years old, or something.  I’m not a psychologist, so I don’t know.  But what I do know is that the closest I came to having this feeling was in the role of software architect, responsible for a code base with many contributors.

If you’re an architect, I’m sure you know the feeling.  You’ve noticed junior developers introducing global variables on a few occasions and explained to them the perils of this practice, but do you really know they’re not still doing it?  Sure, you check from time to time, but there’s only one of you and it’s not like you can single-handedly monitor every commit to the code base.  You do like to go on vacation at least every now and then, right?  Or maybe you’ve shown your team a design pattern that will prevent redundancy and standardize what had been a hodgepodge approach.  You’ve monitored the situation for a few commits and noticed that they’re not getting it quite right, but you can live with what they’re doing… as long as it doesn’t get any further off the rails.

You get the point.  After all, you’re living it.  You don’t need me to tell you that it’s stressful being responsible for the health of a code base that’s changing faster than you can track in any level of detail.  If you’re anything like me, you love the role, but sometimes you long for the days when you were making your presence felt by hacking things and cranking out code.  Well, what if I told you there were a way to get back there while hanging onto your architect card… at least, to a degree?

You and I both know it — you’re not going to be spending 8+ hours a day working on feature implementations, but that doesn’t mean that you’re forever consigned to UML and meetings and that your hacking days are completely over.  And I’m not just talking about side projects at night.  It’s entirely appropriate for you to be writing prototyping code and it’s also entirely appropriate for you to write little plugins and scripts around your build that examine your team’s code base.  In fact, I might argue that this latter pursuit should be part of your job.

What do I mean, exactly, by this?  What does it mean to automate “examining your team’s code base?”  You might be familiar with this in its simplest forms, such as a build that fails if test coverage is too low or if there are compiler warnings.  But this concept is extremely extensible.  You don’t need to settle for rudimentary information about your team’s code.

NDepend is a tool that provides you with a lot of information out of the box as well as hooks with which to integrate that information into your team’s build.  You can see architectural concerns at a high level, such as how much you depend on external libraries and how internally coupled your modules are.  You can track more granular concerns such as average method complexity or class cohesion.  And you can automatically generate reports about these things, and more, with each build.  All of that comes built in.

But where things get really fun and really interesting is when you go beyond what comes out of the box and start to customize for your own, specific situation.  NDepend comes with an incredibly powerful feature called CQLinq that lets you ask very custom, specific questions about your code.  You can write a query to see how many new global variables your team is introducing.  You can see if your code base is getting worse, in terms of coupling.  And, you can even spend an afternoon or two putting together a complex CQLinq query to see if your team is implementing the pattern that you prototyped for them.
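
As a taste of what such a query can look like, here is a sketch flagging mutable static fields, the closest thing C# has to new global variables.  The query form is CQLinq’s, but treat the exact property names as assumptions to verify against your NDepend version:

// Sketch of a CQLinq rule: flag mutable static fields, the closest
// thing C# has to global variables.
warnif count > 0
from f in JustMyCode.Fields
where f.IsStatic && !f.IsInitOnly
select new { f, f.ParentType }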

And not only can you see it — you can see it in style.  You can generate a custom report with clear, obvious visuals.  This sort of visualization isn’t just decoration.  It has the power to impact your team’s behavior in a meaningful way.  They can see when their checkins are making more things green and happy and when their checkins are making things red and angry.  They’ll modify their code accordingly and basically anonymously, since this feedback is automated.  They’ll like it because automated feedback won’t feel judgmental to them, and you’ll like it because you’ll know that they’re being funneled toward good architectural decisions even when you aren’t there.  Talk about peace of mind.

You’ll spend a few days getting to know NDepend, a few days tweaking the reports out of the box, and a few days hacking the CQLinq queries to guard your code base according to your standards.  And, from there on in, you’ll enjoy peace of mind and be freed to focus on other things that command your attention.  As an architect there are a thousand demands on your time.  Do yourself a favor and get rid of the ones that you can easily automate with the right tooling.

Toward Bug Free Software: Lines of Defense

Hurrah!!  Last week we released NDepend v6 RTM.  Once again, we relied on a two-month private beta-testing period and a one-month Release Candidate period to do our best to release a polished and stable product.

I’d like to talk about our lines of defense for fixing as many bugs as possible.  Except for the few pieces of software in the world that can afford mathematical demonstrations to prove they are bug-free (such as some avionics and medical software), all other software, including NDepend, relies on an empirical approach to chasing bugs and fixing them.  An empirical approach is an evidence-based approach that relies on direct observation and experimentation in the acquisition of new knowledge.  An empirical approach will never lead to a bug-free product, but it can help a lot in keeping the number of bugs low, and in making bugs happen in situations rare enough that they won’t have any impact on most users’ experience.

I could write many blog posts about each line of defense; after more than a decade of applying them, there is so much to say.  But I want to keep this post concise.

Production Crash Logs

Crashes are due to unhandled exceptions.  Unhandled exceptions are due to unexpected situations at runtime, which typically include:

  • null reference access,
  • division by zero,
  • disposed object accessed,
  • invalid cast,
  • wrong method call parameters.

A bug doesn’t necessarily lead to a crash, but a crash necessarily means that there is a bug.  Certainly the most important line of defense against bugs is to log all production crashes and relentlessly fix them all.  The .NET Framework offers several unhandled exception access points.
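
For a desktop application, wiring up the usual access points looks something like the sketch below.  The three events shown are standard .NET events, but the LogCrash body is a hypothetical placeholder, not NDepend’s actual logging code:

using System;
using System.Diagnostics;
using System.Threading.Tasks;
using System.Windows.Forms;

static class CrashLogInstaller
{
    public static void Install()
    {
        // Catches unhandled exceptions thrown on any thread.
        AppDomain.CurrentDomain.UnhandledException +=
            (s, e) => LogCrash(e.ExceptionObject as Exception);

        // Catches unhandled exceptions thrown on the WinForms UI thread.
        Application.ThreadException +=
            (s, e) => LogCrash(e.Exception);

        // Catches exceptions from faulted tasks that nobody awaited.
        TaskScheduler.UnobservedTaskException +=
            (s, e) => { LogCrash(e.Exception); e.SetObserved(); };
    }

    static void LogCrash(Exception ex)
    {
        if (ex == null) return;
        // Enrich each stack frame with its IL offset (see below).
        foreach (var frame in new StackTrace(ex, true).GetFrames() ?? new StackFrame[0])
            Console.WriteLine($"{frame.GetMethod()}  IL_{frame.GetILOffset():X4}");
        // ... then persist or upload the log (application-specific).
    }
}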

In some environments, like Visual Studio hosting, these access points don’t work, and you’ll have to write code to catch all exceptions in all possible handlers of your program.

Of course, some users are disconnected from the internet or behind a proxy, so you won’t get those production crashes.  Our statistics show that this concerns only 20% of users at most.  So being aware of 80% of all production crashes is certainly enough to have a good measure of what’s going wrong in production.  And because Windows and .NET are highly sophisticated technologies that are constantly evolving, you can expect plenty of issues that never occurred on your team’s machines!  For example, the NDepend v6 Release Candidate period showed us that users running NDepend v6 RC on a fresh Windows 8.1 or Windows 10 install experienced crashes because of a P/Invoked Win32 method that our code calls.  Oddly enough, this P/Invoked method behaves differently when .NET v3.5 is not installed on the machine!

The key to successful production crash logs is to get as much useful data as possible per log.  Consider, for example, a production crash we had a few weeks ago.  When the same issue leads to several crash logs, we can start doing data mining on them.  Do they all occur with the same stack trace?  With the same high-DPI resolution?  On the same Windows platform version?  Only on the same machine?  Notice also the stack frames enriched with the IL offset retrieved through the StackFrame.GetILOffset() method.  Many times in the past, this alone has led us to the root cause.


You’ll notice that we only log crashes; we don’t have other forms of runtime logs, like logging every major event that happens (button clicked, panel opened, analysis started…).  Our experience with logging events is that we ultimately logged either too many or too few of them.  In both cases, the information that could help fix a particular issue is lost or hard to find.  We found that having verbose crash logs was enough.  Sometimes we can ask a user a question, such as which action they performed just before the crash, but in the vast majority of real-world cases, this information is implicitly contained in the stack trace.  For the same reason, we use neither remote debugging nor Windows dump files.  In our context, custom and verbose production crash logs are enough.

Code Covered by Automatic Tests and Code Contracts

Not only are production crash logs an important line of defense, but they also demand just a few days of dedicated work to set up.

Having automatic tests is the second line of defense.  Contrary to production crash logs, not only does this require a lot of work, but it even changes (forever) the way you write code.  After a decade of writing automatic tests, a lot of conclusions can be drawn.  In my attempt to remain concise in this post, let’s try to summarize the most relevant ones in a few points:

  • The number of tests is absolutely meaningless.  When it comes to unit/automatic tests, the king measure is the percentage of code covered by tests.
  • A high percentage of code covered by tests is not enough; everything that can be checked must be checked.  In almost all literature related to unit testing, you’ll read that checks are assertions in unit test code.  Few actually realize that assertions in the code itself are at least as important as assertions in unit-test code.  There is a scientific terminology for assertions in code: code contracts.  The important thing about code contracts is that they must fail both at manual-test time (see later) and at automatic-test time.
  • In NDepend, we use more than 20K simple Debug.Assert() calls in code (see the sketch just after this list).  These are our contracts.  Debug.Assert() calls are removed from production code.  This is OK for us, since we want maximum performance, and having sophisticated assertions evaluated at runtime can significantly slow down an application.  Hence we decided to sacrifice an important line of defense in the name of performance.  By using MS Code Contracts, which can let assertions fail in production, we could increase the number and the accuracy of production crash logs.  This is a choice you must make depending on your application.  Note that NDepend.API actually supports MS Code Contracts for their great ability to provide active documentation to users.
  • How much coverage is enough? My answer is 100%.
    • Typically, developers don’t want to lose time writing tests to cover, say, property getters and setters.  My point is twofold: you can typically write higher-level tests that will cover these anyway, and if these getters and setters contain assertions, that is even better.
    • Typically, developers claim that 10% of a class is difficult to test and that it takes as much effort to test this 10% as it does to test the remaining 90%.  Once again, they don’t want to lose time!  My point is that this 10% of code is by definition not easily testable and, as a consequence, is both complex and poorly designed, and hence certainly highly bug-prone.  So basically, the most bug-prone portion of the code ends up not being covered by automatic tests!!  This is nonsense, but it is the reality in most dev shops.
    • Typically, developers say that not everything is coverable by tests, and I agree.  Code calling blocking methods like MessageBox.Show() is just not coverable; this is why such calls must be mocked.  Some other UI code can be especially tricky to test.  Our approach is to design our UIs in a way that lets the underlying code be triggered by unit tests (some would say automated by tests), and then to rely mostly on assertions in the UI code itself to catch any potential regression.  Of course, when possible, assertions in tests are welcome, and of course such UI code is highly decoupled from the non-UI logic, which has its own set of tests.  Doing so has proven to work for our dev shop.
    • I’ll add that when a class or a group of classes is 100% covered by tests, experience shows that the innocent fact of a coverage hole suddenly appearing often means there is a new problem, either in the code, in the test code, or in both.  More often than not, we have discovered regression bugs this way that were not caught by assertions.  This is why we use tooling (namely NDepend) to check that all code that used to be 100% covered remains 100% covered.
    • Last but not least, when a bug is fixed, if the fixed code portion is already covered by tests, it is easy to write assertions specific to the fix to avoid any regression in the future. And when most classes are 100% covered, more often than not it is a matter of minutes, or even seconds, to write such assertions.
  • If your application is successful enough, the code base will grow over the years.  Ultimately, the biggest benefit you can expect from writing coverage-oriented automatic tests is that the number of regression issues will remain under control, because it won’t be proportional to the growing size of the code base.  Keep in mind that only code covered by tests whose results are asserted somehow is protected by this line of defense.
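
As promised above, here is a minimal sketch of what these assertion-style contracts look like in ordinary code (the method and its invariants are invented for illustration):

using System.Diagnostics;

public static class PriceCalculator
{
    public static decimal ApplyDiscount(decimal price, int percent)
    {
        // Preconditions: checked in debug builds, so they fail during both
        // manual test sessions and automatic test runs.
        Debug.Assert(price >= 0, "price must not be negative");
        Debug.Assert(percent >= 0 && percent <= 100, "percent must be in [0, 100]");

        var discounted = price - price * percent / 100m;

        // Postcondition: a discount never increases the price.
        Debug.Assert(discounted <= price, "discount must not increase the price");
        return discounted;
    }
}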

Let’s illustrate this section with the NDepend code base itself: 82.6% of it is covered by tests, as visualized with the NDepend metric view.  We abide by our own rules.


Static Analysis and Code Review

I see static analysis as a form of unit testing, except that instead of exercising the code dynamically, static analysis exercises properties that can be inferred from the code.  In the previous section, for example, I wrote that if a class is 100% covered by tests, it must remain 100% covered.  And I even underlined that if a hole suddenly pops up in this perfect coverage, more often than not, understanding the root cause of the hole will lead to a bug fix.  This illustrates how static analysis is actually a line of defense against bugs.

In the previous analogy between static analysis and unit tests, a test is actually a code rule.  NDepend makes it easy to write custom code rules; it is just a matter of writing a C# LINQ query based on a fluent API.
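
Here is a sketch of one such rule; it flags methods changed since the baseline that are no longer fully covered.  CodeWasChanged() and PercentageCoverage are my recollection of CQLinq’s vocabulary, so verify them against your NDepend version:

// Sketch of a CQLinq rule: code changed since the baseline
// should stay 100% covered by tests.
warnif count > 0
from m in JustMyCode.Methods
where m.CodeWasChanged() && m.PercentageCoverage < 100
select new { m, m.PercentageCoverage }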

Code rules involving code coverage and diffs after setting a baseline are especially suited to hunting regression bugs.  But static analysis can handle many other properties of the code, and it is not only related to bugs but also to code maintainability.

  • Code metrics: for example, methods with too many loops, ifs, elses, switches, and cases end up being incomprehensible, hence unmaintainable.  Counting these through the Cyclomatic Complexity code metric is a great way to assess when a method becomes too complex.
  • Dependencies: if the classes of your program are entangled, the effects of any change in the code become unpredictable.  Static analysis can help assess when classes and components are entangled.
  • Immutability: types that are used concurrently by several threads should be immutable; otherwise, you’ll have to protect state read/write access with complex locking strategies that will end up being unmaintainable.  Static analysis can make sure that certain classes remain immutable.
  • Dead code: dead code is code that can be removed safely because it is no longer invoked at runtime.  Not only can it be removed, it must be removed, because this extra code adds unnecessary complexity to the program.  Static analysis can find most of the dead code in your program (though not all of it).
  • API breaking changes: if you expose an API to your clients, it is very easy to remove a public member without noticing and thus break your clients’ code.  Static analysis can compare two states of a program and warn about this pitfall.
  • API usage: some APIs are intended to be used carefully.  For example, a class that holds disposable fields must generally itself be disposable, except when the disposable fields’ lifetime is not aligned with the lifetime of the class’s instances, which then sounds like a design problem.

The list of code properties that can be exercised by static analysis is endless.  The ones quoted above refer to NDepend’s capabilities; other tools, like ReSharper or CodeRush, offer other sorts of static analysis to warn about small potential issues, such as a foreach variable accessed from a closure, which can lead to major problems.

Static analysis is not only about directly finding bugs, but also about finding bug-prone situations that can decrease code understanding and maintainability.

Concerning code review, I don’t have much to say.  It is static analysis, except that the logic is checked by humans instead of by automatic rules.  Thus it is highly imperfect and time-consuming, yet we still practice it because experience shows that it helps find issues that can hardly be found otherwise.  The key to code review is to do it on bug-prone code, which includes refactored code and new code that hasn’t reached production yet.

Manual Tests and Beta Testing

No matter how good a team is at the previously explained lines of defense, if it fails at manual tests and beta testing, the end product will be buggy and ultimately unusable.

Because not all bugs lead to obvious crashes, tests done by humans are essential. For example, only a potential user can notice an incoherent numerical result in a UI.

Manual testing is like code review: highly imperfect and time-consuming, and a team cannot capitalize on it.  Yet experience shows that it helps find issues that can hardly be found otherwise.

We mentioned code contracts previously; they work hand in hand with manual tests.  When, during a manual test session, I have the chance to break an assertion, it actually makes me happy 🙂 because I know this is a great opportunity to fix a bug before it reaches the next release.

Manual testing actually includes user feedback.  Users are paying for a product, and one main goal is to offer them a bug-free product.  Nevertheless, users are also de facto testers, and listening carefully to users’ bug reports and relentlessly struggling to fix them is an essential line of defense.  Of course, this does not only apply to bugs, but also to feature improvements, new feature suggestions, documentation gaps, and much more; but those are other topics.