NDepend

Improve your .NET code quality with NDepend

What DevOps Means for Static Analysis

For most of my career, software development has, in a very specific way, resembled mailing a letter.  You write the thing, and then you go through the standard mail piece rigmarole.  This involves putting it into an envelope, addressing the envelope, putting a stamp on it, and then walking it over to the mailbox.  From there, you stuff it in.

At this point, you might as well have dropped the thing into some kind of rip in space-time for all you understand what comes next.  Off it goes into the ether, and you hope that it arrives at its destination through some kind of logistical magic.  So it has generally gone with software.
Continue reading What DevOps Means for Static Analysis

Adding Static Analysis to Your Team’s DNA

Stop me if this sounds familiar.  (Well, not literally.  I realize that asynchronous publication makes it hard for you to actually stop me as I type.  Indulge me the figure of speech.)  You work on a codebase for a long time, all the while having the foreboding sense of growing messiness.  One day, perhaps when you have a bit of extra time, you download a static analyzer to tell you “how bad.”

Then you have an experience like a holiday-time binge eater getting on a scale on January 1st.  As the tool crunches its results, you wince in anticipation.  Next, you get the results, get depressed, and then get busy correcting them.  Unlike shedding those holiday pounds, you can often fix the most egregious errors in your codebase in a matter of days.  So you make those fixes, pat yourself on the back, and forget all about the static analyzer, perhaps letting your trial expire or leaving it to sit on the shelf.

If you’re wondering how I got in your head, consider that I see this pattern in client shops frequently.  They regard static analysis as a one-time cleanup effort, to be implemented as a small project every now and then.  Then, they resolve to carry the learning forward to avoid making similar mistakes.  But, in a vacuum, they rarely do.
Continue reading Adding Static Analysis to Your Team’s DNA

Detecting Performance Bottlenecks with NDepend

In the past, I’ve talked about the nature of static code analysis.  Specifically, static analysis involves analyzing programs’ source code without actually executing them.  Contrast this with runtime analysis, which offers observations of runtime behavior via introspection or other means.  This creates an interesting dynamic around the idea of detecting performance bottlenecks with static analysis, because performance is inherently a runtime concern.  Static analysis tends to do its best, most direct work with source code considerations.  It requires a more indirect route to predict runtime issues.

For example, consider something simple.
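
A minimal sketch of the sort of method I have in mind might look like the following (the Widget and IService names are purely illustrative; only the theService field comes from the discussion below):

    public interface IService
    {
        void Reload();
    }

    public class Widget
    {
        private readonly IService theService;

        public Widget(IService service)
        {
            theService = service;
        }

        public void Refresh()
        {
            // A static analyzer can flag this line: theService is dereferenced
            // with no null check.  Whether it ever actually throws at runtime
            // depends on how callers construct Widget, which the analyzer
            // cannot see from this method alone.
            theService.Reload();
        }
    }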

With a static analyzer, we can easily look at this method and say, “you’re dereferencing ‘theService’ without a null check.”  However, it gets a lot harder to talk definitively about runtime behavior.  Will this method ever generate an exception?  We can’t know that with only the information present.  Maybe the only call to this in the entire codebase happens right after instantiating a service.  Maybe no one ever calls it.

Today, I’d like to talk about using NDepend to sniff out possible performance issues.  But my use of “possible” carries significant weight, because “definitive” gets difficult.  You can use NDepend to inform reasoning about your code’s performance, but you should do so with an eye to probabilities.

That said, how can you use NDepend to identify possible performance woes in your code?  Let’s take a look at some ideas.

Continue reading Detecting Performance Bottlenecks with NDepend


How to Scale Your Static Analysis Tooling

If you wander the halls of a large company with a large software development organization, you will find plenty of examples of practice and process at scale.  When you see this sort of thing, it has generally come about in one of two ways.  First, the company piloted a new practice with a team or two and then scaled it from there.  Or, second, the development organization started the practice when it was small and grew it as the department grew.

But what about “rolled it out all at once?”  Nah, (mercifully) not so much.  “Let’s take this thing we’ve never tried before, deploy it in an expensive rollout, and assume all will go well.”  Does that sound like the kind of plan executives with career concerns sign off on?  Would you sign off on it?  Even the pointiest-haired of managers would feel gun-shy.

When it comes to scaling a static analysis practice, you will find no exception.  Invariably, organizations grow the practice as they grow, or they pilot it and then scale it up.  And that begs the question of, “how?” when it comes to scaling static analysis.

Two main areas of concern come to mind: technical and human.  You probably think I’ll spend most of the post talking technical, don’t you?  Nope.  First of all, too many tools, setups, and variations exist for me to do more than scratch the surface.  But secondly, and more importantly, a key person that I’ll mention below will take the lead for you on this.

Instead, I’ll focus on the human element.  Or, more specifically, I will focus on the process for scaling your static analysis — a process involving humans.

Continue reading How to Scale Your Static Analysis Tooling


How to Prioritize Bugs on Your To-Do List

People frequently ask me questions about code quality.  People also frequently ask me questions about efficiency and productivity.  But it seems we rarely wind up talking about the two together.  How can you most efficiently improve quality via the fixing of bugs?  Or, more specifically, how should you prioritize bugs?

Let me be clear about something up front.  I’m not going to offer you some kind of grand unified scheme of bug prioritization.  If I tried, the attempt would come off as utterly quixotic.  Because software shops, roles, and offerings vary so widely, I cannot address every possible situation.

Instead, I will offer a few different philosophies of prioritization, leaving the execution mechanics up to you.  These should cover most common scenarios that software developers and project managers will encounter.
Continue reading How to Prioritize Bugs on Your To-Do List


The Relationship between Static Analysis and Continuous Testing

As an adult, I have learned that I have an introvert-type personality.  I do alright socially, don’t mind public speaking, and do not (I don’t think) present as an awkward person.  So, learning about this characterization surprised me somewhat, but only until I fully understood.

I won’t delve into the finer points of human psychology here, but suffice it to say that introverts prefer to process and grok questions before responding.  This describes me to a tee.  However, working as a consultant and giving frequent advice clashes with this and has forced me to develop somewhat of a knack for answering extemporaneously.  Still, you might ask me just the right question to cause me to cock my head, blink at you, and frown.

I received just such a question the other day.  The question, more or less, was, “if we have continuous testing, do we really need static analysis?”  And, just like that, I was stumped.  This didn’t square, and I wanted time to think on that.  Luckily, I’ve had a bit of time.  (This is why I love blogging.)
Continue reading The Relationship between Static Analysis and Continuous Testing


Easy to Miss Code Smells

The concept of a code smell is, perhaps, one of the most evocative in our profession.  The name itself has a levity factor to it, conjuring a mental image of one’s coworkers writing code so bad that it actually emits a foul odor.  But the metaphor has a certain utility as well in the “where there’s smoke, there may be fire” sense.

In case you’re not familiar, a code smell is an observable feature of the code (the smoke) that often belies a deeper existing problem (the fire).  When you say that a code smell exists, what you’re communicating is “you may be justified here, but I’m skeptical – in my experience this is probably a design flaw.”

Of course, accusing code of having a smell is only slightly less incendiary to the author than accusing code of being flat out bad.  Them’s fightin’ words, as they say.  But, for all the arguments and all of the righteous indignation that code smell accusations have generated over the years, their usefulness is undeniable.

No doubt you’ve heard of some of the most common and easiest to visualize code smells.  The God Class, Primitive Obsession, and Inappropriate Intimacy all come to mind.  These indicate, respectively, a class in your code base doing way too much, a tendency to use primitive types when you should take advantage of classes, and a module or class that breaks encapsulation by knowing too many details about another.  The combination of their visual memorability and their wisdom has prodded us over the years to break things down, to create cohesive objects, and to preserve encapsulation.
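
To make one of these concrete, here is a small, hedged sketch of Primitive Obsession (the NewsletterSignup and EmailAddress names are mine, for illustration only): an email address travels as a raw string, so every caller must remember the validation rules, whereas a small dedicated type owns that invariant and expresses intent.

    using System;

    // Primitive Obsession: the domain concept "email address" hides inside a bare string,
    // so validation gets duplicated (or forgotten) wherever that string shows up.
    public class NewsletterSignup
    {
        public void Subscribe(string email)
        {
            // every caller and implementation must re-check the string here...
        }
    }

    // One remedy: a small type that enforces its own invariant once.
    public sealed class EmailAddress
    {
        public string Value { get; }

        public EmailAddress(string value)
        {
            if (string.IsNullOrWhiteSpace(value) || !value.Contains("@"))
                throw new ArgumentException("Not a valid email address.", nameof(value));
            Value = value;
        }
    }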

I would argue, however, that there are many more code smells out there than the big, iconic ones that get a lot of attention.  I’d like today to discuss a few that I don’t think are as commonly known.  I’ll make the case for why, once you’ve mastered avoiding the well-known ones, you should watch for these as well.

Continue reading Easy to Miss Code Smells

5 Habits that Help Code Quality

When I’m called in to do a strategic assessment of a codebase, it’s never the result of everything being awesome.  That is, no one calls me up and says, “we’re ahead of schedule, under budget, and knocking it out of the park, so can you come in and tell us what you think of our code?”  Rather, I get calls when something isn’t going according to plan and the business people involved want to get some insight into what underlying causes there are in the code and in the team’s approach.

When the business gets involved this way, there is invariably a fiscal operational concern, either overtly or lurking just beneath the surface.  I’ll roll this up to the general consideration of “total cost of ownership” for the codebase.  The business is thus asking, “why are things proving to be more expensive than we thought?”

Typically, I come in and size up the situation, quantify it objectively, and then use analogies and examples to make clear what’s happening.  After I do this, pretty much without exception, the decision-makers to whom I’m speaking want to know what small things they can do, internally, to course correct.  This makes sense when you think about it.  If your doctor told you that your health outlook wasn’t great, you’d cross your fingers and say, “but I can fix it by changing my diet and exercise a little, right?”  You wouldn’t throw yourself on the table and say, “cut me open and make sure whatever you do is expensive!”

I am thus frequently asked, by both developers and management, “what are the little things we can do to improve and maintain code quality?”  As such, this seems like excellent fodder for a blog post.  Here are my tips, based on years of observation of what correlates with healthy codebases and what correlates with distressed ones.

Continue reading 5 Habits that Help Code Quality

The Better Code Book – Our MVPs of 2015

We firmly believe spaghetti belongs on the dinner table and not in code. Our mission when starting NDepend was to create a tool to make best coding practices easier to maintain and improve. Writing has always been part of our message (see Patrick Smacchia’s work on CodeBetter.com) and we are proud to present our favorite pieces of writing from around the web in the last year, collected in what we are calling the Better Code Book.

We wanted to focus not only on how people use NDepend to improve their code for developers and architects, but also on how to use static analysis in a broader, management sense. We are extremely grateful to our contributors on this project. Let us introduce them:

Bjørn Einar Bjartnes is a developer at the Norwegian Broadcasting Corporation. His current role is backend developer on the API team, serving web, mobile, TV clients, and more with metadata about programs and video streams. He holds an MSc in Engineering Cybernetics and has a background in the petroleum industry, which has probably shaped his view on systems design. Also, Bjørn is active in the local F# Meetup and a proud member of the lambda club, playing with all things useless related to computers. You can also follow him on Twitter: @bjartnes

Jack Robinson is a twenty-something student in his final year of a degree in Software Engineering at Victoria University of Wellington. Currently an Intern Developer at Xero, he enjoys writing clean code, playing a board game or two with his friends, or just sitting down and watching a good film. You can read not just his musings on computer science, but also reviews on films and more at his website jackrobinson.co.nz

Prasad Narravula is a programmer, architect, consultant, and problem-solving leader.  He helps teams with agile development essentials: feedback loops to fail fast, enabling (engineering) practices, iterative and incremental design, starting at the right place, discovery, and learning. When time permits, he writes at ObjectCraftworks.com.

Erik Dietrich, founder of DaedTech LLC, is a programmer, architect, development coach, writer, Pluralsight author, and technologist. You can read his writing and find out more about him at http://www.daedtech.com/ and you can follow him on Twitter @daedtech.

Anthony Sciamanna is a software developer from Philadelphia, PA who has worked in the industry for nearly 20 years. He specializes in leading and coaching development teams, improving development practices for cross-functional teams, Test-Driven Development (TDD), unit testing, pair programming, and other Agile / eXtreme Programming (XP) practices. He can be contacted via his website: anthonysciamanna.com

Tomasz Jaskula is a software craftsman, founder and organizer of Paris user groups for F# and Domain Driven Design. He focuses on creating software that delivers true business value, aligns with the business’s strategic initiatives, and offers clearly identifiable competitive advantage. He is currently working for a big French bank building reactive applications in F# and C#. In his free time, he runs a startup project on applying machine learning with F# to the recruitment field, speaks at conferences and user groups, and writes blogs and articles for a French magazine for coders called “Programmez !” You can visit his site jaskula.fr

Continue reading The Better Code Book – Our MVPs of 2015

A Software Architect’s Best Friend

To this day, I have a persistent nightmare about my time in college.  It’s always pretty similar.  I wake up and I have a final exam later in the day, but I’m completely unprepared.  And I don’t mean that I’m unprepared in the sense that I didn’t re-read chapter 4 enough times.  I mean that I realize I haven’t attended a single lecture, done a single homework assignment, or really even fully understood what, exactly, the course covers.  For all intents and purposes, I’m walking in to take the final exam for a class I haven’t taken.  The dream creates a feeling that I am deeply and fundamentally unprepared for life.

I’ve never encountered anything since college that has stuck with me quite as profoundly; I don’t dream of being unprepared for board meetings or talks or anything like that.  Perhaps this is because my brain was still developing when I was 20 years old, or something.  I’m not a psychologist, so I don’t know.  But what I do know is that the closest I came to having this feeling was in the role of software architect, responsible for a code base with many contributors.

If you’re an architect, I’m sure you know the feeling.  You’ve noticed junior developers introducing global variables on a few occasions and explained to them the perils of this practice, but do you really know they’re not still doing it?  Sure, you check from time to time, but there’s only one of you and it’s not like you can single-handedly monitor every commit to the code base.  You do like to go on vacation at least every now and then, right?  Or maybe you’ve shown your team a design pattern that will prevent redundancy and standardize what had been a hodgepodge approach.  You’ve monitored the situation for a few commits and noticed that they’re not getting it quite right, but you can live with what they’re doing… as long as it doesn’t get any further off the rails.

You get the point.  After all, you’re living it.  You don’t need me to tell you that it’s stressful being responsible for the health of a code base that’s changing faster than you can track in any level of detail.  If you’re anything like me, you love the role, but sometimes you long for the days when you were making your presence felt by hacking things and cranking out code.  Well, what if I told you there were a way to get back there while hanging onto your architect card… at least, to a degree?

You and I both know it — you’re not going to be spending 8+ hours a day working on feature implementations, but that doesn’t mean that you’re forever consigned to UML and meetings and that your hacking days are completely over.  And I’m not just talking about side projects at night.  It’s entirely appropriate for you to be writing prototyping code and it’s also entirely appropriate for you to write little plugins and scripts around your build that examine your team’s code base.  In fact, I might argue that this latter pursuit should be part of your job.

What do I mean, exactly, by this?  What does it mean to automate “examining your team’s code base?”  You might be familiar with this in its simplest forms, such as a build that fails if test coverage is too low or if there are compiler warnings.  But this concept is extremely extensible.  You don’t need to settle for rudimentary information about your team’s code.

NDepend is a tool that provides you with a lot of information out of the box as well as hooks with which to integrate that information into your team’s build.  You can see architectural concerns at a high level, such as how much you depend on external libraries and how internally coupled your modules are.  You can track more granular concerns such as average method complexity or class cohesion.  And you can automatically generate reports about these things, and more, with each build.  All of that comes built in.

But where things get really fun and really interesting is when you go beyond what comes out of the box and start to customize for your own, specific situation.  NDepend comes with an incredibly powerful feature called CQLinq that lets you ask very custom, specific questions about your code.  You can write a query to see how many new global variables your team is introducing.  You can see if your code base is getting worse, in terms of coupling.  And, you can even spend an afternoon or two putting together a complex CQLinq query to see if your team is implementing the pattern that you prototyped for them.
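
As a rough sketch of what such a query can look like, here is a CQLinq rule in the spirit of the “new global variables” check; it flags mutable static fields, a common stand-in for global state. Treat the specific field properties used here (IsStatic, IsInitOnly, IsLiteral) as assumptions to verify against your NDepend version.

    // Sketch only: flag mutable static fields as "global variable" candidates.
    warnif count > 0
    from f in Application.Fields
    where f.IsStatic
       && !f.IsInitOnly   // assumption: readonly statics get a pass
       && !f.IsLiteral    // assumption: consts get a pass
    select f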

And not only can you see it — you can see it in style.  You can generate a custom report with clear, obvious visuals.  This sort of visualization isn’t just decoration.  It has the power to impact your team’s behavior in a meaningful way.  They can see when their checkins are making more things green and happy and when their checkins are making things red and angry.  They’ll modify their code accordingly and basically anonymously, since this feedback is automated.  They’ll like it because automated feedback won’t feel judgmental to them, and you’ll like it because you’ll know that they’re being funneled toward good architectural decisions even when you aren’t there.  Talk about peace of mind.

You’ll spend a few days getting to know NDepend, a few days tweaking the reports out of the box, and a few days hacking the CQLinq queries to guard your code base according to your standards.  And, from there on in, you’ll enjoy peace of mind and be freed to focus on other things that command your attention.  As an architect, you face a thousand demands on your time.  Do yourself a favor and get rid of the ones that you can easily automate with the right tooling.