NDepend

Improve your .NET code quality with NDepend


Considering a Port to .NET Core? Use NDepend

An American colloquialism holds, “only two things are certain: death and taxes.”  If I had to appropriate that for the software industry, I might say that the two certainties are death and legacy code.  Inevitably, you have code that has been around for a while, and you want to do new things with it.

Software architects typically find themselves tasked with such considerations.  Oh, sure, sometimes they get to pick techs and frameworks for greenfield development.  Sometimes they get to draw fancy diagrams and lay out plans.  But frequently, life charges them with the more mundane task of “figuring out how to make that creaky old application run on an iPhone.”  Okay, maybe it’s not quite that silly, but you get the idea.

If you earn a living as an architect in the .NET world, you have, no doubt, contemplated the impact of .NET Core on your application portfolio.  Even if you have no active plans to migrate, this evolution of .NET should inform your strategic decisions going forward.  But if you stand to benefit from deploying the framework alongside your application, or if you want to run on operating systems other than Windows, you’re going to need to port that legacy code.

I am, by no means, an expert in .NET Core.  Instead, my areas of specialty lie in code analysis, developer training, and IT management and strategy consulting.  I help dev teams create solutions economically.  And because of this, I can recognize the value NDepend brings to such a port, even from what I do know about .NET Core.

Continue reading Considering a Port to .NET Core? Use NDepend


Plugging Leaky Abstractions

In 2002, Joel Spolsky coined something he called “The Law of Leaky Abstractions.”  In software, an “abstraction” hides the complexity of an underlying system from those using the abstraction.  Examples abound, but for a quick understanding, think of an ORM hiding from you the details of database interaction.

The Law of Leaky Abstractions states that “all non-trivial abstractions, to some degree, are leaky.”  “Leaky” indicates that the abstraction fails to adequately hide its internal details.  Imagine, for instance, that while modifying the objects generated by your ORM, you suddenly needed to manage the particulars of some SQL query.  The abstraction leaked, forcing you to understand the details that it was supposed to hide.
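To make the leak concrete, here is a minimal sketch in C#.  The ICustomerRepository, Customer, and in-memory implementation are invented purely for illustration; the point is that a predicate calling an ordinary C# method works fine in memory, but a real ORM provider would have to translate that expression to SQL, fail, and force the caller to think about the query underneath.

    using System;
    using System.Linq;

    // The abstraction: callers are not supposed to care that a database sits underneath.
    public interface ICustomerRepository
    {
        IQueryable<Customer> Customers { get; }
    }

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; } = "";
    }

    public class InMemoryCustomerRepository : ICustomerRepository
    {
        public IQueryable<Customer> Customers { get; } =
            new[] { new Customer { Id = 1, Name = "Acme" } }.AsQueryable();
    }

    public static class Demo
    {
        // A local helper that no SQL translator knows anything about.
        private static bool LooksImportant(Customer c) => c.Name.StartsWith("A");

        public static void Main()
        {
            ICustomerRepository repo = new InMemoryCustomerRepository();

            // Runs happily against the in-memory fake...
            var important = repo.Customers.Where(c => LooksImportant(c)).ToList();
            Console.WriteLine($"Found {important.Count} important customers.");

            // ...but swap in an ORM-backed repository and the provider cannot
            // translate LooksImportant into SQL, so it throws at runtime.  The
            // abstraction just leaked: you now have to know what can and cannot
            // become a SQL query.
        }
    }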

Spolsky’s point may inspire a fatalistic feeling.  After all, if these abstractions are doomed to leak, why bother with them in the first place?  But I like to consider it a caution against chasing perfection rather than a lament.

Abstractions in software help us the same way figurative language helps our prose.  Metaphors and analogies offer ease of understanding, but at the accepted price of lost precision.  If you press a metaphor enough, it will inevitably break down.  But that doesn’t render metaphors useless — far from it.

Thus, if you have a leaky abstraction, you can take steps to “plug” it, so to speak.  Spolsky says it himself, right in the law he coined: “all non-trivial abstractions, to some degree, are leaky.”  We have the ability to lessen that degree.

Continue reading Plugging Leaky Abstractions


Measure Your Code to Get Back on Track

When I’m called in for strategy advice on a codebase, I never arrive to find a situation where all parties want to tell me how wonderfully things are going.  As I’ve mentioned before here, one of the main things I offer with my consulting practice is codebase assessments and subsequent strategic recommendations.

Companies pay for such a service when they have problems, and those problems drive questions.  “Should we scrap this code and start over, or can we factor toward a better state?”  “Can we move away from framework X, or are we hopelessly tied to it?”  “How can we evolve without doing a forklift upgrade?”

To answer these questions, I assess their code (often using NDepend as the centerpiece for querying the codebase) and synthesize the resultant statistics and data.  I then present a write-up with my answer to their questions.  This also generally includes a buffet of options/tactics to help them toward their goals.  And invariably, I (prominently) offer the option: “instrument your code/build with static analysis to raise the bar and prevent backslides.”
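To give a flavor of what that instrumentation can look like, here is a minimal CQLinq rule sketch of the sort you might add to a project and check with every build.  The thresholds are arbitrary, illustrative numbers, not recommendations.

    // <Name>Sketch: flag methods that are growing too large or too complex</Name>
    warnif count > 0
    from m in JustMyCode.Methods
    where m.CyclomaticComplexity > 15 || m.NbLinesOfCode > 50
    orderby m.CyclomaticComplexity descending
    select new { m, m.CyclomaticComplexity, m.NbLinesOfCode }

Once something like this runs automatically, the “bar” stops depending on memory and good intentions.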

I find it surprising and a bit dismaying how frequently clients want to gloss over this option in favor of others.

Continue reading Measure Your Code to Get Back on Track


Managing Code Analysis Statistics with the NDepend API

If you’re familiar with NDepend, you’re probably familiar with the Visual Studio plugin, the out-of-the-box metrics, the excellent visualization tools, and the iconic Zone of Uselessness/Zone of Pain chart.  These feel familiar to NDepend users and have likely found their way into the normal application development process.  NDepend has other features as well, however, some of which I do not necessarily hear discussed as frequently.  The NDepend API has membership in that “lesser-known NDepend features club.”  Yes, that’s right — if you didn’t know this, NDepend has an API that you can use.

You may be familiar, as a user, with the NDepend power tools.  These include some pretty powerful capabilities, such as duplicate code detection, so it stands to reason that you may have played with them or even that you might routinely use them.  But what you may not realize is that the power tools’ source code accompanies the installation of NDepend, and it furnishes a great series of examples of how to use the NDepend API.

NDepend’s API is contained in the DLLs that support the executable and plugin, so you needn’t do anything special to obtain it.  The NDepend website also treats the API as a first-class citizen, providing detailed, excellent documentation.  With your NDepend installation, you can get up and running quickly with the API.

Probably the easiest way to introduce yourself to the API is to open the source code for the power tools project and to add a power tool, or generally to modify that assembly.  If you want to create your own assembly to use the power tools, you can do that as well, though it is a bit more involved.  The purpose of this post is not to do a walk-through of setting up with the power tools, since that can be found here.  I will mention two things, however, that are worth bearing in mind as you get started.

  1. If you want to use the API outside of the installed project directory, there is additional setup overhead.  Because it leverages proprietary parts of NDepend under the covers, setup is more involved than just adding a DLL by reference.
  2. Because of point (1), if you want to create your own assembly outside of the NDepend project structure, be sure to follow the setup instructions exactly.

A Use Case

I’ve spoken so far in generalities about the API.  If you haven’t already used it, you might be wondering what kinds of applications it has, besides simply being interesting to play with.  Fair enough.

One interesting use case that I’ve experienced personally is getting information out of NDepend in a customized format.  For example, let’s say I’m analyzing a client’s codebase and want to cite statistical information about types and methods in the code.  Out of the box, what I do is open Visual Studio and then open NDepend’s query/rules editor.  This gives me the ability to create ad-hoc CQLinq queries that will have the information I need.
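For example, a quick ad-hoc query along these lines (the threshold here is just a placeholder) gives me the largest methods along with some companion metrics:

    from m in Application.Methods
    where m.NbLinesOfCode > 20
    orderby m.NbLinesOfCode descending
    select new { m, m.NbLinesOfCode, m.CyclomaticComplexity, m.NbParameters }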

But from there, I have to transcribe the results into a format that I want, such as a spreadsheet.  That’s fine for small projects or sample sizes, but it becomes unwieldy if I want to plot statistics in large codebases.  To address this, I have enlisted the NDepend API.
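The specifics of wiring up the API are covered in the documentation mentioned above, but the general shape of what I do looks something like the following sketch.  RunSizeQuery is a purely hypothetical stand-in for the NDepend-API-specific part (loading a project, running a CQLinq query, projecting the results); only the CSV plumbing is shown concretely.

    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    // One row of query output, as a plain object.
    public record MethodStats(string MethodName, int LinesOfCode, int CyclomaticComplexity);

    public static class StatsExporter
    {
        // Hypothetical stand-in for the call that executes a CQLinq query via the
        // NDepend API and projects each result into a MethodStats row.
        public static IEnumerable<MethodStats> RunSizeQuery() =>
            new[] { new MethodStats("Acme.Billing.Invoice.ComputeTotal()", 42, 11) };

        // Write the rows out as CSV so a spreadsheet can chart them directly.
        public static void WriteCsv(string path)
        {
            var lines = new[] { "Method,LinesOfCode,CyclomaticComplexity" }
                .Concat(RunSizeQuery()
                    .Select(s => $"{s.MethodName},{s.LinesOfCode},{s.CyclomaticComplexity}"));
            File.WriteAllLines(path, lines);
        }
    }

From there, producing plots for a write-up is just spreadsheet work.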

Continue reading Managing Code Analysis Statistics with the NDepend API


How to Deliver Software Projects on Time

Someone asked me recently, almost in passing, about the keys to delivering software projects on time.  In this particular instance, it was actually a question of how to deliver .NET projects on time, but I see nothing particularly unique to any one tech stack or ecosystem.  In any case, the question piqued my interest, since I’m more frequently called in as a consultant to address issues of quality and capability than slipped deadlines.

To understand how to deliver projects on time (or, conversely, the mechanics of failing to deliver on time) requires a quick bit of term deconstruction.  The concept of “on time” combines two concerns from software parlance: scope and delivery date.  Specifically, for something to be “on time,” there has to be an expectation of what will be delivered and when it will be delivered.

How We Get This Wrong

Given that timeliness of delivery is such an apparently simple concept, we sure do find a lot of ways to get it wrong.  I’m sure that no one reading has to think long and hard to recall a software project that failed to deliver on time.  Slipped deadlines abound in our line of work.

The so-called “waterfall” approach to software delivery has increasingly fallen out of favor of late.  This is a methodology that attempts simultaneously to solve all unknowns through extensive up-front planning and estimation.  “The project will be delivered in exactly 19 months, for 9.4 million dollars, with all of the scope outlined in the requirements documents, and with a minimum standard of quality set forth in the contract.”  This approach runs afoul of a concept sometimes called “the iron triangle of software development,” which holds that the more you fix one concern (scope, cost, delivery date), the more the others will wind up varying — kind of a Heisenberg Uncertainty Principle of software.  The waterfall approach of just planning harder and harder until you get all of them right thus becomes something of a fool’s errand.

Let’s consider the concept of “on time,” then, in a vacuum.  It features only two concerns: scope and delivery date.  Cost (and quality, if we add that to the mix as a possible variant and have an “iron rectangle”) fails to enter into the discussion.  This tends to lead organizations with deep pockets to respond to lateness in a predictable way — by throwing resources at it.  This approach runs afoul of yet another aphorism in software known as Brooks’ Law: adding manpower to a late software project makes it later.

If we accept both Brooks’ Law and the Iron Triangle as established wisdom, our prospects for hitting long-range dates with any reliability start to seem fairly bleak.  We must do one of two things, with neither one being particularly attractive.  Either we have to plan to dramatically over-spend from day 1 (instead of when the project is already late) or we must pad our delivery date estimate to such an extent that we can realistically hit it (really, just surreptitiously altering delivery instead of cost, but without seeming to).

Continue reading How to Deliver Software Projects on Time


Static Analysis Isn’t Just for Techies

I do a lot of work with and around static analysis tools.  Obviously, I write for this blog.  I also have a consulting practice that includes detailed codebase and team fact-finding missions, and I have employed static analysis aplenty when I’ve had run-of-the-mill architect gigs.  Doing all of this, I’ve noticed that the practice has a reputation for being just for techies.

Beyond that even, people seem to perceive static analysis as the province of the uber-techie: architects, experts, and code statistics nerds.  Developing software is for people with bachelor’s degrees in programming, but static analysis is PhD-level stuff.  Static analysis nerds go off, dream up metrics, and roll them out for measurement of developers and codebases.

This characterization makes me sad — doubly so when I see something like test coverage or cyclomatic complexity being used as a cudgel to bonk programmers into certain, predictable behaviors.  At its core, static analysis is not about standards compliance or behavior modification, though it can be used for those things.  Static analysis is about something far more fundamental: furnishing data and information about the codebase (without running the code).  And wanting information about the code is clearly something everyone on or around the team is interested in.

To drive this point home, I’d like to cite some examples of less commonly known value propositions for static analysis within a software group.  Granted, all of these require a more indirect route than “install the tool, see what warnings pop up,” but they’re all there for the realizing, if you’re so inclined.  One of the main reasons that static analysis can be so powerful is scale — tools can analyze 10 million lines of code in minutes, whereas a human would need months.

Continue reading Static Analysis Isn’t Just for Techies


Avoid Technical Debt with NDepend

The term “technical debt” has become ubiquitous in the programming world.  In the most general sense, it reflects the idea that you’re doing something easy in the moment, but that you’re going to pay for, with interest, in the long run.  Conceived this way, to avoid technical debt would mean to avoid taking out these “time loans” in general.

There’s a subtle bit of friction, however, when using the (admittedly very helpful) concept of technical debt to communicate with business stakeholders.  For them, carrying debt is generally standard operating procedure and often a tool, and it doesn’t have quite the same connotation.  When developers talk about incurring technical debt, it’s overwhelmingly in the context of “we’re doing something ugly and dirty to get this thing shipped, and man are we going to pay for it later.”  That’s a far cry from the “I’m going to finance a fleet of trucks so that we can expand our delivery operation regionally” that an accountant or executive might understand.  Taking on technical debt is colloquially more akin to borrowing money from a guy that breaks thumbs.

The reason there’s this slight dissonance between the usages is that technical debt in codebases is a lot more likely to be incurred unwittingly (or improvidently).  The reason, in turn, for this could make up the subject of an entire post, but suffice it to say that the developers are often shielded from business decisions and consequences.  It is thus harder for them to be party to all factors of such a tradeoff — weighing those factors is a role often played by people with titles like “business analyst” or “project manager.”

In light of this, let’s talk about avoiding the “we break thumbs” variety of tech debt, and how NDepend can help.  This sort of tech debt takes the form of “things you realize probably aren’t great, but you might not realize how long-term damaging they are.”

Continue reading Avoid Technical Debt with NDepend


Easy to Miss Code Smells

The concept of a code smell is, perhaps, one of the most evocative in our profession.  The name itself has a levity factor to it, conjuring a mental image of one’s coworkers writing code so bad that it actually emits a foul odor.  But the metaphor has a certain utility as well in the “where there’s smoke, there may be fire” sense.

In case you’re not familiar, a code smell is an observable feature of the code (the smoke) that often points to a deeper existing problem (the fire).  When you say that a code smell exists, what you’re communicating is “you may be justified here, but I’m skeptical – in my experience, this is probably a design flaw.”

Of course, accusing code of having a smell is only slightly less incendiary to the author than accusing code of being flat out bad.  Them’s fightin’ words, as they say.  But, for all the arguments and all of the righteous indignation that code smell accusations have generated over the years, their usefulness is undeniable.

No doubt you’ve heard of some of the most common and easiest to visualize code smells.  The God Class, Primitive Obsession, and Inappropriate Intimacy all come to mind.  These indicate, respectively, a class in your code base doing way too much, a tendency to use primitive types when you should take advantage of classes, and a module or class that breaks encapsulation by knowing too many details about another.  The combination of their visual memorability and their wisdom has prodded us over the years to break things down, to create cohesive objects, and to preserve encapsulation.
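To pick on just one of those, here is a tiny, hypothetical illustration of Primitive Obsession and the small class that relieves it (the types are invented for the example):

    using System;

    // Smelly: a raw string stands in for a concept the domain clearly has,
    // so every caller re-implements (or forgets) the validation.
    public class SmellyAccount
    {
        public string Email { get; set; } = "";
    }

    // Better: a small type owns the rules once, and invalid values cannot exist.
    public sealed class EmailAddress
    {
        public string Value { get; }

        public EmailAddress(string value)
        {
            if (string.IsNullOrWhiteSpace(value) || !value.Contains("@"))
                throw new ArgumentException("Not a valid email address.", nameof(value));
            Value = value;
        }

        public override string ToString() => Value;
    }

    public class Account
    {
        public EmailAddress Email { get; set; } = new EmailAddress("user@example.com");
    }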

I would argue, however, that there are many more code smells out there than the big, iconic ones that get a lot of attention.  I’d like today to discuss a few that I don’t think are as commonly known.  I’ll make the case for why, once you’ve mastered avoiding the well-known ones, you should watch for these as well.

Continue reading Easy to Miss Code Smells


How to Get an Edge as an IT Management Consultant

I’ve made no secret of the fact that my consulting practice includes aspects of IT management consulting; indeed, I’ve frequently referred to it.  In short, one of my key offerings is to help strategic decision makers (CIOs/CTOs, dev managers, etc.) make tough or non-obvious calls about their applications and codebases.  Can we migrate this easily to a new technology, or should we start over?  Are we heading in the right direction with the new code that we’re writing?  We’d like to start getting our codebase under test, but we’re not sure how (un)testable the code is – can you, as an IT consultant, advise?

This is a fairly niche position, and one that sits high on the organizational trust ladder, so it’s good work to be had.  Because of that, I recently got a question along the lines of, “how do you get that sort of work and then succeed with it?”  In thinking about the answer, I realized it would make a good blog post, specifically for the NDepend blog.  I think of this work as true consulting, and NDepend is invaluable to me as I do it.

Before I tell you about how this works for me in detail, let me paint a picture of what I think of as a market differentiator for my specific services.  I’ll do this by offering a tale of two different consulting pitfalls that people seem to fall into when tasked with these sorts of high-trust, advisory consulting engagements.

Continue reading How to Get an Edge as an IT Management Consultant


How to Analyze a Complex Solution

I’ve made no secret that I spend a lot of time these days analyzing code bases as a consultant, and I’ve also made no secret that I use NDepend (and its Java counterpart, JArchitect) to do this analysis.  As a result, I get a lot of questions about analyzing code bases and about the tooling.  Today, I’ll address a question I’ve heard.

Can NDepend analyze a complex solution (i.e., one with more than 100 projects)?  If so, how do you do this, and how does it work?

Can NDepend Handle It?

For the first question — in a word, yes.  You certainly can do this with NDepend.  As a matter of fact, NDepend will handle the crippling overhead of this many projects better than just about any tool out there.  It will be, so to speak, the least of your problems.

How should you use it in this situation?  You should use it to help yourself get out of the situation.  You should use it as an aid to consolidating and partitioning into different solutions.

The Trouble with Scale

If you download a trial of NDepend and use it on your complex solution, you’ll be treated to an impressive number of project rules out of the box.  One of those rules that you might not notice at first is “avoid partitioning the code base through many small library assemblies.”  You can see the rule and explanation here.

We advise having less, and bigger, .NET assemblies and using the concept of namespaces to define logical components.

You can probably now understand why I gave the flippant-seeming answer above.  In a sense, it’d be like asking, “how do I use NDepend on an assembly where I constantly swallow exceptions with empty catch blocks?”  The answer would be, “you can use it to help you stop doing that.”
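To give a feel for what the rule above is checking, here is a simplified CQLinq sketch in the same spirit (this is not the shipped rule’s actual code, and the threshold is an arbitrary example):

    // <Name>Sketch: flag suspiciously small library assemblies</Name>
    warnif count > 0
    from a in Application.Assemblies
    where a.NbTypes < 20
    orderby a.NbTypes ascending
    select new { a, a.NbTypes, a.NbLinesOfCode }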

Continue reading How to Analyze a Complex Solution