Contained Failure

Feb 16, 2011

I assume that almost everyone has by now heard the maxim “fail early, fail often” at least once.

I hope everyone has also heard the explanation that the point is not failing early and often per se, but failing early rather than late, i.e., avoiding large catastrophic failures by accepting small, frequent ones.

The practical advice is quite clear:  When you embark on a new project or task, set up the situation so that you get early warning signs of failure.  Pay attention to those signs of failure.  Be willing to accept that the time, money, and energy you’ve already put into the project is a sunk cost and don’t throw good money after bad.  Etc.  All of this focuses on the idea of preventing a larger, more catastrophic failure.  If you’re going to fail, at least fail small.

Many authors also take pains to emphasize the “learn” aspect of the “fail early, fail often” maxim – which is great.

What I often find lacking is a discussion of the practical implications for how one should coach teams or individuals.

There are inevitably situations where a person or team is faced with a problem and someone else knows that their planned solution is going to fail.  Perhaps this person is formally the team’s coach.  Perhaps they’re one member of a pair (as in pair programming).  Perhaps it’s a manager.  Perhaps it’s just someone tangentially related to the situation.  It doesn’t really matter.

The question is: what should that person do?

The natural inclination is to try to prevent the mistake – for obvious reasons.  Sometimes this takes the form of explaining why the plan won’t work.  Sometimes it’s telling a story about how that approach has been tried in the past and never works.  Sometimes it’s someone with authority simply overruling the plan.  None of these approaches is terribly effective.  Sure, they may work sometimes.  Sure, the overruling approach can “work” 100% of the time, at least on the surface.  Anyone who’s been on the other side of that equation, though, knows the negative side effects: a team (or an individual) being overruled while convinced it’s right, or being forced to argue endlessly with someone who thinks it’s wrong.

The fundamental problem with these situations is, in my opinion, the hidden premise that all mistakes should be avoided if at all possible.  In other words, never allow failure to occur.

The problem with this premise is twofold: 1) it’s impossible, and 2) it prevents learning.

The trick is to craft the situation such that, if you fail, the failure is small, contained, and teaches you something valuable.  The advice is not “fail early, fail often so as to avoid catastrophic failure and, whenever possible, avoid failing at all.”  It’s “allow small failures both for the sake of avoiding larger failures and for the sake of learning.”

For example, if I’m working on a task with someone and my partner wants to do something that I am certain will not work, what should I do?  Rather than spending valuable time and effort arguing about it, the best thing to do is probably to do it their way – so long as I can craft the situation to ensure that, if and when it fails, we won’t have spent a lot of time/energy/money doing it.  After all, there are only two possibilities: either I’m right and we will fail, or I’m wrong and it will work.  Either way, we both win.  If I’m right, my partner has now learned something valuable in perhaps the most effective way possible (by experience) and at a small cost (of time/energy/money), and we can peacefully move on to doing it my way.  On the other hand, if I’m wrong, I’ve learned something valuable AND the task is now done.

For this to end up being a win/win, though, you must contain the size of the experiment (what is potentially going to fail), for two reasons:

  1. It caps the cost paid for the learning so that the cost/benefit still works out in your favor.
  2. It limits externalities.  That is, it limits the number of variables to which the failure can later be attributed.  If the experiment (task) is too big and has too many variables, then if and when it fails, it will be all too easy for people to argue about what really caused the failure.  The smaller the experiment, the more self-evident it is what went wrong.  The larger the failure, the less likely anyone is to learn anything, since people will be able to rationalize the failure according to their own biases.

In sum, failure is not something to be avoided always and everywhere.  Because experience is often the most powerful teaching mechanism and because experience inevitably involves failures, failure is an excellent way to learn.  The critical distinction, though, is between contained, low-cost, high-yield failure on the one hand and open-ended, high-cost, no-yield failure on the other.  To get the first and avoid the second, craft your experiments well:

  • When you embark on a new project or task, set up the situation so that you get early warning signs of failure.
  • Pay attention to those signs of failure.
  • Be willing to accept that the time, money, and energy you’ve already put into the project is a sunk cost and don’t throw good money after bad.
  • And, the point of this post, be willing to allow others to conduct experiments you know are going to fail.  Don’t try to “save” them from failure.  Save them from that second kind of failure.