Contained Failure

16 February 2011

I assume that almost everyone has by now heard the maxim “fail early, fail often” at least once.

I hope everyone has also heard the explanation that the point is not to fail early and often per se but rather to fail earlier rather than later, i.e., avoid large catastrophic failures by accepting small, frequent ones.

The practical advice is quite clear:  When you embark on a new project or task, set up the situation so that you get early warning signs of failure.  Pay attention to those signs of failure.  Be willing to accept that the time, money, and energy you’ve already put into the project is a sunk cost and don’t throw good money after bad.  Etc.  All of this focuses on the idea of preventing a larger, more catastrophic failure.  If you’re going to fail, at least fail small.

Many authors also go to lengths to emphasize the “learn” aspect of the “fail early, fail often” maxim – which is great.

What I often find lacking is a discussion of the practical implications for how one should coach teams or individuals.

There are inevitably situations where a person or team is faced with a problem and someone else knows that their planned solution is going to fail.  Perhaps this person is formally the team’s coach.  Perhaps they’re one member of a pair (as in pair programming).  Perhaps it’s a manager.  Perhaps it’s just someone tangentially related to the situation.  It doesn’t really matter.

The question is: what should that person do?

The natural inclination is to try to prevent the mistake – for obvious reasons.  Sometimes this takes the form of explaining why the plan won’t work.  Sometimes it’s telling a story about how that has been tried in the past and never works.  Sometimes it’s someone with authority just overruling the plan.  None of these approaches is terribly effective.  Sure, they may work sometimes.  Sure, the overruling approach can “work” 100% of the time, at least on the surface.  Anyone who’s been on the other side of that equation, though, knows pretty well the negative side effects of a team (or an individual) just being overruled when they’re convinced they’re right or having to argue endlessly with someone who thinks they’re wrong.

The fundamental problem with these situations is, in my opinion, the hidden premise that all mistakes should be avoided if at all possible.  In other words, never allow failure to occur.

The problem with this premise is twofold: 1) it’s impossible, 2) it prevents learning.

The trick is to craft the situation such that, if you fail, the failure is small, contained, and teaches you something valuable.  It’s not that the advice is “fail early, fail often so as to avoid catastrophic failure and, whenever possible, avoid failing at all.”  It’s “allow small failure both for the sake of avoiding larger failure and for the sake of learning.”

For example, if I’m working on a task with someone and my partner wants to do something that I am certain will not work, what should I do?  Rather than spending valuable time and effort arguing about it, the best thing to do is probably do it their way – so long as I can craft the situation to ensure that, if and when it fails, we haven’t spent a lot of time/energy/money doing it.  After all, there are only two possibilities: either I’m right and we will fail or I’m wrong and it will work.  Either way, we both win.  If I’m right, my partner has now learned something valuable in perhaps the most effective way possible (by experience) and with a small amount of expenditure (of time/energy/money) and we can peacefully move on to doing it my way.  On the other hand, if I’m wrong, I’ve learned something valuable AND the task is now done.

For this to end up being a win/win, though, you must contain the size of the experiment (what is potentially going to fail), for two reasons:

  1. It caps the cost of the failure so that the cost/benefit of the learning still works out in your favor.
  2. It limits externalities.  That is, it limits the number of variables to which the failure can later be attributed.  If the experiment (task) has too many variables (is too big), if and when it fails, it will be all too easy for people to argue about what really caused the failure.  The smaller the experiment, the more self-evident it becomes what went wrong.  The larger the failure, the less likely anyone is to learn anything since they will be able to rationalize the failure according to their own biases.

In sum, failure is not something to be avoided always and everywhere.  Because experience is often the most powerful teaching mechanism and because experience inevitably involves failures, failure is an excellent way to learn.  The critical distinction, though, is between contained, low-cost, high-yield failure on the one hand and open-ended, high-cost, no-yield failure on the other.  To get the first and avoid the second, craft your experiments well:

  • When you embark on a new project or task, set up the situation so that you get early warning signs of failure.
  • Pay attention to those signs of failure.
  • Be willing to accept that the time, money, and energy you’ve already put into the project is a sunk cost and don’t throw good money after bad.
  • And, the point of this post, be willing to allow others to conduct experiments you know are going to fail.  Don’t try to “save” them from failure.  Save them from that second kind of failure.
 | Posted by | Categories: Lean Principles, Project Management | Tagged: |

Imagine you have a single organization (choose your own scale: a single team or a group of teams arranged into a single, bigger organization).  Now imagine that you have multiple – perhaps many – products to manage concurrently.  By ‘manage’, I mean everything from the smallest bug fix or tweak to new major releases.  Now imagine that the customer/user bases of those disparate products are not coextensive.  That is, some users only care about their particular product to the exclusion of the rest – and they might be the only user base that cares about that particular product.

Lastly, imagine that your users/customers are generally dissatisfied with the speed with which enhancements or fixes are being rolled out to the product(s) they care about.  After all, with a single organization handling all of the products, and the efficiency gains to be had by grouping similar work items together, it is often going to be the case that a significant amount of time goes by without a particular product seeing any updates – because there is a major release for other products, for example.

What would you do?

One option is to use allocation-based project or portfolio management.  I’m deliberately avoiding the phrase “resource allocation” because I object to using ‘resource’ as a euphemism for people – thus, I turn the phrase around and come at it from the project side.

In any event, what I mean is a situation where you step away from having a single backlog (which you presumably have since we’re talking about a single organization) and break the work into multiple backlogs (or value streams, as the case may be) – allocating a portion of your people or teams to each.  The idea being that this new situation will ensure a slow but steady flow of improvements to each product, thus reassuring users that progress is being made, even if it is slow.

As with every tool or principle in any toolbox or framework, there are times when this will work and times when it will not (and times when it will make the situation worse).

Here are the things one must consider:

1. The stability of the allocation

If you can foresee that the allocation of people/teams to each product is going to be pretty stable, then this strategy might work for you (assuming the other conditions below obtain).  If, on the other hand, you’re going to have to adjust the allocation frequently based on changing market needs or customer demands, then this will probably make the situation worse.  By the time a person or team has been working with a particular product long enough to be familiar with it and really productive, they will be pulled off and have to climb the learning curve on another product before being as productive as possible.  Fast forward a few more cycles and you’ll have a lot of wasted horse trading and lower morale on your hands.

2. The distribution of knowledge and skills

That is, will each product have allocated to it a set of people with sufficient knowledge and skill to make the desired enhancements?  Oftentimes, this will not be the case.  For example, say there is a team of eight people responsible for eight products, but six team members are only familiar with three of the products and the other five products are really only understood well by the other two team members.  Assuming having those two team members float between teams is not a viable solution (for whatever reason: too much task switching, etc.), one could not really make any progress on all five of those products in an allocation-based system.  (I’m assuming that the team as a whole can make progress on these products due to the mentorship of those two team members.)
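To make that concrete, here is a hypothetical sketch (the people, products, and skill sets below are all invented) of checking a proposed allocation for coverage gaps before committing to it:

```python
# Check that every product has at least one allocated person who knows it.
# Names, products, and assignments are illustrative.

knows = {
    "Ann":  {"P1", "P2", "P3"},
    "Bob":  {"P1", "P2", "P3"},
    "Cara": {"P4", "P5", "P6", "P7", "P8"},
}

# Proposed allocation of people to product backlogs
allocation = {
    "P1": ["Ann"],
    "P4": ["Bob"],   # Bob doesn't know P4 -- a coverage gap
    "P5": ["Cara"],
}

# A product is a gap if nobody allocated to it knows it
gaps = [product for product, people in allocation.items()
        if not any(product in knows[p] for p in people)]
print("Coverage gaps:", gaps)
```

Running a check like this before finalizing an allocation surfaces exactly the situation described above: products that end up staffed by people who can’t yet make progress on them.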

3. The ease with which you can deploy to Production

If your products are such that you can fairly easily, quickly, and reliably deploy new builds (for software; models for other types of products), allocation will possibly be a winning strategy (again, assuming the other conditions obtain).  As the cost/pain of deployment rises, the benefits of allocation decline: the transaction cost associated with each deployment will reduce the frequency with which users receive updates.

4. The type of work to be done for each product

If your products are mature and stable and ongoing work takes the form of minor enhancements, new features, the occasional bug, etc., allocating might work.  If this is the case, your users will indeed feel a slow but steady stream of improvements coming their way.

If, on the other hand, you’re working on major releases that cannot be deployed piecemeal because they represent pieces of a completely new paradigm that has to be deployed in a large batch, your customers will not experience this feeling of steady progress – they’ll still, of necessity, be receiving “big bang” releases.

Given that we’re positing multiple products, it is likely that different products will fall into different camps.  Without knowing specifics, the only guiding principle I can offer is that allocation becomes relatively more attractive as the importance of the mature, stable products increases relative to the others.

So, assuming those four issues are favorable, allocation might work and is certainly a valid experiment to run to see if you get the intended result.  If, however, any one of them seems problematic, allocation will probably not work and will create a slew of other problems.

I’m interested to hear your take on this, especially if you’ve ever tried allocation in this way.  Please share experiences in the comments.


I’ve been thinking a lot about retrospectives lately both because our team has been struggling with them being ineffective/wasteful and because retrospectives were the subject of conversation at the last DC/NOVA Scrum Users Group meetup.

Our team has been tweaking and experimenting with various modifications to our process but one thing that we’ve left untouched for a year now is our retrospectives.  Every sprint, we ask “What went well?” “What went badly?” and “What can we improve?”  Rarely, though, do we follow through on those items under “What can we improve?”  We’ve tried forcing ourselves to make these items concrete, posting them near our board, bringing them up in the standup meeting, etc. but to no avail.

We’re now experimenting with changing the meeting itself to better foster improvement.  First: we’re going into the meeting with an agenda rather than having it be a free form discussion.  Generally, when we treat it as free form, memories are dull and it is difficult to start a conversation.  We’re hoping that a prepared agenda (to which anyone can contribute) will help grease the skids, so to speak.

Secondly, the “What can we improve?” section will now be more explicitly a “What experiments should we run?” discussion – things like “should we be using Selenium instead of our current solution?” or “what if we only tasked out half of the stories at the beginning of the sprint and left the second half until the middle of the sprint?”

The “What went well?” and “What went badly?” topics, then, can focus not only on unexpected things that came up but also on the results of these process, tooling, and workflow experiments that we’re running.

Hopefully, this will prove to be a true PDCA loop and really drive improvement.  After all, that’s what retrospectives are supposed to do.


A number of recent problems have caused our team to tweak our weekly Backlog Review meeting.  Specifically, we’ve added two items to the agenda: 1) a review of the stories tentatively scheduled for the next sprint (and possibly the one after that) and 2) providing our PO with an estimate of our story point capacity for the next sprint.  If you’re thinking “why weren’t you reviewing upcoming stories already, and why don’t you just use your velocity?” read on.

The first problem we experienced dealt with a specific story – a new database report that was needed.  The report was originally conceived and designed a few months ago but the project was postponed for several (valid) reasons.  The story, though, had been estimated months ago and then not touched again.  When business priorities were such that the report was again a priority, we simply dropped it into a sprint.  That’s when things got ugly.

In the intervening months, the estimate had become stale.  We had learned several rather critical lessons about this particular type of database report (we had developed similar ones in the meantime) but never incorporated that learning into this story or its estimate.  What we originally thought was an 8 point story instantly became 6 different 5 point stories.

That’s all fine and good – estimates are estimates and it is expected that the team will gain new knowledge and refine estimates as time goes on.  That’s not what happened, though.  Instead, we only realized the problem during our sprint planning meeting.  Since our velocity was hovering in the low 30s, the revised set of stories ate an entire sprint.  Product Management was not expecting that at all.  Though they were prepared for estimates to shift and priorities to have to be moved around, they were not prepared for a sprint that they thought (and were basically told) would hold 4 or 5 different stories being eaten up by one story.

The lesson we learned was that we simply couldn’t allow stories and their estimates to become stale – there was too much risk that the story itself no longer made sense or that the estimate was now wildly off.  Ideally, stories would never get stale because the backlog would be a pull system and stories would be planned and estimated very close to when development would begin.  Unfortunately, a huge piece of prioritization is weighing costs, and Product Management can’t gauge the cost/benefit of each story relative to others without estimates.  Thus, it sometimes happens that stories are estimated and then shelved for a while.

To deal with this, both the team and Product Management now know to be on the lookout for any stale stories or estimates that might be making their way to the top of the backlog.  Additionally, we’ve now broken our backlog review into two parts: estimating new stories and reviewing the one or two sprints’ worth of stories at the top of the backlog in case anything needs to be tweaked.  This uncovers problems a lot earlier than the sprint planning meeting and gives Product Management a chance to move around other priorities and reset customer expectations.

Secondly, our team’s capacity over the last several sprints has been somewhat erratic due to overlapping vacations, conference attendance, and other circumstances.  As a result, Product Management has been subjected to a few rude surprises on the first day of a new sprint when we tell them that our capacity for this sprint is half of what it was last time.  The fix for this (at least until our schedules settle down and we can really rely on our velocity again) is to take 2-3 minutes in the backlog review and estimate (in story points) our capacity for the next sprint.  This again gives Product Management some early warning and time to shift things around as necessary.
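That 2-3 minute estimate is really just arithmetic.  As a rough sketch (all the numbers below are made up), you can scale recent velocity by expected availability:

```python
# Rough capacity estimate: scale average velocity by expected availability.
# All numbers are illustrative.

recent_velocities = [32, 30, 16, 31]  # story points from the last few sprints
avg_velocity = sum(recent_velocities) / len(recent_velocities)

team_size = 5
sprint_days = 10
total_person_days = team_size * sprint_days

# Person-days lost to vacations, conferences, etc. during the next sprint
days_out = 12
availability = (total_person_days - days_out) / total_person_days

estimated_capacity = round(avg_velocity * availability)
print(f"Estimated capacity: {estimated_capacity} points")
```

The point isn’t precision – it’s giving Product Management a number they can plan against a week before sprint planning instead of on day one of the sprint.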

I’d be interested to hear if anyone else has had similar problems and what their solutions were.


IT Experimenting with Kanban

On Monday of last week, our network and desktop support team (what we call our “IT Team” as distinct from our software development teams) began experimenting with Kanban as our project management framework.  Heretofore, we’d simply been handling our project management and priorities in a sort of ad-hoc fashion.  We knew we wanted to ratchet it down, but didn’t want to use Scrum since the IT team is more of a support organization that would not operate well using time boxes.  We decided to experiment with Kanban for multiple reasons, including its suitability for support organizations and its focus on lean principles.

Our first week of experimenting with Kanban went quite well.  The major benefit we saw was the visualization of our work and workflow.  On Monday, we held our first retrospective and identified the first major process issue we want to address: widening the ownership of the backlog to the entire team.  Up until now, I had generally been the one ultimately responsible for what we worked on and in what order.  Obviously, there was input from the rest of the team and other stakeholders but there was a sense that I was the gatekeeper for priorities.

Kanban has highlighted the inefficiencies in that arrangement and we’re now trying to actively discuss the backlog and new issues at least once a day in our daily meeting – if not more often throughout the day.  This is definitely going to be an ongoing improvement effort so I expect we’ll keep this as an action item for at least several weeks until we get to a point where we think the entire team has full ownership of the backlog.


Scrum & Employee Performance Plans

Our company has been in the habit of doing periodic EPPs (Employee Performance Plans).  We’ve evolved from doing them once a quarter to once every four months to twice a year.  We’ve gradually lengthened the amount of time an EPP covers because of the overhead involved in putting them together and reviewing them.

When the Engineering department moved to Scrum, we obviously had to change the way we conceived of an EPP.  As usual, the problems we encountered weren’t caused by Scrum – just highlighted by it.

The main problem we encountered was “how can we say we’re planning on doing anything since we don’t set our own priorities?”  In the past, this didn’t seem like a problem because we just subtracted the amount of time needed to complete our EPP objectives from the amount of available time for “PM-sponsored” projects – or we built in the fact that we wouldn’t be working 100% of the time on those projects.  Either route is a problem because it reduces visibility into what the team’s priorities and capacity are.

Another problem was “how can we claim to be agile while putting together six month long personal plans?”

The latest problem we’ve encountered had to do with personal/career/team development, e.g., writing more unit tests, peer reviewing code, experimenting with peer programming, networking with peers outside the company, etc.

I feel like we’ve addressed all three problems fairly well.  Here’s how:

Regarding EPP projects, we realized that engineers making themselves personally responsible for entire projects was simply the wrong approach.  Granted, we wanted to get these projects (mostly technical debt reduction projects) done and granted they are important, but cutting out the rest of the team and the Product Owner is simply not the best way to accomplish them.

We realized that we should not be focusing on the whole project but simply that piece which is under our control.  Thus, we are now adopting EPP goals such as “Advocate for refactoring product X” – with objectives such as “educate Product Management about the costs and potential benefits” and “submit requested user stories and Definitions of Done to our Product Owner”.  In this way, we’re doing everything we can to see that these projects get done without sacrificing the prerogative of the PO to set priorities.  We’re also doing what only we can do: identify, explain, and plan to reduce technical debt or capitalize on new technologies.

Regarding the fact that we’re using six month EPPs, we are very explicit that EPPs – like all plans – should not be written in stone.  Thus, we’ve taken the approach of having quick, monthly reviews of our EPPs to see if there is anything we want to add, remove, or change given our evolving situation and knowledge.  These reviews sometimes only last five minutes; sometimes they last 30.  The point is that they don’t introduce much overhead and they allow us to course correct fairly frequently.

Regarding personal/career/team development goals, the problems we were running into regarded how to measure success.  If we had an EPP goal to “ensure unit tests are written,” what defines success?  What do we say at the end of six months if we didn’t write as many unit tests as we could have for the first month or two, then were pretty good for the rest of the period until the last week when we again may have missed some opportunities for tests?

We realized that we were not focusing on the real issue.  At the end of the period, we didn’t so much want a code coverage percentage as we wanted to be able to say that we had adopted or internalized certain practices.  That is, that we had developed certain habits.  Thus, at the end of the period, the question we ask ourselves is not “what is our code coverage like?” but rather “have we developed the habit of always writing unit tests?”  While this is more subjective, we feel it is still more valuable and it more accurately reflects what we actually want to do.


  • Plan to do those things where you add unique value – bearing in mind that no one person can tackle an entire project alone and, therefore, should not be solely responsible for that project.
  • Review the plan often, making changes as necessary.  The plan is not written in stone.
  • Don’t be seduced by “vanity metrics” like “how many unit tests have I written per story?”  Rather, focus on those habits or practices that you want to develop or internalize and then judge yourself against how well you have become the engineer you want to be.

Paying Down Our Technical Debt

One of the biggest problems (if not the biggest problem) we have on our team is technical debt that has accumulated over the last 4-5 years. Fortunately, the team and our Product Owners understand the problem of technical debt in general and recognize it in our case.
We’ve started taking small steps to reduce our amount of debt. I’d love feedback and suggestions about what we’re doing and anything that anyone else has found helpful.

The first thing we’ve done is resolved to not take on any additional debt intentionally. Of course, almost anything we implement will eventually become technical debt if it is allowed to collect dust long enough or if circumstances change – but the key point is that it wasn’t debt when we implemented it. Our hope is that the rate at which technical assets become technical debt will be such that we will be able to keep up with regular refactoring.

The second thing is that our POs have made it clear that they are willing to give us the time to pay off technical debt – but the burden is on the team to identify, flag up, and explain the technical debt to the PO so that it can be properly prioritized in our backlog.

The third thing is the team now has a weekly 30 minute meeting to discuss technical debt. We don’t have a firm agenda, but the discussion usually centers around a few points:

  • Are there any pieces of technical debt that we would like to discuss (presumably because we haven’t discussed them in this forum before)?
  • How costly would it be to pay down this piece of debt? (We estimate this using an XS, S, M, L, XL scale.)
  • How costly is the interest on this debt? That is, how much pain is it causing? (We estimate this using a yellow, orange, red scale. I’ll explain why in a minute.)
  • How should we begin the process of paying down this debt? Is this something we can “just fix” with a little effort on the side? Is this something we should write up as a user story and ask our PO to add to the backlog? Should we keep it in our back pocket for a hack-a-thon project?
  • Who is on point for this piece of debt? That is, who is going to “just fix it” or write the user story or keep it on their own hack-a-thon to-do list?

The fourth thing is that we’re now maintaining a technical debt board on a wall near our sprint backlog. We wanted a visual representation and reminder of our team’s biggest problem. It will hopefully help us stay focused on it, not let us forget about any given piece of technical debt, and help us track and encourage progress (a very important facet, in my opinion).

This is why we estimated cost using size but impact using color – we can visually represent each piece of technical debt using a piece of paper, note card, post-it note, etc. of the appropriate color and have a board where someone can assess all of the critical information at a glance.  If we had just used Fibonacci numbers for each scale and written them in the corners of cards or something, it would be much harder to get a sense for the whole situation.
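As a hypothetical illustration (the debt items below are invented), the same size-plus-color information can be mirrored in a few lines of code that sort the board so the most painful debt is reviewed first:

```python
# Minimal model of the technical-debt board: each item has a
# cost size (XS-XL) and an impact color (yellow/orange/red).
# All items are illustrative.

SIZES = ["XS", "S", "M", "L", "XL"]

debt_items = [
    {"name": "Legacy report generator",     "size": "L",  "impact": "red"},
    {"name": "Duplicated validation logic", "size": "S",  "impact": "orange"},
    {"name": "Outdated build script",       "size": "XS", "impact": "yellow"},
    {"name": "God-object service class",    "size": "XL", "impact": "red"},
]

# Group by impact (worst first), then by cost (cheapest first within a group)
for color in ("red", "orange", "yellow"):
    group = [item for item in debt_items if item["impact"] == color]
    for item in sorted(group, key=lambda i: SIZES.index(i["size"])):
        print(f"[{color:>6}] {item['size']:>2}  {item['name']}")
```

A physical board does the same thing at a glance, of course – this is just the data structure the board embodies.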

So far, we’ve identified two pieces of technical debt that we can “just fix” in our spare time and the fixes are in progress. We’ve also begun working on writing the stories necessary to eliminate another debt. Hopefully, we’ll be able to keep up this momentum, increasing our velocity and quality along the way.

Any tips from those who have gone before would be greatly appreciated!


Why We Dropped Ideal Hours

18 November 2009

Since converting to Scrum, my team has been in the practice of planning our capacity for a sprint in terms of ideal hours.  We had a fairly simple spreadsheet where we’d enter the number of vacation days each team member was planning to take and their estimated “overhead” percentage (all of the time spent in meetings, handling random things that come up, etc.).  During our Sprint Planning meetings, we would then estimate all of the tasks for the stories in terms of ideal hours – how long we expected that task to take assuming zero distractions and interruptions.  We then took on as many stories as we had enough ideal hours for.

Over time, I became less and less satisfied with this way of planning capacity.  In general, it didn’t seem to add much value to the process and it increased the length of the planning meeting.  Additionally, because we track the progress of a sprint using the number of hours of work remaining, the team had to continuously update both our task board and our online tool (Rally, ScrumWorks, etc.) or others wouldn’t have a good sense of how the sprint was going.

At best, this represented time (albeit, not a ton, but still enough that it hurt) not spent doing actual work.  At worst, it was a complete waste since there were usually caveats associated with the hours as they are presented on the board or in the tool, e.g., “well we’re way over in terms of hours we spent on this task, but we realized that all the work we did will save us time on the next 5 tasks so it’s basically a wash” or “we’re going to leave this task at 6, but we might lower it to 1 shortly depending on how something turns out”.

One could respond that those types of things can and should be tracked in a tool and the problem is not that we were using ideal hours, it was that we were being lax in updating the tool and, by extension, the rest of the team and stakeholders.  While this was initially my thought, I came to disagree for the following reasons:

  1. It seemed odd that we were estimating work in terms of a fictional unit – the ideal hour.  Since there is very rarely an extended period of time during which someone really doesn’t have any distractions and is free to focus on a single task, I don’t understand why we ask them to imagine how long a task would take under those conditions.  Granted, it makes the math easier, but that doesn’t make the estimate any better and might actually make it worse.
  2. We limited the granularity of ideal hours to whole hours.  Even if ideal hours were a real unit, limiting their granularity means limiting their accuracy and usefulness.  Granted, estimating in whole hour blocks is faster and easier, but the very mechanism that makes it such also severely limits its usefulness – especially when there is disagreement within the team about how long something is going to take and we just settle with the average.
  3. In my experience, estimating in ideal hours didn’t help us.  There were times when we had to go to the Product Owner and say that we couldn’t get all of the stories that were tentatively put on the sprint backlog done in time (because the number of ideal hours needed was higher than our capacity) – but those were precisely the times when the number of story points on the sprint backlog exceeded our velocity and/or when we had huge 8 point stories which we all later agreed should have been multiple stories whose points would have added up to more than 8.

For a while, I didn’t have a good suggestion as to how to do away with ideal hours.  Then I saw this presentation on the ScrumAlliance website.  I’m going to take the liberty of paraphrasing the author: if we’re really doing our story point estimates well, and we’re always trying to break things into smaller stories so that story sizes are as uniform as possible, and we’ve got some historical data to tell us what our velocity is, why don’t we just use that to figure out how much to put into a sprint and save ourselves the trouble of estimating tasks?
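The scheme is simple enough to sketch in code (the velocities and backlog below are made up, and a real team would apply judgment rather than pure arithmetic): average the last few sprints’ velocity and pull stories from the top of the backlog until the budget is spent.

```python
# Fill a sprint from the top of the backlog using historical velocity
# instead of per-task ideal-hour estimates. Numbers are illustrative.

recent_velocities = [31, 34, 29]  # points completed in recent sprints
budget = sum(recent_velocities) / len(recent_velocities)

# Backlog in priority order: (story, estimated points)
backlog = [("Story A", 8), ("Story B", 5), ("Story C", 13),
           ("Story D", 3), ("Story E", 8)]

sprint, committed = [], 0
for story, points in backlog:
    if committed + points > budget:  # stop at the first story that doesn't fit
        break
    sprint.append(story)
    committed += points

print(sprint, committed)
```

That’s the whole mechanism: no task-level hour estimates, no hour burndown to maintain, just velocity and priority order.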

Under such a scheme, we would still task things out so that we could uncover any gotchas and get a general consensus in the team as to what needs to be done.  We just wouldn’t estimate hours for tasks and then track progress in terms of hours remaining.  Granted, it might turn out that we sign up for an amount of work that causes us to either end early, have to work a little extra, or miss a story, but we run that risk with ideal hours too – we just spend more time doing it.

In terms of tracking the progress of a sprint, the author suggests having a burndown of tasks rather than hours – which is obviously less quantifiable but perhaps no less valuable.  Our current burndown uses hours, but it really just gives the illusion that we know exactly how many hours are remaining.  Not having an ideal hour burndown just means we don’t have that illusion anymore.  As the author points out, precision doesn’t equal accuracy and accuracy is what we’re really after.

Lastly, there was the issue of how to adjust the amount of work we pull into a sprint when we know that someone will be on vacation.  Ideal hours gives us a nice way to do this because we just subtract the appropriate number of hours from our capacity.  That really isn’t that accurate, though, because any given day in a two week period might be very different than any other day in terms of how much someone is able to focus on direct work.  Treating all days as identical in terms of capacity is again mathematically easier but perhaps no more accurate.  We could probably do just as well by “manually” adjusting the velocity down a few story points based on gut feelings.  Again, precision doesn’t equal accuracy.

When I presented these ideas to the team, we all decided that it was worth trying – after all, even if we crash and burn, we’ve only lost two weeks.  That was 2 sprints ago and we all seem pretty happy not using ideal hours.  Our task board is just the same except we don’t put hours on our tasks.  Our online tool is the same – we just assign every task 1 estimated hour.  Our burndown chart thus shows the burndown of tasks.  During planning, we are extra conscious of task breakdowns and try to make all tasks as uniform a size as reasonably possible.

All in all, I’m very happy we’ve moved away from ideal hours and are relying more on our velocity and “gut checks” to know how much work to pull into a sprint.  I highly recommend trying it.



Slavery

24 October 2009

Jack Milunsky has a brief post up wherein he discusses the issue of switching stories mid-sprint.  One of his points that I’d like to draw attention to is his response to the criticism that a team can be a slave to the process, i.e., too rigid in following Scrum mechanics and unwilling to change a sprint mid-stride because something urgent comes up.  As he says, “Well you’re either a slave to the process or the team is a slave to any chicken in the company who shouts the loudest.”

That is a fantastic point, in my opinion.  It is much like the oft-repeated response when something goes wrong while you’re doing Scrum: “would this have been any different if we were doing waterfall?”