Imagine you have a single organization (choose your own scale: a single team or a group of teams arranged into a single, bigger organization).  Now imagine that you have multiple – perhaps many – products to manage concurrently.  By ‘manage’, I mean everything from the smallest bug fix or tweak to new major releases.  Now imagine that the customer/user bases of those disparate products are not coextensive.  That is, some users only care about their particular product to the exclusion of the rest – and they might be the only user base that cares about that particular product.

Lastly, imagine that your users/customers are generally dissatisfied with the speed with which enhancements or fixes are being rolled out to the product(s) they care about.  After all, with a single organization handling all of the products, and the efficiency gains to be had by grouping similar work items together, it is often going to be the case that a significant amount of time goes by without a particular product seeing any updates – because there is a major release for other products, for example.

What would you do?

One option is to use allocation-based project or portfolio management.  I’m deliberately avoiding the phrase “resource allocation” because I object to using ‘resource’ as a euphemism for people – thus, I turn the phrase around and come at it from the project side.

In any event, what I mean is a situation where you step away from having a single backlog (which you presumably have since we’re talking about a single organization) and break the work into multiple backlogs (or value streams, as the case may be) – allocating a portion of your people or teams to each.  The idea being that this new situation will ensure a slow but steady flow of improvements to each product, thus reassuring users that progress is being made, even if it is slow.

As with every tool or principle in any toolbox or framework, there are times when this will work and times when it will not (and times when it will make the situation worse).

Here are the things one must consider:

1. The stability of the allocation

If you can foresee that the allocation of people/teams to each product is going to be pretty stable, then this strategy might work for you (assuming the other conditions below obtain).  If, on the other hand, you’re going to have to adjust the allocation frequently based on changing market needs or customer demands, then this will probably make the situation worse.  By the time a person or team has been working with a particular product long enough to be familiar with it and really productive, they will be pulled off and have to come up the learning curve on another product before being as productive as possible.  Fast forward a few more cycles and you’ll have a lot of wasted horse trading and lower morale on your hands.

2. The distribution of knowledge and skills

That is, will each product have allocated to it a set of people with sufficient knowledge and skill to make the desired enhancements?  Oftentimes, this will not be the case.  For example, say there is a team of eight people responsible for eight products, but six team members are only familiar with three of the products and the other five products are really only understood well by the other two team members.  Assuming having those two team members float between teams is not a viable solution (for whatever reason: too much task switching, etc.), one could not really make progress on all five of those products under an allocation-based system.  (I’m assuming that, as things stand, the team as a whole can make progress on these products due to the mentorship of those two team members.)
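To make this concrete, here is a minimal sketch in Python (the names and skills data are invented to mirror the hypothetical team above, not drawn from any real team) of how one might sanity-check a proposed allocation against the team’s actual knowledge before committing to it:

    # Hypothetical skills matrix: which products each person knows well enough
    # to enhance without heavy mentoring. All names and data are invented.
    skills = {
        "Ann": {"A", "B", "C"}, "Ben": {"A", "B", "C"}, "Cara": {"A", "B", "C"},
        "Dan": {"A", "B", "C"}, "Eve": {"A", "B", "C"}, "Finn": {"A", "B", "C"},
        "Gina": {"D", "E", "F", "G", "H"}, "Hugo": {"D", "E", "F", "G", "H"},
    }

    # A proposed allocation: one person per product.
    allocation = {
        "A": ["Ann"], "B": ["Ben"], "C": ["Cara"], "D": ["Dan"],
        "E": ["Eve"], "F": ["Finn"], "G": ["Gina"], "H": ["Hugo"],
    }

    # Flag products where nobody allocated actually knows the product well.
    for product, people in allocation.items():
        if not any(product in skills[person] for person in people):
            print(f"Product {product}: no allocated person knows it well")

Run against the example above, this flags products D, E, and F: once the two experts are each tied down to a single product, three of the five products only they understand are left with someone who has to climb the learning curve from scratch.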

3. The ease with which you can deploy to Production

If your products are such that you can fairly easily, quickly, and reliably deploy new builds (for software; models for other types of products), allocation will possibly be a winning strategy (again, assuming the other conditions obtain).  As the cost/pain of deployment rises, the benefits of allocation decline, because the frequency with which users receive updates will fall as the transaction cost associated with each deployment grows.

4. The type of work to be done for each product

If your products are mature and stable and ongoing work takes the form of minor enhancements, new features, the occasional bug fix, etc., allocating might work.  If this is the case, your users will indeed feel a slow but steady stream of improvements coming their way.

If, on the other hand, you’re working on major releases that cannot be deployed piecemeal because they represent pieces of a completely new paradigm and have to be deployed in a large batch, your customers will not experience this feeling of steady progress – they’ll still of necessity be receiving updates in “big bang” releases.

Given that we’re positing multiple products, it is likely that different products will fall into different camps.  Without knowing specifics, the only guiding principle I can offer is that allocation becomes a relatively better idea as the importance of the mature, stable products relative to the other products increases.

So, assuming those four issues are favorable, allocation might work and is certainly a valid experiment to run to see if you get the intended result.  If, however, any one of them seems problematic, allocation will probably not work and will create a slew of other problems.

I’m interested to hear your take on this, especially if you’ve ever tried allocation in this way.  Please share experiences in the comments.


I’ve been thinking a lot about retrospectives lately both because our team has been struggling with them being ineffective/wasteful and because retrospectives were the subject of conversation at the last DC/NOVA Scrum Users Group meetup.

Our team has been tweaking and experimenting with various modifications to our process, but one thing that we’ve left untouched for a year now is our retrospectives.  Every sprint, we ask “What went well?”, “What went badly?”, and “What can we improve?”  Rarely, though, do we follow through on those items under “What can we improve?”  We’ve tried forcing ourselves to make these items concrete, posting them near our board, bringing them up in the standup meeting, etc., but to no avail.

We’re now experimenting with changing the meeting itself to better foster improvement.  First: we’re going into the meeting with an agenda rather than having it be a free form discussion.  Generally, when we treat it as free form, memories are dull and it is difficult to start a conversation.  We’re hoping that a prepared agenda (to which anyone can contribute) will help grease the skids, so to speak.

Secondly, the “What can we improve?” section will now be more explicitly a “What experiments should we run?” discussion – things like “should we be using Selenium instead of our current solution?” or “what if we only tasked out half of the stories at the beginning of the sprint and left the second half until the middle of the sprint?”

The “What went well?” and “What went badly?” topics, then, can focus not only on unexpected things that came up but also on the results of these process, tooling, and workflow experiments that we’re running.

Hopefully, this will prove to be a true PDCA loop and really drive improvement.  After all, that’s what retrospectives are supposed to do.


Waterhocking

19 March 2010

Today was the opening of AgileCoachCamp 2010 (#ACCNC) here in Durham.  So far, we’ve had a few rounds of lightning talks, which were limited to three minutes and no slides, as well as a lot of networking and generally good conversation.

In my lightning talk, I mentioned that I like to refer to our previous methodology (what we were doing prior to moving to Scrum) as “Waterhocking.”  I think it accurately captures the nature of our previous process.  It was definitely ad hoc insofar as we weren’t following any particular project management framework and were just handling things as they came up on a case-by-case basis.  It was similar to heavyweight “waterfall” methods in that we had extensive requirements gathering and documentation phases (BDUF), lengthy periods where the team would keep heads down and just try to build exactly what was documented, and too little user and acceptance testing, too late.  Lastly, our releases often felt like we were hocking the product up, since we were often under a fixed deadline and killing ourselves to get a product out the door only to find that the customer wasn’t happy with what was delivered.

Apparently, this label struck a chord with my fellow participants – it has a few mentions on Twitter.  I’m definitely looking forward to tomorrow’s sessions; my only regret is that I can’t be in six or seven places at once.  There are so many really experienced, really insightful people here that it is impossible not to miss great talks.


Our company has been in the habit of doing periodic EPPs (Employee Performance Plans).  We’ve evolved from doing them once a quarter to once every four months to twice a year.  We’ve gradually lengthened the amount of time an EPP covers because of the overhead involved in putting them together and reviewing them.

When the Engineering department moved to Scrum, we obviously had to change the way we conceived of an EPP.  As usual, the problems we encountered weren’t caused by Scrum – just highlighted by it.

The main problem we encountered was “how can we say we’re planning on doing anything since we don’t set our own priorities?”  In the past, this didn’t seem like a problem because we just subtracted the amount of time needed to complete our EPP objectives from the amount of available time for “PM-sponsored” projects – or we built in the fact that we wouldn’t be working 100% of the time on those projects.  Either route is a problem because it reduces visibility into what the team’s priorities and capacity are.

Another problem was “how can we claim to be agile while putting together six month long personal plans?”

The latest problem we’ve encountered has to do with personal/career/team development, e.g., writing more unit tests, peer reviewing code, experimenting with pair programming, networking with peers outside the company, etc.

I feel like we’ve addressed all three problems fairly well.  Here’s how:

Regarding EPP projects, we realized that engineers making themselves personally responsible for entire projects was simply the wrong approach.  Granted, we wanted to get these projects (mostly technical debt reduction projects) done and granted they are important, but cutting out the rest of the team and the Product Owner is simply not the best way to accomplish them.

We realized that we should not be focusing on the whole project but simply that piece which is under our control.  Thus, we are now adopting EPP goals such as “Advocate for refactoring product X” – with objectives such as “educate Product Management about the costs and potential benefits” and “submit requested user stories and Definitions of Done to our Product Owner”.  In this way, we’re doing everything we can to see that these projects get done without sacrificing the prerogative of the PO to set priorities.  We’re also doing what only we can do: identify, explain, and plan to reduce technical debt or capitalize on new technologies.

Regarding the fact that we’re using six month EPPs, we are very explicit that EPPs – like all plans – should not be written in stone.  Thus, we’ve taken the approach of having quick, monthly reviews of our EPPs to see if there is anything we want to add, remove, or change given our evolving situation and knowledge.  These reviews sometimes only last five minutes; sometimes they last 30.  The point is that they don’t introduce much overhead and they allow us to course correct fairly frequently.

Regarding personal/career/team development goals, the problem we kept running into was how to measure success.  If we had an EPP goal to “ensure unit tests are written,” what defines success?  What do we say at the end of six months if we didn’t write as many unit tests as we could have for the first month or two, were pretty good for the rest of the period, and then again missed some opportunities for tests in the last week?

We realized that we were not focusing on the real issue.  At the end of the period, we didn’t so much want a code coverage percentage as we wanted to be able to say that we had adopted or internalized certain practices.  That is, that we had developed certain habits.  Thus, at the end of the period, the question we ask ourselves is not “what is our code coverage like?” but rather “have we developed the habit of always writing unit tests?”  While this is more subjective, we feel it is still more valuable and it more accurately reflects what we actually want to do.

Summary

  • Plan to do those things where you add unique value – bearing in mind that no one person can tackle an entire project alone and, therefore, should not be solely responsible for that project.
  • Review the plan often, making changes as necessary.  The plan is not written in stone.
  • Don’t be seduced by “vanity metrics” like “how many unit tests have I written per story?”  Rather, focus on those habits or practices that you want to develop or internalize, and then judge yourself against how well you have become the engineer you want to be.

One of the biggest problems (if not the biggest problem) we have on our team is technical debt that has accumulated over the last 4-5 years. Fortunately, the team and our Product Owners understand the problem of technical debt in general and recognize it in our case.

We’ve started taking small steps to reduce our amount of debt. I’d love feedback and suggestions about what we’re doing and anything that anyone else has found helpful.

The first thing we’ve done is resolve not to take on any additional debt intentionally. Of course, almost anything we implement will eventually become technical debt if it is allowed to collect dust long enough or if circumstances change – but the key point is that it wasn’t debt when we implemented it. Our hope is that the rate at which technical assets become technical debt will be slow enough that regular refactoring can keep up with it.

The second thing is that our POs have made it clear that they are willing to give us the time to pay off technical debt – but the burden is on the team to identify, flag up, and explain the technical debt to the PO so that it can be properly prioritized in our backlog.

The third thing is the team now has a weekly 30 minute meeting to discuss technical debt. We don’t have a firm agenda, but the discussion usually centers around a few points:

  • Are there any pieces of technical debt that we would like to discuss (presumably because we haven’t discussed them in this forum before)?
  • How costly would it be to pay down this piece of debt? (We estimate this using an XS, S, M, L, XL scale.)
  • How costly is the interest on this debt? That is, how much pain is it causing? (We estimate this using a yellow, orange, red scale; I’ll explain why in a minute.)
  • How should we begin the process of paying down this debt? Is this something we can “just fix” with a little effort on the side? Is this something that we should write up a user story and request our PO add to the backlog? Should we keep it in our back pocket for a hack-a-thon project?
  • Who is on point for this piece of debt? That is, who is going to “just fix it” or write the user story or keep it on their own hack-a-thon to-do list?

The fourth thing is that we’re now maintaining a technical debt board on a wall near our sprint backlog. We wanted a visual representation and reminder of our team’s biggest problem. It will hopefully help us stay focused on it, not let us forget about any given piece of technical debt, and help us track and encourage progress (a very important facet, in my opinion).

This is why we estimated cost using size but impact using color – we can visually represent each piece of technical debt with a piece of paper, note card, post-it note, etc. of the appropriate color and have a board where someone can assess all of the critical information at a glance (example). If we had just used Fibonacci numbers for each scale and written them in the corners of cards or something, it would be much harder to get a sense of the whole situation.
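If it helps to see the scheme written down, here is a minimal sketch in Python (the fields, example items, and names are purely illustrative assumptions, not our actual board) of how the size-plus-color idea could be captured and sorted for a quick triage view:

    from dataclasses import dataclass

    SIZES = ["XS", "S", "M", "L", "XL"]      # cost to pay the debt down
    COLORS = ["yellow", "orange", "red"]     # interest/pain, red being worst

    @dataclass
    class DebtItem:
        title: str   # short description of the debt
        size: str    # one of SIZES
        color: str   # one of COLORS
        plan: str    # "just fix", "user story", or "hack-a-thon"
        owner: str   # who is on point

    # Illustrative items only.
    board = [
        DebtItem("Legacy build script", "S", "red", "just fix", "Alice"),
        DebtItem("No tests around billing module", "L", "orange", "user story", "Bob"),
        DebtItem("Hand-rolled ORM layer", "XL", "yellow", "hack-a-thon", "Carol"),
    ]

    # Triage view: most painful first, then cheapest to fix.
    def priority(item):
        return (-COLORS.index(item.color), SIZES.index(item.size))

    for item in sorted(board, key=priority):
        print(f"[{item.color:>6}] [{item.size:>2}] {item.title} -> {item.plan} ({item.owner})")

Even a toy list like this answers the same at-a-glance question the wall does: what is hurting us most, and how cheap is it to fix?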

So far, we’ve identified two pieces of technical debt that we can “just fix” in our spare time and the fixes are in progress. We’ve also begun working on writing the stories necessary to eliminate another debt. Hopefully, we’ll be able to keep up this momentum, increasing our velocity and quality along the way.

Any tips from those who have gone before would be greatly appreciated!
