Imagine you have a single organization (choose your own scale: a single team or a group of teams arranged into a single, bigger organization).  Now imagine that you have multiple – perhaps many – products to manage concurrently.  By ‘manage’, I mean everything from the smallest bug fix or tweak to new major releases.  Now imagine that the customer/user bases of those disparate products are not coextensive.  That is, some users only care about their particular product to the exclusion of the rest – and they might be the only user base that cares about that particular product.

Lastly, imagine that your users/customers are generally dissatisfied with the speed with which enhancements or fixes are being rolled out to the product(s) they care about.  After all, with a single organization handling all of the products, and the efficiency gains to be had by grouping similar work items together, it is often going to be the case that a significant amount of time goes by without a particular product seeing any updates – because there is a major release for other products, for example.

What would you do?

One option is to use allocation-based project or portfolio management.  I’m deliberately avoiding the phrase “resource allocation” because I object to using ‘resource’ as a euphemism for people – so I turn the phrase around and come at it from the project side.

In any event, what I mean is a situation where you step away from having a single backlog (which you presumably have, since we’re talking about a single organization) and break the work into multiple backlogs (or value streams, as the case may be), allocating a portion of your people or teams to each.  The idea is that this new arrangement will ensure a slow but steady flow of improvements to each product, thus reassuring users that progress is being made, even if it is slow.

As with every tool or principle in any toolbox or framework, there are times when this will work and times when it will not (and times when it will make the situation worse).

Here are the things one must consider:

1. The stability of the allocation

If you can foresee that the allocation of people/teams to each product is going to be pretty stable, then this strategy might work for you (assuming the other conditions below obtain).  If, on the other hand, you’re going to have to adjust the allocation frequently based on changing market needs or customer demands, then this will probably make the situation worse.  By the time a person or team has been working with a particular product long enough to be familiar with it and really productive, they will be pulled off and have to come up the learning curve on another product before being as productive as possible.  Fast forward a few more cycles and you’ll have a lot of wasted horse trading and lower morale on your hands.

2. The distribution of knowledge and skills

That is, will each product have allocated to it a set of people with sufficient knowledge and skill to make the desired enhancements?  Oftentimes, this will not be the case.  For example, say there is a team of eight people responsible for eight products, but six team members are only familiar with three of the products, and the other five products are really only understood well by the other two team members.  Assuming having those two team members float between teams is not a viable solution (for whatever reason: too much task switching, etc.), one could not really make progress on all five of those products in an allocation-based system.  (I’m assuming that the team as a whole can make progress on these products due to the mentorship of those two team members.)
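The example above can be framed as a simple coverage check.  Here is a minimal sketch in Python – all names, numbers, and the particular allocation are invented for illustration – of how you might test whether a proposed allocation leaves any product without someone who actually understands it:

```python
products = ["A", "B", "C", "D", "E", "F", "G", "H"]

# Hypothetical skills matrix mirroring the example: six generalists only
# know products A-C; two specialists know D-H.
skills = {f"dev{i}": {"A", "B", "C"} for i in range(1, 7)}
skills["spec1"] = {"D", "E", "F", "G", "H"}
skills["spec2"] = {"D", "E", "F", "G", "H"}

# A proposed allocation of people to products.  The two specialists can
# each cover one product, which strands the remaining three of D-H.
allocation = {
    "A": ["dev1", "dev2"],
    "B": ["dev3", "dev4"],
    "C": ["dev5", "dev6"],
    "D": ["spec1"],
    "E": ["spec2"],
    "F": [],  # nobody left who knows F, G, or H
    "G": [],
    "H": [],
}

def uncovered(allocation, skills):
    """Return the products with no allocated person who knows them."""
    return [
        product for product, people in allocation.items()
        if not any(product in skills[person] for person in people)
    ]

print(uncovered(allocation, skills))  # ['F', 'G', 'H']
```

However you shuffle this particular allocation, at most two of the five specialist-only products can be covered at a time – which is the knowledge-distribution problem in a nutshell.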

3. The ease with which you can deploy to Production

If your products are such that you can fairly easily, quickly, and reliably deploy new builds (for software; models for other types of products), allocation will possibly be a winning strategy (again, assuming the other conditions obtain).  As the cost/pain of deployment rises, the benefits of allocation decline: the transaction cost associated with each deployment drives down the frequency with which users receive updates.

4. The type of work to be done for each product

If your products are mature and stable and ongoing work takes the form of minor enhancements, new features, the occasional bug fix, etc., allocating might work.  If this is the case, your users will indeed feel a slow but steady stream of improvements coming their way.

If, on the other hand, you’re working on major releases that cannot be deployed piecemeal because they represent pieces of a completely new paradigm that has to be deployed in a large batch, your customers will not experience this feeling of steady progress – because they’ll still, of necessity, be receiving updates in “big bang” releases.

Given that we’re positing multiple products, it is likely that different products will fall into different camps.  Without knowing specifics, the only guiding principle I can offer is that allocation becomes a relatively better idea as the importance of the mature, stable products relative to the other products increases.

So, assuming those four issues are favorable, allocation might work and is certainly a valid experiment to run to see if you get the intended result.  If, however, any one of them seems problematic, allocation will probably not work and will create a slew of other problems.

I’m interested to hear your take on this, especially if you’ve ever tried allocation in this way.  Please share experiences in the comments.
