Scrum & Employee Performance Plans

Our company has been in the habit of doing periodic EPPs (Employee Performance Plans).  We’ve evolved from doing them once a quarter, to once every four months, to twice a year.  We’ve gradually lengthened the period an EPP covers because of the overhead involved in putting them together and reviewing them.

When the Engineering department moved to Scrum, we obviously had to change the way we conceived of an EPP.  As usual, the problems we encountered weren’t caused by Scrum – just highlighted by it.

The main problem we encountered was “how can we say we’re planning on doing anything since we don’t set our own priorities?”  In the past, this didn’t seem like a problem because we just subtracted the amount of time needed to complete our EPP objectives from the amount of available time for “PM-sponsored” projects – or we built in the fact that we wouldn’t be working 100% of the time on those projects.  Either route is a problem because it reduces visibility into what the team’s priorities and capacity are.

Another problem was “how can we claim to be agile while putting together six month long personal plans?”

The latest problem we’ve encountered had to do with personal/career/team development, e.g., writing more unit tests, peer reviewing code, experimenting with pair programming, networking with peers outside the company, etc.

I feel like we’ve addressed all three problems fairly well.  Here’s how:

Regarding EPP projects, we realized that engineers making themselves personally responsible for entire projects was simply the wrong approach.  Granted, we wanted to get these projects (mostly technical debt reduction projects) done and granted they are important, but cutting out the rest of the team and the Product Owner is simply not the best way to accomplish them.

We realized that we should not be focusing on the whole project but simply that piece which is under our control.  Thus, we are now adopting EPP goals such as “Advocate for refactoring product X” – with objectives such as “educate Product Management about the costs and potential benefits” and “submit requested user stories and Definitions of Done to our Product Owner”.  In this way, we’re doing everything we can to see that these projects get done without sacrificing the prerogative of the PO to set priorities.  We’re also doing what only we can do: identify, explain, and plan to reduce technical debt or capitalize on new technologies.

Regarding the fact that we’re using six month EPPs, we are very explicit that EPPs – like all plans – should not be written in stone.  Thus, we’ve taken the approach of having quick, monthly reviews of our EPPs to see if there is anything we want to add, remove, or change given our evolving situation and knowledge.  These reviews sometimes only last five minutes; sometimes they last 30.  The point is that they don’t introduce much overhead and they allow us to course correct fairly frequently.

Regarding personal/career/team development goals, the problems we were running into regarded how to measure success.  If we had an EPP goal to “ensure unit tests are written,” what defines success?  What do we say at the end of six months if we didn’t write as many unit tests as we could have for the first month or two, then were pretty good for the rest of the period until the last week when we again may have missed some opportunities for tests?

We realized that we were not focusing on the real issue.  At the end of the period, we didn’t so much want a code coverage percentage as we wanted to be able to say that we had adopted or internalized certain practices.  That is, that we had developed certain habits.  Thus, at the end of the period, the question we ask ourselves is not “what is our code coverage like?” but rather “have we developed the habit of always writing unit tests?”  While this is more subjective, we feel it is still more valuable and it more accurately reflects what we actually want to do.

In summary:
  • Plan to do those things where you add unique value – bearing in mind that no one person can tackle an entire project alone and, therefore, should not be solely responsible for that project.
  • Review the plan often, making changes as necessary.  The plan is not written in stone.
  • Don’t be seduced by “vanity metrics” like “how many unit tests have I written per story?”  Rather, focus on those habits or practices that you want to develop or internalize and then judge yourself against how well you have become the engineer you want to be.

Spring Conferences

26 February 2010

I’ll be attending Agile Coach Camp 2010 – a bar camp for agile practitioners being held in Durham, NC, March 19-21.  A colleague and I will be there for the entire weekend, and we hope to meet up with another co-worker based in NC whom we haven’t seen in a while.

I’ll also be attending the Lean Software and Systems Conference in Atlanta from April 21st through the 23rd.

I’m really looking forward to these conferences and meeting anyone else who might be attending these.  If you’re planning on attending, drop me a line and perhaps we can arrange to meet up.


Paying Down Our Technical Debt

One of the biggest problems (if not the biggest problem) we have on our team is technical debt that has accumulated over the last 4-5 years. Fortunately, the team and our Product Owners understand the problem of technical debt in general and recognize it in our case.
We’ve started taking small steps to reduce our amount of debt. I’d love feedback and suggestions about what we’re doing and anything that anyone else has found helpful.

The first thing we’ve done is resolved to not take on any additional debt intentionally. Of course, almost anything we implement will eventually become technical debt if it is allowed to collect dust long enough or if circumstances change – but the key point is that it wasn’t debt when we implemented it. Our hope is that the rate at which technical assets become technical debt will be such that we will be able to keep up with regular refactoring.

The second thing is that our POs have made it clear that they are willing to give us the time to pay off technical debt – but the burden is on the team to identify, flag up, and explain the technical debt to the PO so that it can be properly prioritized in our backlog.

The third thing is the team now has a weekly 30 minute meeting to discuss technical debt. We don’t have a firm agenda, but the discussion usually centers around a few points:

  • Are there any pieces of technical debt that we would like to discuss (presumably because we haven’t discussed them in this forum before)?
  • How costly would it be to pay down this piece of debt? (We estimate this using the XS, S, M, L, XL scale.)
  • How costly is the interest on this debt? That is, how much pain is it causing? (We estimate this using a yellow, orange, red scale. I’ll explain why in a minute.)
  • How should we begin the process of paying down this debt? Is this something we can “just fix” with a little effort on the side? Is this something that we should write up a user story and request our PO add to the backlog? Should we keep it in our back pocket for a hack-a-thon project?
  • Who is on point for this piece of debt? That is, who is going to “just fix it” or write the user story or keep it on their own hack-a-thon to-do list?
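
For what it’s worth, the outcome of each discussion fits naturally into a small record.  Here’s a sketch of what that might look like in code – all names and the example item are hypothetical, not our actual tooling:

```python
from dataclasses import dataclass

# Hypothetical record of one debt item from the weekly discussion.
# The scales mirror the ones described above.
COST_SCALE = ["XS", "S", "M", "L", "XL"]       # cost to pay down the debt
INTEREST_SCALE = ["yellow", "orange", "red"]   # how much pain it causes

@dataclass
class DebtItem:
    description: str
    cost: str        # one of COST_SCALE
    interest: str    # one of INTEREST_SCALE
    strategy: str    # "just fix", "user story", or "hack-a-thon"
    owner: str       # who is on point

    def __post_init__(self):
        # Catch typos in the scales early.
        if self.cost not in COST_SCALE:
            raise ValueError(f"cost must be one of {COST_SCALE}")
        if self.interest not in INTEREST_SCALE:
            raise ValueError(f"interest must be one of {INTEREST_SCALE}")

item = DebtItem(
    description="Legacy reporting module has no unit tests",
    cost="M",
    interest="red",
    strategy="user story",
    owner="Alice",
)
```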

The fourth thing is that we’re now maintaining a technical debt board on a wall near our sprint backlog. We wanted a visual representation and reminder of our team’s biggest problem. It will hopefully help us stay focused on it, not let us forget about any given piece of technical debt, and help us track and encourage progress (a very important facet, in my opinion).

This is why we estimated cost using size but impact using color – we can visually represent each piece of technical debt using a piece of paper, note card, post-it note, etc. of the appropriate size and color and have a board where someone can assess all of the critical information at a glance (example). If we had just used Fibonacci numbers for each scale and written them in the corners of cards or something, it would be much harder to get a sense for the whole situation.

So far, we’ve identified two pieces of technical debt that we can “just fix” in our spare time and the fixes are in progress. We’ve also begun working on writing the stories necessary to eliminate another debt. Hopefully, we’ll be able to keep up this momentum, increasing our velocity and quality along the way.

Any tips from those who have gone before would be greatly appreciated!


Why We Dropped Ideal Hours

18 November 2009

Since converting to Scrum, my team has been in the practice of planning our capacity for a sprint in terms of ideal hours.  We had a fairly simple spreadsheet where we’d enter the number of vacation days each team member was planning on taking and their estimated “overhead” percentage (all of the time spent in meetings, handling random things that come up, etc.).  During our Sprint Planning meetings, we would then estimate all of the tasks for the stories in terms of ideal hours – how long we expected that task to take assuming zero distractions and interruptions.  We then took on as many stories as we had enough ideal hours for.

Over time, I became less and less satisfied with this way of planning capacity.  In general, it didn’t seem to add much value to the process and increased the length of the planning meeting.  Additionally, because we track the progress of a sprint using the number of hours of work remaining, the team had to continuously update both our task board and our online tool (whether Rally, ScrumWorks, etc.) or others wouldn’t have a good sense of how the sprint was going.

At best, this represented time (albeit, not a ton, but still enough that it hurt) not spent doing actual work.  At worst, it was a complete waste since there were usually caveats associated with the hours as they are presented on the board or in the tool, e.g., “well we’re way over in terms of hours we spent on this task, but we realized that all the work we did will save us time on the next 5 tasks so it’s basically a wash” or “we’re going to leave this task at 6, but we might lower it to 1 shortly depending on how something turns out”.

One could respond that those types of things can and should be tracked in a tool and the problem is not that we were using ideal hours, it was that we were being lax in updating the tool and, by extension, the rest of the team and stakeholders.  While this was initially my thought, I came to disagree for the following reasons:

  1. It seemed odd that we were estimating work in terms of a fictional unit – the ideal hour.  Since there is very rarely an extended period of time during which someone really doesn’t have any distractions and is free to focus on a single task, I don’t understand why we ask them to imagine how long a task would take under those conditions.  Granted, it makes the math easier, but that doesn’t make the estimate any better and might actually make it worse.
  2. We limited the granularity of ideal hours to whole hours.  Even if ideal hours were a real unit, limiting their granularity means limiting their accuracy and usefulness.  Granted, estimating in whole hour blocks is faster and easier, but the very mechanism that makes it such also severely limits its usefulness – especially when there is disagreement within the team about how long something is going to take and we just settle with the average.
  3. In my experience, estimating in ideal hours didn’t help us.  There were times when we had to go to the Product Owner and say that we couldn’t get all of the stories that were tentatively put on the sprint backlog done in time (because the number of ideal hours needed was higher than our capacity) – but those were precisely the times when the number of story points on the sprint backlog exceeded our velocity and/or when we had huge 8 point stories which we all later agreed should have been multiple stories whose points would have added up to more than 8.

For a while, I didn’t have a good suggestion as to how to do away with ideal hours.  Then I saw this presentation on the ScrumAlliance website.  I’m going to take the liberty of paraphrasing the author: if we’re really doing our story point estimates well, and we’re always trying to break things into smaller stories so that story sizes are as uniform as possible, and we’ve got some historical data to tell us what our velocity is, why don’t we just use that to figure out how much to put into a sprint and save ourselves the trouble of estimating tasks?
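
To make the arithmetic concrete, here’s a rough sketch of what “just use velocity” amounts to.  The numbers and story names are made up for illustration:

```python
# Velocity-based sprint planning: fill the sprint backlog from the
# prioritized product backlog until adding the next story would exceed
# the average velocity of recent sprints.  All data is illustrative.

def average_velocity(recent_sprints):
    """Average story points completed over the last few sprints."""
    return sum(recent_sprints) / len(recent_sprints)

def fill_sprint(backlog, capacity):
    """Take stories (in priority order) until capacity is reached."""
    taken, total = [], 0
    for name, points in backlog:
        if total + points > capacity:
            break
        taken.append(name)
        total += points
    return taken, total

velocity = average_velocity([21, 18, 24])   # 21.0 points per sprint
backlog = [("story A", 5), ("story B", 8), ("story C", 5),
           ("story D", 3), ("story E", 8)]
stories, committed = fill_sprint(backlog, velocity)
# stories -> ["story A", "story B", "story C", "story D"]; committed -> 21
```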

Under such a scheme, we would still task things out so that we could uncover any gotchas and get a general consensus in the team as to what needs to be done.  We just wouldn’t estimate hours for tasks and then track progress in terms of hours remaining.  Granted, it might turn out that we sign up for an amount of work that causes us to either end early, have to work a little extra, or miss a story, but we run that risk with ideal hours too – we just spend more time doing it.

In terms of tracking the progress of a sprint, the author suggests having a burndown of tasks rather than hours – which is obviously less quantifiable but perhaps no less valuable.  Our current burndown uses hours, but it really just gives the illusion that we know exactly how many hours are remaining.  Not having an ideal hour burndown just means we don’t have that illusion anymore.  As the author points out, precision doesn’t equal accuracy and accuracy is what we’re really after.

Lastly, there was the issue of how to adjust the amount of work we pull into a sprint when we know that someone will be on vacation.  Ideal hours gives us a nice way to do this because we just subtract the appropriate number of hours from our capacity.  That really isn’t that accurate, though, because any given day in a two week period might be very different than any other day in terms of how much someone is able to focus on direct work.  Treating all days as identical in terms of capacity is again mathematically easier but perhaps no more accurate.  We could probably do just as well by “manually” adjusting the velocity down a few story points based on gut feelings.  Again, precision doesn’t equal accuracy.

When I presented these ideas to the team, we all decided that it was worth trying – after all, even if we crash and burn, we’ve only lost two weeks.  That was 2 sprints ago and we all seem pretty happy not using ideal hours.  Our task board is just the same except we don’t put hours on our tasks.  Our online tool is the same – we just assign every task 1 estimated hour.  Our burndown chart thus shows the burndown of tasks.  During planning, we are extra conscious of task breakdowns and try to make all tasks as uniform a size as reasonably possible.
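
A task burndown like this is trivial to compute – a quick sketch, with made-up data:

```python
# Minimal task-count burndown: instead of summing estimated hours
# remaining, we simply count the open tasks at the end of each day.
# The sprint data below is invented for illustration.

def burndown(total_tasks, completed_per_day):
    """Return the number of open tasks at the end of each day."""
    remaining, points = total_tasks, []
    for done_today in completed_per_day:
        remaining -= done_today
        points.append(remaining)
    return points

# A ten-day sprint with 30 roughly uniform tasks:
print(burndown(30, [2, 4, 3, 3, 4, 2, 4, 3, 3, 2]))
# -> [28, 24, 21, 18, 14, 12, 8, 5, 2, 0]
```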

All in all, I’m very happy we’ve moved away from ideal hours and are relying more on our velocity and “gut checks” to know how much work to pull into a sprint.  I highly recommend trying it.


Coworking with Ourselves

28 October 2009

My company has gone to lengths to ensure that employees can work from anywhere in the world (provided they have an internet connection).  Since I live 70 miles (80 – 150 minutes, depending on traffic) from my home office, I have taken to working remotely.  For a couple years, I probably went into the office an average of once every two months.

In many ways, this is fantastic: I save between 160 and 300 minutes per day (!), no frustrations with traffic jams, fewer distractions, being home to help out with the kids if necessary, etc.

I’ve become quite the advocate for telecommuting and for the idea that distributed workforces can be every bit as productive as co-located ones.

Recently, however, I’ve started realizing that some of the criticism of telecommuting is quite justified, particularly the idea that distributed teams don’t share ideas often enough.  In my experience, we actually do share ideas quite frequently – but only with those with whom we work the most: our direct peers, reports, and superiors.  Usually, that means people who basically do the same thing you do.  In other words, even though as a company we try to have a fairly flat org structure and no silos, we create invisible silos outside of which we rarely, if ever, venture – not because there’s an actual organizational barrier there, but because we literally just don’t see other people.  While you can easily bump into a guy from another team at the water cooler, you rarely “bump” into someone over IM.

In one way, this is good: fewer distractions from your immediate work – more productivity.  In another way it sucks: less innovation, less cross-pollination between teams.  I am beginning to think that telecommuting tends to increase immediate productivity and decrease innovation.  By “innovation,” I mean those new processes, tools, products, resolved pain points, or ways of thinking that materialize when someone from your sales group (who happens to sit next to someone from your database group) says “I’ve been thinking, what if we knew which feature our customers care more about: feature x or feature y” and your database guy says “you know, I could tell you which customers use each and how often in about five minutes.”  That is exactly the kind of conversation and quick opportunity that rarely happen when teams are distributed.

Many people would blame this on several things: a) the sales guy would have to realize that someone else in the company might be able to provide this info (how you could make everyone in your company aware of every piece of information that someone else in the company might be able to generate is beyond me); b) inertia: the sales guy has to think “yeah, this is worth disturbing someone with an IM or a phone call” or “yeah, this is worth writing out in long hand in an email and hoping for a response,” etc.

When the team is co-located, these issues tend to disappear – the sales guy doesn’t need to realize that the problem is solvable to mention it to the person sitting next to him.  Regarding inertia, people who are physically sitting next to each other tend to just volunteer issues like that for the sake of conversation – it’s practically encouraged to break the silence.

There are lots of companies trying to solve these issues with various social tools for the enterprise, such as Yammer.  These tools, in my opinion, are good, but nothing beats physical co-location.

To try to combat this, I’ve been making an effort to go into the office at least once a week.  In addition, since a lot of my colleagues telecommute quite a bit, I’ve been trying to organize work-ins where once or twice a month, everyone from the office tries to be in the office on the same day.  A shocking idea, I know, but you have to understand that we’ve all realized the benefits of working remotely and do so quite regularly – we’ve got it down.

In essence, we’re coworking with ourselves, which ends up being quite interesting.  We have noticed the usual benefits associated with coworking – a lot of cross pollination of ideas, more “soft” understanding of what’s going on in other departments and teams, quick opportunities seized just because two people were sitting next to each other, etc.

So why don’t we just all work in the office all the time?  Because I don’t think there would be much marginal benefit.  So far, it seems that the benefits we accrue in one day would be the same as those accrued over an entire week – except with the added cost of going into the office every day.  There’s just something about being in the office being novel that keeps us from falling into the rut where we all sit at our desks and focus (read, “ignore all inputs not directly associated with the task at hand”).  By setting aside a day every week or two, we force ourselves to really focus on spontaneous collaboration while we are together – something we would not be able to do if we were always together.

I am eager to see how this experiment plays out.  Comments are very welcome, especially if anyone has had similar experiences or is conducting similar experiments.



Slavery

24 October 2009

Jack Milunsky has a brief post up over at AgileSoftwareDevelopment.com wherein he discusses the issue of switching stories mid sprint.  One of his points that I’d like to draw attention to is his response to the criticism that a team can be the slave of the process, i.e., too rigid in following Scrum mechanics and not willing to change a sprint mid stride because something urgent comes up.  As he says, “Well you’re either a slave to the process or the team is a slave to any chicken in the company who shouts the loudest.”

That is a fantastic point, in my opinion.  It is much like the oft-repeated response when something goes wrong while you’re doing Scrum: “would this have been any different if we were doing waterfall?”


Guaranteed Failure

9 October 2009

Tobias Mayer has a great quote that I just learned about, “Scrum guarantees failure in 30 days or less.”  It’s so true – in a good way, of course.  Better to fail within 30 days and get back on track than fail after a year or more and be beyond the point of no return.


Speaking of blocking ads automatically…

Lifehacker has a post describing Arora, an open source browser that comes with a lot of built-in functionality, including ad blocking – something that others have been talking about and which I discussed a little while ago.  One thing to note here is that it is open source – one of the arguments against the idea that browsers would start blocking ads by default was/is that the makers of browsers generally have a vested interest in ad revenue.  While this is obviously not a major browser, it does demonstrate that that argument is not air tight.


Seth Godin on New Marketing

24 September 2009

Seth Godin has a new post (“The platform vs. the eyeballs“) up about how marketing used to be about renting an audience (“we want this size of a TV audience at such and such a time”) and the metrics were how many eyeballs did you get or what was the CPM:

You, the marketer, don’t care about the long-term value of this audience. It’s like a rental car. You want it to be clean and shiny when you get it, you want to avoid getting in trouble when you return it, but hey, it’s a rental.

And so when we buy ads, we ask, “how big an audience” and then we design an ad with our brand in mind, not with the well-being of the media company or its audience in mind. And if we get a .1% or even a 1% response rate, we celebrate.

Godin’s thesis is that new marketing focuses on owning the audience and not aiming for a 1% conversion rate but a 90% conversion rate:

Old media was not the same as old branding. Media companies built audiences and then brands rented those audiences.

Suddenly the new media comes along and the rules are different. You’re not renting an audience, you’re building one. You’re not exhibiting at a trade show, you’re starting your own trade show.

If you still ask, “how much traffic is there,” or “what’s the CPM?” you’re not getting it. Are you buying momentary attention or are you investing in a long term asset?

The rest of the post concerns Godin’s ideas on how to build the platform he mentions.  It’s interesting, but the above is what concerns me most since it seems to validate at least some of what I was saying earlier about advertising being a long-term, win-win relationship between the consumer and content providers.  The analogy to renting an audience as opposed to cultivating a long-term audience is quite good, in my opinion.


The latest edition of FastCompany has an article by Adam L. Penenberg (author of the upcoming book Viral Loop) entitled Loop de Loop which is noted as being an adaptation of the aforementioned book.  While the article is, therefore, ostensibly about social media sites and their viral characteristics, the bulk of the article concerns the advertising implications thereof.  The future of advertising is pretty interesting to me and I have some thoughts on the article:

One assertion the author makes is

“Consumers will tolerate a whole lot of advertising if it’s disguised as entertainment, which is why marketers have responded to the DVR by making TV commercials more engaging–the kind that make viewers stick around–and integrating ads into shows.  The same holds true for the Web, and as with keyword-search ads, the trick is to market to people in a way that doesn’t make them feel like they’re being marketed to.”

I have to say that this seems to miss the point (unless I’m just nit-picking on the author’s choice of words).  The future of advertising is not disguising ads so that people will be fooled into consuming them; the future lies in being painfully honest with content consumers so that you don’t serve them barely-palatable ads but delicious ads – ads that they ask you to serve them.

You want consumers to want to consume the ads you’re serving them.  The way to do that is to make sure you’re always giving them what they want.  The only way to do that is to have a relationship with them, a relationship where there is a two way street of data flowing back and forth.  You allow them to tell you up front what they’re interested in right now, you serve them content (normal content and ads), they tell you what they liked and what they didn’t, and then you start over.

That sounds like a recommendation engine, doesn’t it?  In my mind, the vast majority of ads are just really dumb recommendations served up by really dumb recommendation engines.  The key is to start realizing that Amazon’s recommendation engine is a better mouse trap – a better way to serve up ads.  Once you’re thinking about it that way, it suddenly makes perfect sense to allow customers to (gasp!) rate your ads (Hulu does this, although I think their implementation could be a lot better).  Once your customers are telling you which ads they like and which they don’t, you can adjust the system (or the ad content) appropriately to keep them engaged with the system.
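
To illustrate the loop (and only to illustrate – this is a toy, not a description of any real ad system), here is what serving ads as a recommendation engine with rating feedback boils down to:

```python
# Toy ad-serving loop: consumers rate what they are shown, and the
# ratings feed back into what gets served next.  All names are invented.

class AdRecommender:
    def __init__(self, ads):
        # Start every ad with a neutral score.
        self.scores = {ad: 0 for ad in ads}

    def pick(self):
        """Serve the ad the audience has responded to best so far."""
        return max(self.scores, key=self.scores.get)

    def rate(self, ad, liked):
        """Feed the consumer's reaction back into the engine."""
        self.scores[ad] += 1 if liked else -1

engine = AdRecommender(["funny clip", "product demo", "celebrity spot"])
engine.rate("product demo", liked=True)
engine.rate("funny clip", liked=False)
engine.pick()   # -> "product demo"
```

Real recommendation engines are obviously far more sophisticated, but the shape of the feedback loop – serve, collect reactions, adjust – is the whole point.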

That’s the real danger confronting the advertising industry right now: consumers not only are disengaged from the system but actively blocking it.  It has been posited (one counter argument here; see also the same author’s ideas about what would happen if this were true and note how basically all 10 scenarios assume that ads are inherently bad) that, in a few years, the industry will hit a tipping point where blocking ads becomes so easy that the majority of users will do it.  As soon as that happens, the companies who don’t really understand the underlying issues are just going to put their ads in Flash.  I think we can all guess what the users’ next move will be.

Advertisers have to keep consumers engaged in the advertising system – give them what they want (useful and/or entertaining ads), let them give you feedback (which they want to be able to do), and allow them to tell you when they want you to recommend another product or service to them (which they want).

I grant that some really good ads are not good because they recommended exactly what you wanted when you wanted it, but rather just because they were entertaining (think of the Budweiser Frogs and Louie the Lizard campaigns).  Such ads are obviously not the product of a recommendation engine in the sense that a computer algorithm thought that you might buy more X if you saw this funny video clip.  The ad being displayed to you at a particular time and in a particular way, though, can easily be thought of as the product of a recommendation engine: the system knew that you liked brand Y, or really like funny ads, or ads with celebrities and served up content that you found entertaining.  What does the system get in return (since most entertaining ads don’t really generate new sales)?  Two things: 1) data (assuming the consumer feeds back) and 2) a consumer who is just a little more engaged than they were a moment ago.  Both of which are priceless.  What does the brand get from entertaining ads?  The same thing they currently do: brand awareness and a more engaged tribe (in the Seth Godin sense).

The consumer gets what they want, the advertiser gets what they want, and the content provider (the ad distributor) gets what they want.  Advertising in such a model is no longer a zero-sum game where consumers lose (because they are basically forced to consume ads they don’t like), the advertiser might eke out a meager return (but risks alienating a larger demographic with an ad that bombs or is just annoying), and the content provider loses since their users’ experience is that much poorer due to unwanted ads.

Granted, if you let users tell you when and how to serve them ads, you’re going to get a) a lot of people saying “never and through no medium” and b) a lower number of ads being consumed even by people who are open to ads.  So what?  Those aren’t the metrics we care about.  The metrics we care about are sales and happy customers.

That’s where we finally get back to the Loop de Loop article:  Regarding Andy Monfried, CEO of Lotame (a social media ad and marketing firm), the article says “For Monfried, an ad’s success isn’t based on how many people see the ad; it’s all about how much time someone spends engaging with it.”  Now, I am taking this quote out of context but I don’t think he and I are on completely different pages.  He is mostly talking about the level of engagement a consumer has with an ad within the context of a social network (and, by extension, how far they are responsible for spreading the ad throughout the rest of the network).  ‘Engagement’, here, doesn’t mean so much acting on the ad in terms of buying something or rating the ad, but rather in helping spread it.  Essentially, the consumer becomes a mechanism within the larger recommendation engine system: they recommend (either explicitly or implicitly) the ad to their friends.

The interesting point that emerges, then, is that the “problem” of not getting enough eyeballs on an ad because consumers are allowed to choose not to view it is probably more than made up for by coupling a consumer-centric ad recommendation system with the viral properties of a social network.  In other words: make sure your highly customized, targeted ads “work” in Facebook and on Twitter (1) (2).

The article is good as far as it goes regarding social networking and its effects on advertising, but I think the author missed a golden opportunity to discuss the more fundamental issues.  I look forward to seeing how the book turned out.
