I’ve been thinking a lot about retrospectives lately, both because our team has been struggling with retrospectives that are ineffective and wasteful and because retrospectives were the subject of conversation at the last DC/NOVA Scrum Users Group meetup.

Our team has been tweaking and experimenting with our process, but one thing we’ve left untouched for a year now is our retrospectives.  Every sprint, we ask “What went well?”, “What went badly?”, and “What can we improve?”  Rarely, though, do we follow through on the items under “What can we improve?”  We’ve tried forcing ourselves to make these items concrete, posting them near our board, bringing them up in the standup meeting, etc., but to no avail.

We’re now experimenting with changing the meeting itself to better foster improvement.  First, we’re going into the meeting with an agenda rather than having it be a free-form discussion.  Generally, when we treat it as free form, memories are dull and it is difficult to start a conversation.  We’re hoping that a prepared agenda (to which anyone can contribute) will help grease the skids, so to speak.

Second, the “What can we improve?” section will now be more explicitly a “What experiments should we run?” discussion – things like “should we be using Selenium instead of our current solution?” or “what if we only tasked out half of the stories at the beginning of the sprint and left the second half until the middle of the sprint?”

The “What went well?” and “What went badly?” topics, then, can focus not only on unexpected things that came up but also on the results of these process, tooling, and workflow experiments that we’re running.

Hopefully, this will prove to be a true PDCA loop and really drive improvement.  After all, that’s what retrospectives are supposed to do.


A number of recent problems have caused our team to tweak our weekly Backlog Review meeting.  Specifically, we’ve added two items to the agenda: 1) a review of the stories tentatively scheduled for the next sprint (and possibly the one after that) and 2) providing our PO with an estimate of our story point capacity for the next sprint.  If you’re thinking “why weren’t you reviewing upcoming stories already, and why don’t you just use your velocity?”, read on.

The first problem we experienced dealt with a specific story – a new database report that was needed.  The report was originally conceived and designed a few months ago, but the project was postponed for several (valid) reasons.  The story, though, had been estimated months ago and then not touched again.  When business priorities shifted and the report again became a priority, we simply dropped it into a sprint.  That’s when things got ugly.

In the intervening months, the estimate had become stale.  We had learned several rather critical lessons about this particular type of database report (we had developed similar ones in the meantime) but never incorporated that learning into this story or its estimate.  What we originally thought was an 8 point story instantly became 6 different 5 point stories.

That’s all well and good – estimates are estimates, and it is expected that the team will gain new knowledge and refine estimates as time goes on.  That’s not what happened, though.  Instead, we only discovered the problem during our sprint planning meeting.  Since our velocity was hovering in the low 30s, the revised set of stories (30 points in all) ate an entire sprint.  Product Management was not expecting that at all.  They were prepared for estimates to shift and for priorities to be moved around, but they were not prepared for a sprint they thought (and were basically told) would hold 4 or 5 different stories to be eaten up by one story.

The lesson we learned was that we simply couldn’t allow stories and their estimates to become stale – there was too much risk that the story itself no longer made sense or that the estimate was now wildly off.  Ideally, stories would never get stale because the backlog would be a pull system and stories would be planned and estimated very close to when development would begin.  Unfortunately, a huge piece of prioritization is weighing costs, and Product Management can’t gauge the cost/benefit of each story relative to others without estimates.  Thus, it sometimes happens that stories are estimated and then shelved for a while.

To deal with this, both the team and Product Management now know to be on the lookout for any stale stories or estimates that might be making their way to the top of the backlog.  Additionally, we’ve broken our backlog review into two parts: estimating new stories and reviewing the one or two sprints’ worth of stories at the top of the backlog in case anything needs to be tweaked.  This uncovers problems a lot earlier than the sprint planning meeting and gives Product Management a chance to move other priorities around and reset customer expectations.

The second problem is that our team’s capacity over the last several sprints has been somewhat erratic due to overlapping vacations, conference attendance, and other circumstances.  As a result, Product Management has been subjected to a few rude surprises on the first day of a new sprint when we tell them that our capacity for this sprint is half of what it was last time.  The fix for this (at least until our schedules settle down and we can really rely on our velocity again) is to take 2-3 minutes in the backlog review and estimate, in story points, our capacity for the next sprint.  This again gives Product Management early warning and time to shift things around as necessary.
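
To make that concrete, here is a quick Python sketch of the back-of-the-envelope math involved; the function name and numbers are mine (illustrative, not our actual process), but the idea is simply to scale recent velocity by the team’s actual availability:

```python
# Rough story-point capacity for the next sprint: scale recent
# velocity by the fraction of person-days we'll actually have.
def estimate_capacity(velocity, person_days_available, person_days_normal):
    availability = person_days_available / person_days_normal
    return round(velocity * availability)

# Example: velocity of 32; a 5-person team on a 10-day sprint has 50
# person-days, but vacations and a conference eat 12 of them.
print(estimate_capacity(32, 50 - 12, 50))  # -> 24
```

The whole calculation is that small, which is why it only costs us 2-3 minutes of the meeting.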

I’d be interested to hear if anyone else has had similar problems and what their solutions were.


Scrum & Employee Performance Plans

Our company has been in the habit of doing periodic EPPs (Employee Performance Plans).  We’ve evolved from doing them once a quarter to once every four months to twice a year.  We’ve gradually lengthened the amount of time an EPP covers because of the overhead involved in putting them together and reviewing them.

When the Engineering department moved to Scrum, we obviously had to change the way we conceived of an EPP.  As usual, the problems we encountered weren’t caused by Scrum – just highlighted by it.

The main problem we encountered was “how can we say we’re planning on doing anything since we don’t set our own priorities?”  In the past, this didn’t seem like a problem because we just subtracted the amount of time needed to complete our EPP objectives from the amount of available time for “PM-sponsored” projects – or we built in the fact that we wouldn’t be working 100% of the time on those projects.  Either route is a problem because it reduces visibility into what the team’s priorities and capacity are.

Another problem was “how can we claim to be agile while putting together six month long personal plans?”

The latest problem we encountered had to do with personal/career/team development goals, e.g., writing more unit tests, peer reviewing code, experimenting with pair programming, networking with peers outside the company, etc.

I feel like we’ve addressed all three problems fairly well.  Here’s how:

Regarding EPP projects, we realized that engineers making themselves personally responsible for entire projects was simply the wrong approach.  Granted, we wanted to get these projects (mostly technical debt reduction projects) done, and granted, they are important, but cutting the rest of the team and the Product Owner out of them is simply not the best way to accomplish them.

We realized that we should not be focusing on the whole project but simply that piece which is under our control.  Thus, we are now adopting EPP goals such as “Advocate for refactoring product X” – with objectives such as “educate Product Management about the costs and potential benefits” and “submit requested user stories and Definitions of Done to our Product Owner”.  In this way, we’re doing everything we can to see that these projects get done without sacrificing the prerogative of the PO to set priorities.  We’re also doing what only we can do: identify, explain, and plan to reduce technical debt or capitalize on new technologies.

Regarding the fact that we’re using six month EPPs, we are very explicit that EPPs – like all plans – should not be written in stone.  Thus, we’ve taken the approach of having quick, monthly reviews of our EPPs to see if there is anything we want to add, remove, or change given our evolving situation and knowledge.  These reviews sometimes only last five minutes; sometimes they last 30.  The point is that they don’t introduce much overhead and they allow us to course correct fairly frequently.

Regarding personal/career/team development goals, the problem we ran into was how to measure success.  If we had an EPP goal to “ensure unit tests are written,” what defines success?  What do we say at the end of six months if we didn’t write as many unit tests as we could have for the first month or two, were pretty good for the rest of the period, and then again missed some opportunities for tests in the last week?

We realized that we were not focusing on the real issue.  At the end of the period, we didn’t so much want a code coverage percentage as we wanted to be able to say that we had adopted or internalized certain practices.  That is, that we had developed certain habits.  Thus, at the end of the period, the question we ask ourselves is not “what is our code coverage like?” but rather “have we developed the habit of always writing unit tests?”  While this is more subjective, we feel it is still more valuable and it more accurately reflects what we actually want to do.

Summary

  • Plan to do those things where you add unique value – bearing in mind that no one person can tackle an entire project alone and, therefore, should not be solely responsible for that project.
  • Review the plan often, making changes as necessary.  The plan is not written in stone.
  • Don’t be seduced by “vanity metrics” like “how many unit tests have I written per story?”  Rather, focus on those habits or practices that you want to develop or internalize, and then judge yourself against how well you have become the engineer you want to be.

Paying Down Our Technical Debt

One of the biggest problems (if not the biggest problem) we have on our team is technical debt that has accumulated over the last 4-5 years. Fortunately, the team and our Product Owners understand the problem of technical debt in general and recognize it in our case.

We’ve started taking small steps to reduce our amount of debt. I’d love feedback and suggestions about what we’re doing and anything that anyone else has found helpful.

The first thing we’ve done is resolved to not take on any additional debt intentionally. Of course, almost anything we implement will eventually become technical debt if it is allowed to collect dust long enough or if circumstances change – but the key point is that it wasn’t debt when we implemented it. Our hope is that the rate at which technical assets become technical debt will be such that we will be able to keep up with regular refactoring.

The second thing is that our POs have made it clear that they are willing to give us the time to pay off technical debt – but the burden is on the team to identify, flag up, and explain the technical debt to the PO so that it can be properly prioritized in our backlog.

The third thing is that the team now has a weekly 30 minute meeting to discuss technical debt. We don’t have a firm agenda, but the discussion usually centers around a few points:

  • Are there any pieces of technical debt that we would like to discuss (presumably because we haven’t discussed them in this forum before)?
  • How costly would it be to pay down this piece of debt? (We estimate this using the XS, S, M, L, XL scale.)
  • How costly is the interest on this debt? That is, how much pain is it causing? (We estimate this using a yellow, orange, red scale. I’ll explain why in a minute.)
  • How should we begin the process of paying down this debt? Is this something we can “just fix” with a little effort on the side? Is it something we should write up as a user story and ask our PO to add to the backlog? Should we keep it in our back pocket for a hack-a-thon project?
  • Who is on point for this piece of debt? That is, who is going to “just fix it” or write the user story or keep it on their own hack-a-thon to-do list?

The fourth thing is that we’re now maintaining a technical debt board on a wall near our sprint backlog. We wanted a visual representation and reminder of our team’s biggest problem. It will hopefully help us stay focused on it, not let us forget about any given piece of technical debt, and help us track and encourage progress (a very important facet, in my opinion).

This is why we estimated cost using size but impact using color – we can visually represent each piece of technical debt using a piece of paper, note card, post-it note, etc. of the appropriate color and size and have a board where someone can assess all of the critical information at a glance (example). If we had just used Fibonacci numbers for each scale and written them in the corners of cards or something, it would be much harder to get a sense of the whole situation.
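
For what it’s worth, here is a small Python sketch of the encoding on our board; the item names, fields, and sort order are my own illustration, not anything we actually keep in code:

```python
from dataclasses import dataclass

SIZES = ("XS", "S", "M", "L", "XL")    # cost to pay the debt down
COLORS = ("yellow", "orange", "red")   # interest: how much pain it causes

@dataclass
class DebtItem:
    name: str
    cost: str      # one of SIZES
    interest: str  # one of COLORS
    plan: str      # "just fix", "user story", or "hack-a-thon"
    owner: str     # who is on point for this piece of debt

board = [
    DebtItem("legacy report generator", "XL", "red", "user story", "dev_a"),
    DebtItem("flaky build script", "S", "orange", "just fix", "dev_b"),
]

# Listing the red items first mimics scanning the wall for the most pain.
for item in sorted(board, key=lambda i: COLORS.index(i.interest), reverse=True):
    print(f"{item.interest:>6} {item.cost:>2}  {item.name} (owner: {item.owner})")
```

The point of keeping cost and interest on separate scales is exactly what the board gives us: two independent signals you can take in at a glance.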

So far, we’ve identified two pieces of technical debt that we can “just fix” in our spare time and the fixes are in progress. We’ve also begun working on writing the stories necessary to eliminate another debt. Hopefully, we’ll be able to keep up this momentum, increasing our velocity and quality along the way.

Any tips from those who have gone before would be greatly appreciated!


Why We Dropped Ideal Hours

18 November 2009

Since converting to Scrum, my team has been in the practice of planning our capacity for a sprint in terms of ideal hours.  We had a fairly simple spreadsheet where we’d enter the number of vacation days each team member was planning on taking and their estimated “overhead” percentage (all of the time spent in meetings, handling random things that come up, etc.).  During our Sprint Planning meetings, we would then estimate all of the tasks for the stories in terms of ideal hours – how long we expected each task to take assuming zero distractions and interruptions.  We then took on as many stories as our ideal-hour capacity allowed.
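
For illustration, the spreadsheet math amounted to something like the following Python sketch; the names, overhead percentages, and sprint length are made up, but the shape of the calculation is real:

```python
SPRINT_DAYS = 10
HOURS_PER_DAY = 8

team = [
    # (member, vacation_days, overhead_fraction)
    ("dev_a", 0, 0.30),
    ("dev_b", 2, 0.25),
    ("dev_c", 1, 0.40),
]

# Each member's working days, times hours per day, discounted by the
# fraction of time they expect to lose to meetings and interruptions.
capacity = sum(
    (SPRINT_DAYS - vacation) * HOURS_PER_DAY * (1 - overhead)
    for _, vacation, overhead in team
)
print(f"{capacity:.0f} ideal hours")  # 56 + 48 + 43.2 -> "147 ideal hours"
```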

Over time, I became less and less satisfied with this way of planning capacity.  In general, it didn’t seem to add much value to the process, and it increased the length of the planning meeting.  Additionally, because we tracked the progress of a sprint using the number of hours of work remaining, the team had to continuously update both our task board and our online tool (whether Rally, ScrumWorks, etc.) or others wouldn’t have a good sense of how the sprint was going.

At best, this represented time (albeit not a ton, but still enough that it hurt) not spent doing actual work.  At worst, it was a complete waste, since there were usually caveats attached to the hours as presented on the board or in the tool, e.g., “well, we’re way over in terms of hours spent on this task, but we realized that all the work we did will save us time on the next 5 tasks, so it’s basically a wash” or “we’re going to leave this task at 6, but we might lower it to 1 shortly depending on how something turns out”.

One could respond that those types of things can and should be tracked in a tool and the problem is not that we were using ideal hours, it was that we were being lax in updating the tool and, by extension, the rest of the team and stakeholders.  While this was initially my thought, I came to disagree for the following reasons:

  1. It seemed odd that we were estimating work in terms of a fictional unit – the ideal hour.  Since there is very rarely an extended period of time during which someone really doesn’t have any distractions and is free to focus on a single task, I don’t understand why we ask them to imagine how long a task would take under those conditions.  Granted, it makes the math easier, but that doesn’t make the estimate any better and might actually make it worse.
  2. We limited the granularity of ideal hours to whole hours.  Even if ideal hours were a real unit, limiting their granularity means limiting their accuracy and usefulness.  Granted, estimating in whole hour blocks is faster and easier, but the very mechanism that makes it such also severely limits its usefulness – especially when there is disagreement within the team about how long something is going to take and we just settle with the average.
  3. In my experience, estimating in ideal hours didn’t help us.  There were times when we had to go to the Product Owner and say that we couldn’t get all of the stories that were tentatively put on the sprint backlog done in time (because the number of ideal hours needed was higher than our capacity) – but those were precisely the times when the number of story points on the sprint backlog exceeded our velocity and/or when we had huge 8 point stories which we all later agreed should have been multiple stories whose points would have added up to more than 8.

For a while, I didn’t have a good suggestion as to how to do away with ideal hours.  Then I saw this presentation on the ScrumAlliance website.  I’m going to take the liberty of paraphrasing the author: if we’re really doing our story point estimates well, and we’re always trying to break things into smaller stories so that story sizes are as uniform as possible, and we’ve got some historical data to tell us what our velocity is, why don’t we just use that to figure out how much to put into a sprint and save ourselves the trouble of estimating tasks?
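
In other words, sprint planning reduces to something like the sketch below, a paraphrase in code rather than anything from the presentation itself; the function name and story data are hypothetical:

```python
def fill_sprint(backlog, velocity):
    """backlog: list of (story, points) in priority order."""
    sprint, total = [], 0
    for story, points in backlog:
        if total + points > velocity:
            break  # stop at the first story that won't fit
        sprint.append(story)
        total += points
    return sprint, total

backlog = [("story-1", 5), ("story-2", 8), ("story-3", 3),
           ("story-4", 13), ("story-5", 5)]
print(fill_sprint(backlog, velocity=20))  # -> (['story-1', 'story-2', 'story-3'], 16)
```

Stopping at the first story that doesn’t fit keeps the Product Owner’s priority order intact, rather than cherry-picking smaller stories from further down the backlog.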

Under such a scheme, we would still task things out so that we could uncover any gotchas and get a general consensus in the team as to what needs to be done.  We just wouldn’t estimate hours for tasks and then track progress in terms of hours remaining.  Granted, it might turn out that we sign up for an amount of work that causes us to either end early, have to work a little extra, or miss a story, but we run that risk with ideal hours too – we just spend more time doing it.

In terms of tracking the progress of a sprint, the author suggests having a burndown of tasks rather than hours – which is obviously less quantifiable but perhaps no less valuable.  Our current burndown uses hours, but it really just gives the illusion that we know exactly how many hours are remaining.  Not having an ideal hour burndown just means we don’t have that illusion anymore.  As the author points out, precision doesn’t equal accuracy and accuracy is what we’re really after.

Lastly, there was the issue of how to adjust the amount of work we pull into a sprint when we know that someone will be on vacation.  Ideal hours give us a nice way to do this because we just subtract the appropriate number of hours from our capacity.  That really isn’t very accurate, though, because any given day in a two-week period might be very different from any other day in terms of how much someone is able to focus on direct work.  Treating all days as identical in terms of capacity is again mathematically easier but perhaps no more accurate.  We could probably do just as well by “manually” adjusting the velocity down a few story points based on gut feelings.  Again, precision doesn’t equal accuracy.

When I presented these ideas to the team, we all decided that it was worth trying – after all, even if we crash and burn, we’ve only lost two weeks.  That was 2 sprints ago and we all seem pretty happy not using ideal hours.  Our task board is just the same except we don’t put hours on our tasks.  Our online tool is the same – we just assign every task 1 estimated hour.  Our burndown chart thus shows the burndown of tasks.  During planning, we are extra conscious of task breakdowns and try to make all tasks as uniform a size as reasonably possible.

All in all, I’m very happy we’ve moved away from ideal hours and are relying more on our velocity and “gut checks” to know how much work to pull into a sprint.  I highly recommend trying it.


Slavery

24 October 2009

Jack Milunsky has a brief post up over at AgileSoftwareDevelopment.com wherein he discusses the issue of switching stories mid-sprint.  One of his points that I’d like to draw attention to is his response to the criticism that a team can be a slave to the process, i.e., too rigid in following Scrum mechanics and unwilling to change a sprint mid-stride when something urgent comes up.  As he says, “Well you’re either a slave to the process or the team is a slave to any chicken in the company who shouts the loudest.”

That is a fantastic point, in my opinion.  It is much like the oft-repeated response when something goes wrong while you’re doing Scrum: “would this have been any different if we were doing waterfall?”

http://agilesoftwaredevelopment.com/blog/jackmilunsky/switching-stories-mid-sprint