Monthly Archives: November 2010

Why track velocity?

Say you’re on an agile software development team and your Customer doesn’t care about release planning or even whether you make your commitment in an iteration. You’re a good team and he’s confident you’ll get the work done when it needs to get done.

Is there still value in tracking velocity? If your customer doesn’t care, is it worth the time to create burn-down charts?

[Burndown! OMG!]

You can’t manage what you don’t measure.

Velocity tells you how much work your team can do in an iteration–say one or two weeks. It’s based on two things: how many discrete items of work you completed and the unitless “size” of those items, as determined by the team themselves, often a long time ago.
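In code terms, that definition boils down to a single sum. Here’s a minimal sketch (the function name and the example sizes are made up for illustration):

```python
def velocity(completed_sizes):
    """Velocity for one iteration: the sum of the unitless sizes
    of the work items the team actually finished."""
    return sum(completed_sizes)

# Three stories finished this iteration, sized 3, 5, and 2 by the team:
print(velocity([3, 5, 2]))  # -> 10
```

Note that items started but not finished contribute nothing; only completed work counts.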

Knowing velocity has three separate benefits for the team, regardless of whether anyone else is interested in seeing that number:

  1. It can be motivating for people who want to try to improve that number,
  2. It tells you how much work your team is likely to be able to complete in a subsequent iteration, and
  3. It gives you feedback to improve the “size” estimates.

Motivation is great, but the biggest benefit I see is the future iteration planning one. A big part of iteration planning is having the team “commit” to a bundle of work. You can spit-ball an estimate, but software estimation is notoriously hard to get right. Having velocity takes away a lot of the guess-work. It’s not perfect–changes in the team and errors in estimation make the number less reliable. You’re not looking for a precise, hour-by-hour estimate, however. You just want a reasonable amount of confidence in the amount of work you can do in aggregate. Velocity gives you that.
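A hypothetical sketch of that planning step: take work off the top of the backlog until the running total of sizes would exceed your velocity estimate (here, the average of the last few iterations). The function and parameter names are my own, not from any particular tool:

```python
def planned_commitment(backlog_sizes, recent_velocities):
    """Greedily commit to backlog items, in priority order, until
    adding another would exceed the average recent velocity."""
    capacity = sum(recent_velocities) / len(recent_velocities)
    committed, total = [], 0
    for size in backlog_sizes:
        if total + size > capacity:
            break
        committed.append(size)
        total += size
    return committed

# Last three iterations averaged 11 points, so commit to the
# first three items (5 + 3 + 3 = 11) and leave the rest:
print(planned_commitment([5, 3, 3, 8, 2], [10, 12, 11]))  # -> [5, 3, 3]
```

This is deliberately crude–a real team would negotiate around the boundary item rather than cut off mechanically–but it shows how velocity turns “how much should we commit to?” into arithmetic.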

Making your commitment for an iteration is good discipline even if, again, no-one else cares. As a professional, you should be able to finish what you said you’d finish, when you said you’d finish it. Fortunately, knowing velocity helps you with that.

One of the compelling things about agile for me is the automatic feedback mechanisms. If you plan more work than you’re able to do, your velocity goes down and your next iteration should be more manageable. If you plan too little, and you’re able to take some work off the backlog, your velocity goes up, so you’ll automatically do more next time. Eventually, you find an equilibrium.

Likewise, if you tend to overestimate the “size” of work, your velocity will be high. “Size” has no direct relationship to time besides velocity. Any future estimates you make with previous size and velocity in hand will be an improvement over your initial, somewhat arbitrary ones, because you now have a better yardstick to measure work by.
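Since velocity is the only bridge from unitless size to time, forecasting is a single division. A minimal sketch, with assumed names and numbers:

```python
import math

def iterations_remaining(remaining_size, velocity):
    """Forecast how many more iterations the remaining work will take,
    using velocity to convert unitless size into iterations."""
    return math.ceil(remaining_size / velocity)

# 47 points left in the backlog, team completes about 10 per iteration:
print(iterations_remaining(47, 10))  # -> 5
```

The nice property is that consistent over- or under-sizing cancels out: inflated sizes produce an inflated velocity, and the ratio still comes out roughly right.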

Your first iterations are almost certain to be unsuccessful, because you don’t know velocity and you have poor estimates. If you work the process, however, your estimates will improve and you’ll start to have a hope of making your commitments. And it’s much more fun when you’re winning a game than when you’re losing all the time.

If you’re interested in delving deeper, James Shore does a way better job explaining this stuff than I can.