As a measure of the amount of work completed in an iteration, velocity works extremely well when teams are relatively stable. If the same people stay on a team, it is reasonable to assume that the amount of work they complete will be relatively constant from iteration to iteration. This allows us to plan using inferences such as "This team has an average velocity of 25 points per iteration over the last year and they have time for 8 iterations in this new project; therefore they will complete around 200 points in those 8 iterations." But what do we do when we are managing an agile project where team membership or size changes frequently? To answer this question most effectively, you should collect data on how teams of different sizes have performed over time in your organization. When I was a VP of Development at a couple of agile organizations, I used to collect data on velocity and team size changes in a simple spreadsheet similar to this:
| Initial Team Size | New Team Size | Median of Last 5 | Iteration +1 | Iteration +2 | Iteration +3 |
|---|---|---|---|---|---|
The first column represented the size of the team before a change occurred. The next column represented the new size of the team (up or down). The third column was what I considered a reasonable "long-term average" velocity for the team at its initial size. Because teams could change frequently (by a person or two), I settled on using the median value of the team's last five iterations. The tradeoff in using a longer measure (the median of 15 iterations, perhaps) is that you'd have fewer observations to work with. The next columns represent the actual velocities of the team over the next three iterations. Notice that for the last team, values are not shown for the last two iterations.
This is usually because the team size changed again. If you have a significant number of teams, the rows in this type of spreadsheet accumulate quickly. In some of the organizations where I used this approach, we did not have a standardized definition of a story point (or we were using ideal days, which were not as normalized as you might think), so all analysis was done on a percentage basis. What I wanted to know was, "What is the average impact of adding a person to a seven-person team?" I would have loved the answer to be something like "Velocity goes up 15%."
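The "Median of Last 5" baseline can be sketched in a few lines of Python; the velocity history below is hypothetical, invented purely for illustration:

```python
from statistics import median

def baseline_velocity(recent_velocities):
    """Long-term baseline: median of the last five completed iterations.

    The median is less sensitive than the mean to a single unusually
    good or bad iteration.
    """
    return median(recent_velocities[-5:])

# Hypothetical velocity history, most recent iteration last.
print(baseline_velocity([22, 25, 19, 26, 25, 24]))  # 25
```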
Unfortunately, it wasn't that straightforward, because velocity often dipped for a couple of iterations before going up. By tracking the data I found that a team had usually settled in on a new velocity by the third iteration, which is why my spreadsheet above only tracks through Iteration +3. By all means, track more and see what you find. (But keep in mind that the data will get sparse, as team sizes will change again.) Another tab in my spreadsheet expressed all the data in percentage terms. Its first two rows looked like this:
| Initial Team Size | New Team Size | Iteration +1 | Iteration +2 | Iteration +3 |
|---|---|---|---|---|
(Example: Iteration +1 in the first row is -20% based on the team dropping its velocity from 25 to 20.) I then simply averaged these percentages for each team size change to get results like:
| Initial Team Size | New Team Size | % Change in Iteration +1 | % Change in Iteration +2 | % Change in Iteration +3 |
|---|---|---|---|---|
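The percentage calculation and the averaging step can be sketched as follows. Only the 25-to-20 drop comes from the text; the observation rows are hypothetical, made up to show the shape of the computation:

```python
from collections import defaultdict
from statistics import mean

def pct_change(baseline, actual):
    """Velocity change relative to the pre-change median baseline, in percent."""
    return (actual - baseline) / baseline * 100

# The example from the text: a baseline of 25 dropping to 20 in Iteration +1.
print(pct_change(25, 20))  # -20.0

# Hypothetical rows: (initial size, new size, [% change in Iterations +1..+3]).
observations = [
    (7, 8, [-20.0, -5.0, 15.0]),
    (7, 8, [-10.0, 0.0, 10.0]),
    (6, 5, [-15.0, -12.0, -10.0]),
]

# Group the rows by team-size transition, then average each iteration
# column across all observations of that transition.
by_transition = defaultdict(list)
for initial, new, changes in observations:
    by_transition[(initial, new)].append(changes)

for transition, rows in by_transition.items():
    print(transition, [round(mean(col), 1) for col in zip(*rows)])
```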
This allowed me to answer all sorts of questions, including:
- What will this team's velocity be if we add two people?
- How soon could we get this project if we added a person to each team?
- If I want all those projects done by the end of the year, how many people would we need to add?
- What would be the impact of not approving the new employees in the budget?
- What would be the impact of a 15% layoff?
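Questions like the first one can be answered by applying the averaged percentage impacts to a team's current baseline. A minimal sketch, with impact numbers that are hypothetical rather than taken from the text:

```python
# Hypothetical averaged impacts of adding a person to a seven-person team,
# expressed as percent change from the baseline in Iterations +1 through +3.
avg_impact = {1: -15.0, 2: -2.5, 3: 12.5}

baseline = 25  # the team's median velocity over its last five iterations

# Projected velocity for each of the next three iterations.
forecast = {i: baseline * (1 + pct / 100) for i, pct in avg_impact.items()}
print(forecast)
```

Summing the forecast values gives an expected number of points completed over the transition period, which is what the headcount questions above ultimately require.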
There are, of course, many flaws with this approach. Adding Susan to the project is very different from adding an unknown person to the project. Still, if I have data on averages across the organization, I can make assumptions about specifically adding Susan (if I want to, though there are more risks in doing so). Notice that the approach does not attempt to take into consideration who was added or even what skillset the person had. You could collect such data if you wanted. As fond as I am of collecting all sorts of data like this when I have access to it, I knew that collecting that type of data would have made this just hard enough that I wouldn't have done it regularly. I did collect a few other bits of data that I left out of the initial table (so as to have more horizontal room for the data of real interest).
For example, I collected data such as iteration length and the name of the team's ScrumMaster (the latter was in case I had questions a few weeks later). The approach described here was just simple enough that I could get empirical evidence of the impact of team size changes. This was invaluable when discussing headcount changes with product owners and the CEO.