A few weeks back I promised someone I would blog about the unique challenges of estimating non-functional requirements. First, let's remember that a non-functional requirement is a requirement that is more about the state of being of the system than about one specific thing the system does. Non-functional requirements often have to do with performance, correctness, maintainability, interoperability, portability, and so on. They are often called the "-ilities" of a system because so many end in "ility." (By the way, in case you're wondering, non-functional requirements can be written as user stories.) The challenge with estimating non-functional requirements is that there are really two costs. First is the cost of initial compliance. Second is the cost of ongoing compliance. To see these two costs at work, let's consider an example.
Suppose we have a performance requirement and that the team is working on a new product. In the first sprint the team may be thinking about performance, but they aren't going to do any performance testing. There's no code yet. Code gets developed over the next few sprints, and say by sprint five the team decides there's enough of it that they want to start doing some performance testing during that sprint. Remember, our first cost is the cost of initial compliance. In this case, that is the amount of work the team will spend on performance testing in sprint five. That's not really much harder to estimate than most other product backlog items. The team thinks about it and puts some number of story points or ideal days on it as their estimate.
To continue our example, suppose they do the performance testing, and any necessary tuning it reveals, during sprint five. Well, in sprint six the team is adding some new features, and here is where the second cost comes in--the cost of ongoing compliance. Once the team accepts a non-functional requirement into the project (as our team did here in sprint five), they need to remain in compliance with that non-functional requirement for the remainder of the project. I think of this cost as a tax. Doing performance testing (or staying in compliance with any non-functional requirement) creates some amount of overhead on the team (the tax). This overhead or tax must be paid regularly. In some cases, the team and product owner will decide the tax must be paid every sprint. For this team, that would mean that every time they add a feature, they do performance testing of that feature and, likely, the whole system. In other cases the team and product owner may agree to pay the tax every few sprints. After all, a team might reason, it's unlikely we affected the performance characteristics with the user stories we added this sprint and, besides, we aren't really shipping after this sprint. The first of these cases, in which the team pays the tax every sprint, can be thought of as a sales tax or VAT. The second, in which the team pays every few sprints, is more like the estimated quarterly tax that independent contractors in the US pay.
So, back to the issue of how do we estimate this type of work? Well, estimate both parts separately. Estimate the cost of initial compliance just like any other user story or product backlog item. The team and product owner will also need to decide when they'll do this work, because timing affects the estimate: adding performance testing after five sprints is different than adding it after 20 sprints. For the tax portion, the team and product owner need to agree on when they will do the work: every sprint or after every n sprints. The team can then estimate how much work will be involved over the planned number of sprints and allocate that amount of work to each. For example, suppose the team and product owner agree that they will do performance testing in every fourth two-week sprint. The team then estimates that it will take six points of work every fourth sprint. That's about 1.5 points per sprint. If a team has a velocity of 30, 1.5 can be thought of as about a 5% tax. It's likely that the team could be quite wrong on this estimate the first time. Fortunately, the team can easily track how much effort goes into performance testing over a number of sprints and use that to revise their estimate of the ongoing cost.
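The tax arithmetic above is simple enough to sketch. Here's a small illustrative helper (the function name and structure are mine, not a standard tool; the numbers are the ones from the example: six points every fourth sprint against a velocity of 30):

```python
def ongoing_compliance_tax(points_per_payment, sprints_between_payments, velocity):
    """Spread the periodic compliance work evenly across sprints.

    Returns (points per sprint, tax as a fraction of velocity).
    """
    per_sprint = points_per_payment / sprints_between_payments
    return per_sprint, per_sprint / velocity


# Six points of performance testing every fourth sprint, velocity of 30.
per_sprint, tax = ongoing_compliance_tax(6, 4, 30)
print(per_sprint)       # 1.5 points per sprint
print(f"{tax:.0%}")     # 5% tax on the team's velocity
```

As the team tracks actual effort over several sprints, they can feed the revised numbers back into the same calculation to update the tax rate.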