The topic of estimation can be contentious. One big reason is that, when it comes to creating estimates for agile planning, people are simultaneously your greatest asset and greatest obstacle.
Better Estimates Are Built on Better Understanding
Estimating is a human task, and humans can be complicated. No matter how clearly you define and explain the process, estimates are influenced by bias, background, and individual perspectives. Even if you and your team understand the theory behind estimating with story points, if you don’t account for human nature and personality dynamics, you can still encounter problems.
I’m not saying your team members are problem people; I’m sure they’re awesome. It’s just that people bring baggage and preconceived notions to the estimation process.
How People Problems Cause Estimation Problems
A team's past experiences with estimation can contribute to future problems with estimation. For example:
- That one person who won't budge on an estimate.
- Those one or two people who appear to go along with the process, but aren't truly putting in the effort.
- Those new team members who are uncertain about estimates or story points, or who are intimidated by more dominant voices in the room.
- That one hold-out who refuses to estimate anything that isn't in their skillset. (The article "3 Roles That Need to Be Involved with Agile Estimating" goes into why it's so essential for the whole team to participate in giving estimates.)
If you can’t wrangle your individual team members' hidden biases and opinions—not to mention varying areas of skill and experience—you’ll struggle to work as a cohesive team and produce accurate estimates.
Other Reasons Teams Avoid the Estimation Process
Many teams are also reluctant to estimate for fear of creating a plan that stakeholders will use against them.
Clients and stakeholders will always want to know what will be delivered and when, but it's difficult to predict this if the work is new or unfamiliar, especially if these same stakeholders are expecting perfection rather than accuracy. (One sure way to help is to ensure everyone is on the same page about what kind of estimate is being provided.)
Teams with a history of creating plans and estimates they know are inaccurate, just to check a box, are likely to balk at providing any estimates at all, protesting that they just want to get on with building something. (Here's why, despite their protests, estimates can be helpful to developers.)
Sound familiar? Many of the issues surrounding estimates stem from the belief that, as humans, we're just bad at estimating.
But that's not true.
People Are Good at Estimating Certain Things
People definitely struggle to estimate some things, but there are others they're surprisingly adept at estimating accurately.
For example, later today I plan to write another blog post. I estimate it will take two hours to complete the first draft. It's unlikely to take exactly two hours, but it will probably take between one and a half and three hours. For the purpose of planning my afternoon, that's a good estimate.
(Read "5 Ways to Achieve Accurate Estimates Everyone Trusts" for more on why perfect is the enemy of good, including why it's best to express estimates as a range.)
Back when I was teaching in-person Certified ScrumMaster® courses, I would set up the room the day before. I would put a lot of supplies out for each person in the class. I had to hang some posters on the wall. And so on. From experience, I'd estimate that a typical room set-up would take 30-45 minutes. I've set up for so many Certified ScrumMaster courses that I feel fairly confident in that estimate.
There are probably a myriad of similar tasks that you find yourself estimating (successfully) most days—whether it's fixing dinner, driving to a friend's house, or going grocery shopping.
"We're good at estimating familiar things. Estimating unfamiliar things is harder."
We're pretty good at estimating these things because we have a certain level of familiarity with them. We're not as good at estimating things we aren't familiar with.
Proof That Software Estimates Are More Accurate Than They Seem
Data supports my claim that humans tend to estimate well.
In a 2004 review of the existing research on software estimates, University of Oslo professor and Chief Scientist at the Simula Research Laboratory Magne Jørgensen found most estimates to be within 20 to 30% of actuals. And on software projects, he did not find an overall tendency for estimates to be too low:
The large number of time prediction failures throughout history may give the impression that our time prediction ability is very poor and that failures are much more common than the few successes that come to mind. This is, we think, an unfair evaluation. The human ability to predict time usage is generally highly impressive. It has enabled us to succeed with a variety of important goals, from controlling complex construction work to coordinating family parties. There is no doubt that the human capacity for time prediction is amazingly good and extremely useful. Unfortunately, it sometimes fails us. –Magne Jørgensen
Busting the Myth That Software Projects Are Always Late
But if we're actually fairly good at providing accurate estimates, why is there a common perception that we are bad at it, especially when it comes to estimates on software projects?
One reason is that organizations tend to greenlight underestimated projects far more often than overestimated ones. (This blog post reveals why teams underestimate and the #1 reason even agile projects are late.)
Scenario 1: The Underestimated Project
Imagine a boss who describes a new product to a team. The boss wants an estimate before approving or rejecting work on the project. Let's suppose the project, if played out, would actually take 1,000 hours. Of course, we don't know that yet, since the team is just now being asked to provide an estimate.
For this example, let's imagine the team estimates the project will take 500 hours.
The boss is happy with this and approves the project.
But…in the end it takes 1,000 hours of work to complete. It comes in late and everyone involved is left with a vivid memory of how late it was.
Scenario 2: The Overestimated Project
Let us now imagine another scenario playing out in a parallel universe. The boss approaches the team for an estimate of the same project. The team estimates it will take 1,500 hours.
(Remember, you and I know this project is actually going to take 1,000 hours but the team doesn't know that yet.)
So what happens?
Does the team deliver early and celebrate?
No. Because when the boss hears that the project is going to take 1,500 hours, they decide not to do it. This project never sees the light of day, so no one ever knows that the team overestimated.
"Overestimated projects are less likely to get approved."
A project that is underestimated is much more likely to be approved than a project that is overestimated. This leads to a perception that development teams are always late, but it just looks that way because teams didn't get to run the projects they had likely overestimated.
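A quick simulation makes this selection effect easier to see. The sketch below is my own illustration, not from any published study: it assumes every project truly takes 1,000 hours (matching the scenarios above), gives each one an unbiased estimate, and approves only the projects whose estimate comes in under a hypothetical budget threshold.

```python
import random

random.seed(42)

TRUE_COST = 1000   # hours every project will actually take
BUDGET = 1200      # the boss approves anything estimated under this
N_PROJECTS = 10_000

approved = late = 0
for _ in range(N_PROJECTS):
    # Unbiased estimate: equally likely to be too high or too low.
    estimate = random.gauss(TRUE_COST, 300)
    if estimate < BUDGET:
        approved += 1
        if estimate < TRUE_COST:  # finishes later than estimated
            late += 1

print(f"approved: {approved} of {N_PROJECTS}")
print(f"late among approved: {late / approved:.0%}")
```

Run it and roughly two-thirds of the approved projects finish late, even though the estimates themselves were unbiased. The overestimated projects were filtered out before anyone could watch them finish early.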
Overconfidence in Our Ability Leads to Inaccurate Estimates
Although we're not necessarily bad at estimating, teams are definitely not as accurate as they could be. In my experience, this usually stems from overconfidence in our ability to estimate accurately.
To help people see how this happens, I ask a series of ten questions in class. The instructions are simple: provide a range that you are 90% confident will contain the answer. I explain that no one needs to know the exact answer; an answer counts as correct if the true value falls within the range given.
For example, I might ask estimators to estimate when the singer Elvis Presley was born: "Give me a range of years that you are 90% certain will contain the correct answer."
If the estimator is a huge Elvis fan, they'll know the year he was born. They might even know the exact date, and as a result, they likely wouldn't need a range of years at all to provide an accurate answer.
But most of the time, people don't know quite that much about Elvis. Their range needs to be wider because they are less familiar with what they are estimating. Remember, I want a range of years that people are 90% confident contains the correct year.
They might start by thinking, “Didn't he have some hit records in the fifties? Or was it the sixties?”
They might then think that if Elvis was recording at the age of 20, and had hits in the fifties, an early year of his birth could be 1930.
And at the upper range, if he was recording in the sixties, perhaps he wasn't born until 1940.
So they might come back with this estimate: Elvis was born from 1930–1940.
And in this case, they'd be correct, since Elvis was born in 1935.
The next nine questions deliberately ask for answers that might be more difficult to narrow down. For example, I might ask how many iPads were sold in 2019, how many athletes competed in the 2016 Olympics, or the length of the Seine River.
Now, for each question, the parameters remain the same:
- Provide a range of numbers (units/athletes/miles, etc.) that you think contains the correct answer.
- Be 90% confident that the correct answer is within that range.
What happens is surprising! Even though I tell them to give me ranges that they are 90% certain of, most people get most questions wrong.
"Even though I tell them to give me ranges that they are 90% certain of, most people get most questions wrong."
The ranges they provide are usually much narrower than they should be, given their unfamiliarity with the subject.
For example, let's imagine that I have no idea how many iPads were sold in 2019. If I want to be 90% certain the range I give contains the correct answer, I should provide a huge range, say zero to one billion. That would be wildly exaggerated, but I'd be confident in my answer.
In my experience, the ranges people pick are narrower, suggesting that we overestimate our own ability to estimate accurately.
This is a simplified illustration, but there are a number of studies that suggest people are overconfident when it comes to forecasting.
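If you'd like to try a version of this exercise with your own team, scoring it takes only a few lines. Here is a minimal sketch; the questions, ranges, and "true" values are illustrative stand-ins (only the Elvis answer comes from this article):

```python
# Score 90%-confidence range answers against the true values.
# An answer counts as a hit if the true value falls in the range.
answers = {
    # question: (low, high, true_value)
    "Year Elvis Presley was born": (1930, 1940, 1935),
    "Length of the Seine (miles)": (200, 300, 483),   # too narrow: a miss
    "iPads sold in 2019 (millions)": (100, 200, 45),  # placeholder figure
}

hits = sum(low <= true <= high for low, high, true in answers.values())
print(f"{hits}/{len(answers)} ranges contained the true value "
      f"({hits / len(answers):.0%} vs. the 90% target)")
```

A well-calibrated estimator giving 90% ranges should be right about nine times out of ten; in my classes, most people land far below that.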
The Secret to Getting Better, More Accurate Estimates
Data shows that individual estimators do improve when presented with evidence that their estimates are wrong.
In one study of software development work ("How Much Does Feedback and Performance Review Improve Software Development Effort Estimation? An Empirical Study"), researchers found that on the first ten items teams estimated, programmers were correct only 64% of the time.
When provided with feedback that their estimates were wrong, these same programmers improved to 70% correct on the second set of ten items. And then to 81% on the third set, after additional feedback on their accuracy.
It's clear that knowing how closely estimates match reality can help you and your team improve at estimating projects.
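One lightweight way to create that feedback loop is to record each ranged estimate alongside the actual effort, then review the hit rate batch by batch, much as the study did. A minimal sketch, with made-up names and data:

```python
from itertools import islice

def batch_hit_rates(records, batch_size=10):
    """records: iterable of (low_estimate, high_estimate, actual) tuples."""
    it = iter(records)
    while batch := list(islice(it, batch_size)):
        hits = sum(low <= actual <= high for low, high, actual in batch)
        yield hits / len(batch)

# Illustrative data only: three batches of ten ranged estimates.
history = [(4, 8, 6)] * 6 + [(2, 3, 5)] * 4    # 6 of 10 in range
history += [(4, 8, 7)] * 7 + [(1, 2, 4)] * 3   # 7 of 10
history += [(4, 8, 5)] * 8 + [(1, 2, 6)] * 2   # 8 of 10

for i, rate in enumerate(batch_hit_rates(history), start=1):
    print(f"batch {i}: {rate:.0%} of estimates contained the actual")
```

Seeing that trend after every batch of work is often all the feedback a team needs to start widening (or tightening) its ranges appropriately.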
Another way you and your team can get better at creating accurate estimates and plans is through training. Mountain Goat offers public and private estimating and planning training, as well as on-demand video courses, to help you and your team improve at creating agile estimates and plans.
I encourage you to explore all of our estimating and planning course offerings to find the one that works best for your situation.
Last update: June 9th, 2025