Are We Really Bad at Estimating?

The topic of estimating can be a contentious one. Teams may be reluctant to estimate for fear of creating something stakeholders will use against them. Clients and stakeholders will always want to know what will be delivered and when, but how do you predict this if the work is new or unfamiliar? And if you work in an environment where estimates are usually wrong, should you skip estimating altogether and just get on with building something?

I believe a lot of the frustration and issues surrounding estimates stem from the belief that, as humans, we’re just bad at estimating.

But I don’t think that’s true.

We’re Actually Pretty Good at Estimating (Some Things)

Don’t get me wrong—we’re definitely bad at estimating some things. But others we’re quite adept at.

For example, later today I plan to write another blog post. I estimate the first draft will take two hours. I’m pretty sure it won’t take exactly two hours, but it will probably take between one and a half and three hours. For the purpose of planning my afternoon, that’s a good estimate.

Back when I was teaching in-person Certified ScrumMaster® courses, I would set up the room the day before: putting out supplies for each person in the class, hanging posters on the wall, and so on. From experience, I’d estimate that a typical room set-up would take 45 minutes. I’ve set up for so many Certified ScrumMaster courses that I feel fairly confident in that estimate.

There are probably a myriad of similar tasks that you find yourself estimating (successfully) most days—whether it’s fixing dinner, driving to a friend’s house, or going grocery shopping.

We’re pretty good at estimating these things because we have a certain level of familiarity with them. We’re not as good at estimating things we aren’t familiar with.

Data supports my claim that we’re not really that bad at estimating. In a review of the existing research on estimates, Magne Jørgensen, a University of Oslo professor and Chief Scientist at the Simula Research Laboratory, found most estimates to be within 20 to 30% of actuals. And on software projects, he did not find an overall tendency for estimates to be too low:

The large number of time prediction failures throughout history may give the impression that our time prediction ability is very poor and that failures are much more common than the few successes that come to mind. This is, we think, an unfair evaluation. The human ability to predict time usage is generally highly impressive. It has enabled us to succeed with a variety of important goals, from controlling complex construction work to coordinating family parties. There is no doubt that the human capacity for time prediction is amazingly good and extremely useful. Unfortunately, it sometimes fails us. –Magne Jørgensen

But if we’re not too bad at estimating, why is there a common perception that we are?

One Answer Lies in the Projects We Never Start

Imagine a boss who describes a new product to a team. The boss wants an estimate before approving or rejecting work on the project. Let’s suppose the project, if played out, would actually take 1,000 hours. Of course, we don’t know that yet, since the team is just now being asked to provide an estimate.

For this example, let’s imagine the team estimates the project will take 500 hours.

The boss is happy with this and approves the project.

But…in the end it takes 1,000 hours of work to complete. It comes in late and everyone involved is left with a vivid memory of how late it was.

Let us now imagine another scenario playing out in a parallel universe. The boss approaches the team for an estimate of the same project. The team estimates it will take 1,500 hours.

(Remember, you and I know this project is actually going to take 1,000 hours but the team doesn’t know that yet.)

So what happens?

Does the team deliver early and celebrate?

No. Because when the boss hears that the project is going to take 1,500 hours, she decides not to do it. This project never sees the light of day and no one ever knows that the team overestimated.

A project that is underestimated is much more likely to be approved than a project that is overestimated. This leads to a perception that development teams are always late, but it just looks that way because teams didn’t get to run the projects they had likely overestimated.
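
To see how this selection effect creates the perception, here’s a minimal simulation in Python. All of its numbers (the 1,000-hour actual effort, the spread of estimates, the boss’s approval threshold) are illustrative assumptions, not data from any study: even when estimates are unbiased, only the projects estimated cheaply enough get approved, so most of the projects anyone actually sees come in late.

```python
import random

# Minimal simulation of the selection effect described above. All numbers
# (true effort, estimate spread, approval threshold) are illustrative
# assumptions, not data from any real study.
random.seed(42)

TRUE_EFFORT = 1_000      # actual hours the project would take (unknown up front)
APPROVAL_LIMIT = 1_200   # boss only approves projects estimated below this
TRIALS = 10_000

approved = late = rejected = 0

for _ in range(TRIALS):
    # Assume estimates are unbiased: scattered evenly +/-50% around actual.
    estimate = TRUE_EFFORT * random.uniform(0.5, 1.5)
    if estimate <= APPROVAL_LIMIT:
        approved += 1
        if TRUE_EFFORT > estimate:   # delivered later than estimated
            late += 1
    else:
        rejected += 1                # overestimated: project never starts

print(f"Approved projects that ran late: {late / approved:.0%}")        # ~71%
print(f"Overestimated projects nobody saw: {rejected / TRIALS:.0%}")    # ~30%
```

Even though the simulated estimates are just as likely to be too high as too low, roughly 70% of the projects that actually run finish late, and the overestimated ones quietly vanish from everyone’s memory.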

Our Overconfidence Often Leads to Incorrect Estimates

As I mentioned, we’re not necessarily bad at estimating, but we can definitely get it wrong. In my experience, this usually stems from an overconfidence in our ability to estimate.

I’ve used an exercise in my classes to help people see this: I ask a series of ten questions and have each person provide a range that they are 90% confident will contain the answer.

For example, I might ask you to estimate when the singer Elvis Presley was born, and to give me a range of years that you are 90% certain will contain the correct answer. If you’re a huge Elvis fan, you’ll know the year he was born. You might even know the exact date, and as a result, you wouldn’t need to give a range of years at all.

But let’s say you’re less of a fan, and your range is going to be wider.

You might start by thinking, “Didn’t he have some hit records in the fifties? Or was it the sixties?”

Remember, I want you to give me a range of years that you are 90% confident will contain the correct year. You might then think that if Elvis was recording at the age of 20, and had hits in the fifties, an early year of his birth could be 1930.

And at the upper range, if he was recording in the sixties, perhaps he wasn’t born until 1940.

So your range might be: 1930–1940.

And in this case, you’d be correct, since Elvis was born in 1935.

But Elvis was an iconic figure and many people will be familiar with when he was alive.

So my other questions deliberately involve answers that are more difficult to narrow down. For example, I might ask how many iPads were sold in 2019, how many athletes competed in the 2016 Olympics, or the length of the Seine River.

Now, for each question, the parameters remain the same:

  • Provide a range of numbers (units/athletes/miles, etc.) that you think contains the correct answer.
  • Be 90% confident that the correct answer is within that range.

What happens is surprising! Even though I ask for ranges people are 90% certain of, most people get most questions wrong.

The ranges they provide are usually much narrower than they should be, given their unfamiliarity with the subject.

For example, let’s imagine that I have no idea how many iPads were sold in 2019. If I want to be 90% certain the range I give contains the correct answer, I should provide a huge range, say zero to one billion. That would be wildly exaggerated, but I’d be confident in my answer.

In my experience, the ranges people pick are narrower, suggesting that we overestimate our own ability to estimate accurately.
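
If you want to try a version of this exercise yourself, a simple way to score it is to count how many of a person’s ranges actually contain the true answers. Here’s a minimal sketch in Python; the three questions, the approximate true answers, and the participant’s ranges are all placeholders to be replaced with your own ten questions:

```python
# Minimal sketch for scoring the 90%-confidence-range exercise.
# The questions, true answers, and ranges below are placeholders;
# substitute your own ten questions and each participant's ranges.
true_answers = {
    "Year Elvis Presley was born": 1935,     # from the example above
    "Length of the Seine in miles": 483,     # approximate
    "Athletes at the 2016 Olympics": 11238,  # approximate
}

participant_ranges = {
    "Year Elvis Presley was born": (1930, 1940),
    "Length of the Seine in miles": (200, 400),
    "Athletes at the 2016 Olympics": (5000, 15000),
}

# Count how many ranges actually contain the true answer.
hits = sum(
    low <= true_answers[question] <= high
    for question, (low, high) in participant_ranges.items()
)
total = len(participant_ranges)

# A well-calibrated estimator should land near 90%; in practice,
# most people score far lower because their ranges are too narrow.
print(f"Calibration: {hits}/{total} = {hits / total:.0%} (target: 90%)")
```

Scores well below the 90% target are exactly what the exercise typically reveals.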

Obviously this is a simplified illustration, but there are a number of studies that suggest we are overconfident when it comes to forecasting.

Can We Get Better? The Data Suggests It’s Possible (with Feedback)

There’s data showing that when individual estimators are presented with evidence of their overconfidence, they do improve.

In one study of software development work, programmers gave correct estimates 64% of the time on the first ten items they estimated.

When provided with that feedback, they improved to 70% correct on the second set of ten items. And then to 81% on the third set, after additional feedback on their accuracy.

Helping estimators realize that excessive confidence in their own abilities is misplaced encourages a more collaborative approach to estimating.

If someone is absolutely convinced of the infallibility of their own estimates, that person won’t engage constructively in debates about the right estimate to provide.

Want More Help With Story Points? (Coming Soon)

Very soon I’ll be releasing some free video training to help solve some common story points problems. If you want to be the first to find out when it’s available, register now to join the wait list.


What Do You Think?

Do you think the perception that we’re bad at estimating is unfair? Have you and your team been able to improve estimates as you work together? How did you do it? Let me know in the comments.


