Story Points – A Mathematical Perspective

In sprint planning meetings, the Development Team works to forecast the functionality that will be developed during the Sprint.  This often involves estimating the effort required to complete work on user stories.  Story points are an abstract measure of the effort required to complete a story, and it’s fascinating to me how a set of educated guesses turns out to be a fairly accurate estimate of the effort required to meet sprint goals.  Planning poker, or some variant of it, makes the whole exercise seem somewhat arbitrary.  But it works well in the end.

Product owners want estimates to be as accurate as possible.  It helps them set delivery timelines, make commitments to stakeholders, and make decisions that affect the project and possibly the entire business.

An accurate estimate!  Do you see something weird going on here?

According to the Cambridge Dictionary, an estimate is a guess of what the size, value, amount, cost, etc. of something might be.  Note that it is just a guess.  There is no guarantee that your story-point estimate represents the true effort you will eventually need to complete the user story.  It can be accurate if you are lucky, though.

But then, how can we rely on a series of story points – which are really a set of estimates – for planning and decision-making purposes?  If each estimate is potentially wrong, then how do we deal with the accumulated error of estimation for an entire sprint plan?

At this point, statistics comes to our help.

Statisticians tell us about something called a “normal distribution”.  It is a distribution that occurs naturally in many situations.  A quick Google search shows that this distribution applies in the following scenarios:

  • Heights of people.
  • Blood pressure.
  • Points on a test.
  • IQ scores.
  • Salaries.
  • Measurement errors.

The last application in this list is of interest to us.  I would argue that in the case of story-point estimation, our errors of estimation would follow a normal distribution, or something very close to it.


So, what does that mean?

Note that the normal curve is symmetric.  It’s reasonable to assume that the number of times I overestimate the effort required to complete a story (or the amount by which I overestimate some stories) will be roughly equal to the number of times I underestimate the effort (or the amount by which I underestimate).  With this mathematical intuition, I think it’s safe to assume that the overestimates will cancel out the underestimates over many sprints!
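This cancellation is easy to check with a small simulation.  The sketch below draws per-story estimation errors from a normal distribution; the sprint size, true effort, and error spread are made-up numbers for illustration, not figures from any real project:

```python
import random

random.seed(42)

# Assumed numbers for illustration only: 10 stories per sprint, and
# per-story estimation errors drawn from a normal distribution with
# mean 0 and a standard deviation of 2 story points.
STORIES_PER_SPRINT = 10
ERROR_STDDEV = 2

def sprint_error():
    """Total estimation error for one sprint: the sum of per-story errors."""
    return sum(random.gauss(0, ERROR_STDDEV) for _ in range(STORIES_PER_SPRINT))

# Any single sprint can be noticeably over- or under-estimated, but the
# average error across many sprints shrinks toward zero as the
# overestimates and underestimates cancel each other.
errors = [sprint_error() for _ in range(100)]
print(f"worst single-sprint error: {max(abs(e) for e in errors):.1f} points")
print(f"average error over 100 sprints: {sum(errors) / len(errors):.2f} points")
```

Running this, the worst single sprint is off by several points, while the average error across the 100 simulated sprints sits close to zero — which is exactly the property that makes the estimates usable for planning.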

And this shows why story points work nicely for planning purposes.

A consequence of this perspective is that there is no need to read too much into how many story points a team has completed in any given sprint.  Many factors might have affected a single sprint.  The real value, from a planning perspective, is in the averages; for example, the average story points completed over, say, the last 5 sprints.
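In code, that planning number is just a trailing moving average over the per-sprint totals.  A minimal sketch, using hypothetical sprint totals rather than the real project data shown in the graphs:

```python
def moving_average(points, window=5):
    """Trailing average of the last `window` values at each sprint
    (uses however many sprints are available at the start)."""
    return [sum(points[max(0, i - window + 1): i + 1]) / min(i + 1, window)
            for i in range(len(points))]

# Hypothetical per-sprint totals: lumpy from one sprint to the next...
completed = [8, 21, 13, 30, 18, 25, 22, 34, 27, 31]

# ...but the 5-sprint trailing average smooths the noise and exposes
# the underlying trend, which is what the planning conversation needs.
print(moving_average(completed))
```

The raw totals swing by 10+ points between consecutive sprints, while the averaged series changes gently, making it a far better input for forecasting.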


The above graph shows story points completed per sprint in an actual project I was recently involved in.  The lumpiness of this graph makes us feel that something is not quite right.  But the situation makes more sense if we consider the averages over 5 sprints.


This graph shows the progress the same team made during the first 4 months or so of the project.  After a rather slow start, the team did gain some momentum.

As long as we are able to identify the reasons for lumpiness in any two consecutive sprints, I think we should be reasonably happy with this sort of progress.  The longer-term trends showing the averages are better representatives of the progress a Scrum team makes over the course of a project.

So, this is what I propose.  Next time somebody passionately defends their estimate during a sprint planning meeting, just give up.  Make them happy by acknowledging their estimate as the correct one (within reason, of course), and take comfort in the fact that even if their estimate is wrong in your judgment, the normal distribution will take care of the errors of estimation in the long run.

Software Consulting

About two years ago, I joined Readify as a Senior Developer.

Readify is an award-winning IT consultancy.  It won the 2015 Microsoft Country Partner of the Year Award for Australia and the 2015 Microsoft Application Lifecycle Management Partner of the Year Award.  The great thing about joining Readify is that you get to work with really smart people.  As a result, you learn a lot.

But the most important aspect of this experience for me has been learning about the business of consulting.  A consultant’s mindset is very different from what I was used to.  My career as a developer spans over 19 years, working for various companies in three different countries.  But I had never worked for a consulting business before.  I had always worked for businesses that develop software products and make money by selling those products.

What’s so different about consulting?

I think it took me a while to understand why any business would engage a consulting firm for their software projects.  There are various possible scenarios in which consultants are engaged:

  • To increase the developer head count to get the project done within an acceptable time frame.
  • To use the expertise of the consulting firm in the technologies being used.
  • To use the expertise of the consulting firm in a specific business domain.
  • To get help in designing the overall architecture of their software project.
  • To get help with how the projects are run (think Scrum Masters).

There are probably other scenarios as well.

I have come to realize that it’s absolutely crucial for consultants to understand why they have been engaged.  Our top priority must be to support our customers.  And that support can take different forms.  But primarily it’s all about giving customers what they want, and helping them make informed decisions.

Suppose you are asked to make enhancements to a product developed over a period of time.  You discover a few problems at the architectural level.  Would you go on and refactor to improve the design, or would you just follow the existing architecture and implement the features and enhancements you were asked to implement?

Perhaps there is no clear answer here.  It depends.  But my point is that if you have worked in a product development environment, you will probably have the tendency to fix those architectural issues before you build anything new.  That’s what I was doing when I first started as a consultant.

I even started to tell my customer what exciting features they needed, as opposed to what features they were actually asking for!  That habit comes from years of working in a product-based business, where it’s not unusual for developers to help set product roadmaps, because in some cases developers have spent a lot more time with the product than a recently employed BA or product owner.

You can’t always do that as a consultant.

In a short engagement of, say, a month or so, you have to trust your clients; you have to accept that what they are asking for really fulfills their needs.  You might be working towards a deadline to ship the product.  You may feel that there are high-severity bugs in the system that will stop you from shipping the product in the given time frame.

The point is that that’s not your call to make.

Which bugs get prioritized for fixing, which features get into the next release, and which get relegated to future releases: these are decisions a product owner must make.  We must help the customer make those choices.  We must explain the pros and cons of all the options in front of them.  But we must not make those choices for them.  If you make those calls for the customer, you are taking an unreasonable risk on your shoulders: the risk that the client may feel they didn’t get what they wanted.

That’s perhaps the most important lesson I have learned so far.

A good post on the subject: