Product Management

Complexity to assess feasibility

I’ve briefly mentioned this concept in earlier posts, but I’d like to spend more time on it, as it is one of the crucial parts of Scrum, and one of the hardest to understand, both for developers and for managers.

A recurring joke I hear when teams start using complexity points without fully understanding their principles is the attempt to quantify an exchange rate between complexity points and man-hours. Funny, but pointless.

Complexity points are part of the Scrum paradigm. Mixing up paradigms is rarely a good idea.

Yes Alice, you should follow the white rabbit!

Complexity isn’t another time dimension

The first thing to understand is that the notion of “complexity points” wasn’t introduced in Scrum because it’s hype, nor to give another name to the usual “man-hours” dimension.

If I take a shot at writing a mathematical expression for complexity points, I guess it would be something like this:

Complexity points (sprint) = Σ((feature dev time + Var (feature dev time))/average skill factor(team)) + Var (impediments)

Var() stands for variance: complexity points take into account part of the uncertainty in the team’s capacity to deliver features on time.
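To make the formula above concrete, here is a toy rendering of it in Python. All the numbers, and the idea of expressing variances directly as point values, are made up for illustration; Scrum itself prescribes none of this.

```python
def complexity_points(features, avg_skill_factor, impediment_variance):
    """Sum of (dev time + its variance) over the team skill factor,
    plus the variance due to impediments.

    features: list of (dev_time, dev_time_variance) tuples.
    """
    return sum((t + var_t) / avg_skill_factor for t, var_t in features) \
        + impediment_variance

# Two features of 3 and 5 ideal days, each with some uncertainty,
# a team skill factor of 1.0, and 2 points of impediment variance:
print(complexity_points([(3, 1), (5, 2)], 1.0, 2))  # 13.0
```

The point of the sketch is only that the estimate is more than raw dev time: uncertainty terms are baked in from the start.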

Everybody, in any industry, has somehow faced impediments while trying to reach a delivery milestone. And everybody has failed to reach such a milestone because of some of those impediments.

Everybody, in whichever domain, has been working with people of various skill levels and competencies. There’s no point in saying that one dev in your team is better than another: the team delivers the work together. Therefore, we’ll talk only about the average skill factor.

And everybody, at least in the software industry, has faced a situation where there was more work to do than estimated or expected: because at some point the devs stumbled upon a technical issue, or because at some point the team realized that a use case had been forgotten.

Hey, don’t shoot the messenger! I can tell you: it’s virtually impossible to predict all use cases, and no matter how good your Product Manager is, they’ll always miss some.

Bucketing, Planning Poker

Those are the names you may have heard when your team tries to estimate features in complexity points.

These meetings are usually called Backlog Refinements, and they’re the time to:

  1. Make sure everyone understands what’s expected of the feature
  2. Make sure everyone understands the specs
  3. Challenge the specs and try to assess their use-case coverage
  4. Make sure the devs get a sense of what needs to be done on the technical side (watch out! There should be no deep discussions of technical implementation!)

It is more than acceptable to postpone working on a feature that is not well understood, or whose use-case coverage is too poor.

Anyhow, each feature is reviewed from the top to the bottom of the backlog (which has been properly prioritized upfront). Once the team has understood everything about a feature, they can estimate the complexity of developing it and getting it Done.

Note: The definition of Done is yours. A good one is: developed, tested and validated on a staging environment.

Usually, estimations are given on the Fibonacci sequence (with the duplicate 1 removed): 0, 1, 2, 3, 5, 8, 13, 21, …

This can be called bucketing or planning poker, depending on how you organize the estimation. What matters is that the whole dev team takes part and reaches a consensus. Doing so ensures that everyone really understood what the feature is about, and gives everyone a voice so they can warn about unseen issues.
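The consensus mechanic can be sketched in a few lines: everyone votes on the scale, and as long as votes differ, the outliers explain themselves and the team votes again. The function name and the voting structure are my own invention, not part of any Scrum tooling.

```python
# The estimation scale from the section above.
SCALE = [0, 1, 2, 3, 5, 8, 13, 21]

def round_result(votes):
    """Return the agreed estimate, or None if the team must discuss and re-vote."""
    assert all(v in SCALE for v in votes), "votes must use the scale"
    return votes[0] if len(set(votes)) == 1 else None

print(round_result([5, 5, 5]))   # 5 -> consensus reached
print(round_result([3, 5, 13]))  # None -> outliers explain, then re-vote
```

A spread like [3, 5, 13] is precisely the valuable signal: someone sees a risk the others missed.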

Why Fibonacci?

The larger the feature, the greater the chance that you forgot some use cases, or that you will face impediments.

The Fibonacci sequence illustrates that principle: each new value grows roughly exponentially, to cope with ever more impediments.
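For the curious, the estimation scale is just the Fibonacci sequence with the duplicate early values collapsed. A minimal generator:

```python
def estimation_scale(n):
    """First n distinct values of the Fibonacci sequence: the planning scale."""
    scale, a, b = [], 0, 1
    while len(scale) < n:
        if a not in scale:   # skip the duplicate 1 at the start
            scale.append(a)
        a, b = b, a + b
    return scale

print(estimation_scale(8))  # [0, 1, 2, 3, 5, 8, 13, 21]
```

Note how the gaps widen: there is no “10 or 11 points” debate, which keeps the focus on rough magnitude rather than false precision.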

Well, let’s say I’ll stick to “Level 2”, and let somebody else give more insights into why to use Fibonacci.


Let’s get back to formulas. Velocity could be expressed as:

Velocity = Σ(complexity(sprint n-1) to complexity(sprint n-5))/5

So yeah, before you can have a clue of how fast you’ll deliver features, you’ll need to wait until the 6th sprint! Then you’ll be able to say that, roughly, you can expect to deliver X complexity points per sprint.
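In code, velocity is just a trailing mean over completed sprints (five of them, matching the divisor in the formula above). The sprint totals below are invented:

```python
def velocity(sprint_points, window=5):
    """Average complexity delivered over the last `window` completed sprints.

    Returns None while there isn't enough history yet.
    """
    if len(sprint_points) < window:
        return None  # keep iterating -- no reliable figure before then
    return sum(sprint_points[-window:]) / window

history = [18, 22, 20, 25, 21, 24]   # six completed sprints
print(velocity(history))             # (22+20+25+21+24)/5 = 22.4
```

Because the window slides, the figure keeps adapting as the team, the product, and the impediments change.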

NO! That’s not the moment to start making that stock-exchange joke again! Unless you fully embrace the metaphor and understand that the “value” will change over time.

What’s important is that you create your own frame of reference!

Your team may include brilliant developers, beginners, or a mix. Of course, there’ll be a direct correlation between their skill levels and the team’s velocity. But you’ll actually get a reliable metric, since it depends directly on your team, working together.

Iterations will make it possible to get predictability where there was nothing but fuzziness in planning.

Just like any adaptive system, Scrum sprints will enable you to:

  • assess your estimations after you’ve delivered the features, learn, and estimate better next time
  • face various kinds of impediments (bugs in development, production crashes, team members out of office) that will directly impact your velocity, thereby absorbing those impediments into the Var() part of the formula above.

All of that will create your very own frame of reference. And if you start working with another team, that work will need to be redone.

Time consuming, you say? Maybe. But it’s definitely worth it. People tend to say it’s impossible to plan a reliable roadmap in software development. My experience is that, sticking to this paradigm, I’ve been able to plan a complete product release a year ahead with a delta of about one month of work. Yes, there was still a delta at the end. And there probably always will be. But given the fast pace of spec changes, the same project would probably have taken two years to complete with an equivalent scope.

Controlling less is controlling more

It is important to bootstrap complexity-point evaluations (for instance with the Elatta Method), but after a while the evaluation should be made with regard to previous evaluations, and finally purely on gut feeling. When you reach that final stage, your team is accurate enough at estimating the actual development cost while integrating all the risks (human-related delays, refactoring needs, production issues, and other urgent temporary refocuses).
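One hedged way to picture “evaluation with regard to previous evaluations”: keep a few delivered stories as reference points, and snap each new gut-feeling number to the nearest one. The reference stories and their point values here are entirely hypothetical.

```python
# Hypothetical reference stories the team has already delivered.
references = {1: "fix a label typo", 3: "add a form field", 8: "new report page"}

def closest_reference(gut_feeling):
    """Snap a raw gut-feeling number to the nearest reference estimate."""
    return min(references, key=lambda pts: abs(pts - gut_feeling))

pts = closest_reference(2.5)
print(pts, references[pts])  # 3 add a form field
```

The mechanism matters more than the code: estimates stay anchored to the team’s own history, not to an external time unit.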

Divide and rule

Keep in mind: the larger an estimation is, the riskier it is to start developing the feature. In my experience, from 8 points and above there was a high rate of stories that weren’t delivered as expected, lacking functional or technical specifications and insights. Clearly, if you reach an estimation of 13 points:

  1. Check with your team that they fully understood the feature, its purpose, and how it works.
  2. Try *splitting the work into several stories*, which can then (hopefully) be estimated below 8 points each. And make sure the scope and objectives are clear for each of them.

The right questions

Usually, when your team doesn’t understand how complexity points work after a while, they start questioning their use or their nature, and fall back on the old paradigm. Those are the wrong questions. I’ll try to list a few of the good ones:

  • Are we estimating correctly?
  • Do we compare with our previous estimations?
  • Do we generally have enough visibility/understanding on the features?
  • Shouldn’t we try to split stories into smaller units?
  • Have we been able to stabilize our velocity yet?
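The last question can even be checked numerically: one rough convention (the 15% threshold is an arbitrary choice of mine, not a Scrum rule) is to call velocity “stable” when recent sprints vary little around their mean.

```python
from statistics import mean, pstdev

def velocity_stable(last_sprints, threshold=0.15):
    """True when the coefficient of variation of recent sprint totals
    is below the (arbitrary) threshold."""
    m = mean(last_sprints)
    return pstdev(last_sprints) / m <= threshold

print(velocity_stable([20, 21, 19, 22, 20]))  # True  -> predictable
print(velocity_stable([10, 30, 15, 40, 12]))  # False -> keep iterating
```

Until that stabilizes, any roadmap built on the velocity figure should be treated as provisional.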
Featured image by Tim McDowell
