# Size Doesn't Matter: Agile Release Planning

By **Kevin Sivic**

Every organization needs some form of planning. Public companies often plan and budget quarterly. Large software companies like Google and Apple often have launch events scheduled far in advance. Other companies need to plan their launches around training cycles. Even small startups need to have some confidence that they can deliver their product before they run out of money.

The problem each of these organizations faces is how to plan effectively - and cheaply. Every team will eventually be asked “When will you deliver?” If they aren’t prepared to answer this question, they will often find themselves in a very stressful - and expensive - exercise: trying to estimate all of the remaining work, account for all of the possible risks, and commit to a delivery date. Because they know people will be disappointed when they miss that delivery date, they will likely pad it by adding 20-40% additional time for “unknowns” or unpredictable events.

This exercise is often wasteful, taking days away from the team actually building their product to produce an estimate that no one really trusts but everyone still expects to be right.

The good news here is that there is a better way!

The simple calculation that needs to be done to determine time to delivery is something like this:

Time to Deliver = Amount of Work / Delivery Rate

Let’s look a bit closer at each piece of this, working backwards.

## Delivery Rate

While the delivery rate of each individual piece of work a team does may vary widely, the distribution of those rates is generally fairly stable for a given type of work. You might not be surprised to hear that a story took anywhere from 1 to 5 days to deliver, but it would be surprising to hear that a story took 3 months. Understanding these bounds and their distribution is fairly simple once a team has been together for some time: just look at how long their stories have taken to complete in the past! The easiest way to get this information is usually throughput by card count. How many cards has the team finished each week (or sprint) over the past couple of months? Maybe your team delivers between 3 and 10 cards every week.
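Pulling the delivery-rate range out of throughput history is a one-liner in practice. A minimal sketch, assuming a made-up list of weekly card counts from the past couple of months:

```python
import statistics

# Hypothetical history: cards finished each week over the past two
# months (these counts are invented for illustration).
weekly_throughput = [5, 3, 8, 6, 4, 10, 7, 3]

low, high = min(weekly_throughput), max(weekly_throughput)
typical = statistics.median(weekly_throughput)

print(f"Delivery rate: {low} to {high} cards per week (median {typical})")
# → Delivery rate: 3 to 10 cards per week (median 5.5)
```

The min/max here gives the range used in the rest of the article; the median is just a sanity check on where the middle of the distribution sits.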

## Amount of Work

Amount of Work needs to be broken out into two factors:

1. How many things do we know now that we have to complete before we deliver?
2. How many things will each of those split into as we learn more about them?

Understanding the known size of the work is fairly straightforward: just look at how many cards are in your backlog (and how many aren’t there yet). Since we don’t want to waste a lot of time, this will likely result in some kind of a range (we have 40 items in our backlog and maybe another 5-10 that we need to deliver, so the Current Size is 45 to 50).

Finally, we need to understand the rate at which the work grows. It is typical for a card in the backlog to be split into some number of additional cards as that work is refined and we begin to understand more about it. If it’s typical for you to end up with 2 to 4 cards every time you refine one, then your split rate is 2 to 4 cards per original card.

Taking this into account makes the calculation a bit more complicated but still fairly straightforward:

Time to Deliver = (Known Things * Split Rate) / Delivery Rate

## Time to Deliver

Plugging these numbers into the equation yields:

Time to Deliver = (45 to 50 items * 2 to 4 new items per item) / 3 to 10 cards per week

So far this has been pretty simple. But in this scenario the range of possible answers is between 9 weeks and 67 weeks! It seems unlikely that we’ll encounter either the worst-case or the best-case scenario, and a range from 9 to 67 weeks is not particularly useful. This is where we have to introduce some probabilistic thinking.
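To see where that 9-to-67-week range comes from, combine the extremes of each input:

```python
# Best case: fewest known items (45), lowest split rate (2 new cards
# per item), and the fastest delivery rate (10 cards per week).
best_case = (45 * 2) / 10

# Worst case: most known items (50), highest split rate (4 new cards
# per item), and the slowest delivery rate (3 cards per week).
worst_case = (50 * 4) / 3

print(f"{best_case:.0f} to {worst_case:.0f} weeks")
# → 9 to 67 weeks
```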

If you are familiar with statistics you have probably at least heard of Monte Carlo methods. While I don’t claim to be a statistician, the idea behind these methods is fairly simple. In our context they are designed to simulate situations where there is significant uncertainty. It was once described to me by Adam Yuret as doing math with ranges.

In a Monte Carlo simulation we simulate the results of this equation hundreds or perhaps thousands of times, picking random inputs from the range of possible inputs. For example, in the first simulation we may randomly choose that we have 47 items. For the first item we will randomly choose between 2 and 4 new items. For the second item we’ll choose randomly again, and we’ll continue to do this for each item. We’ll do this over and over again for each range we want to simulate and eventually come up with a distribution of the possible outcomes. This could be visualized as a set of burn-down charts all layered on top of each other.
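The simulation described above can be sketched in a few lines of Python. This is a minimal illustration using the running example's numbers; burning cards down week by week with a freshly drawn throughput each week is one of several reasonable ways to model the delivery rate:

```python
import random

rng = random.Random(42)  # seeded so the sketch is reproducible

def simulate_once():
    # Randomly pick how many items are in the backlog today.
    known_items = rng.randint(45, 50)
    # Each item splits into 2 to 4 cards as it is refined.
    total_cards = sum(rng.randint(2, 4) for _ in range(known_items))
    # Burn the cards down, drawing a throughput of 3 to 10 cards
    # for each simulated week, until nothing remains.
    weeks = 0
    while total_cards > 0:
        total_cards -= rng.randint(3, 10)
        weeks += 1
    return weeks

# Run the simulation many times to build a distribution of outcomes.
outcomes = sorted(simulate_once() for _ in range(10_000))
print(f"min {outcomes[0]}, median {outcomes[len(outcomes) // 2]}, "
      f"max {outcomes[-1]} weeks")
```

Every simulated outcome lands somewhere inside the 9-to-67-week bounds, but the bulk of them cluster in a much narrower band, which is exactly what makes the distribution more useful than the raw range.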

This image is taken from a free tool published by Troy Magennis that will perform many of these calculations for you! Check his work out here.

Once we have this set of possible outcomes we can then decide what confidence level we want to have in our forecast. If we are OK with being wrong 15% of the time, then we might choose the date by which delivery happens in 85% of the simulated outcomes (these are common choices).
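Picking that 85% date is just reading a percentile off the sorted outcomes. A minimal sketch, using a made-up list of simulated results (a uniform draw here, purely as a stand-in for real Monte Carlo output):

```python
import random

rng = random.Random(7)
# Stand-in for real simulation results: 1,000 invented outcomes drawn
# from the plausible 9-to-67-week range.
outcomes = sorted(rng.randint(9, 67) for _ in range(1_000))

# The 85th percentile: 85% of the simulated runs finish on or before
# this many weeks, so this forecast is "wrong" about 15% of the time.
forecast = outcomes[int(0.85 * len(outcomes)) - 1]
print(f"85% of simulated runs deliver within {forecast} weeks")
```

Raising the confidence level slides the forecast further into the tail of the distribution, which is the explicit trade-off this approach makes visible: a later date in exchange for a smaller chance of being wrong.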