While "agilefall" has many well documented downsides, I've found a counterintuitive bright side to the following aspect of it: "We have a product backlog with priorities, but we start working on a release with a long list of features already committed to the business."
In this case, we have a pretty detailed view of scope as well as developer estimates of complexity. There is a lot of data to crunch.
From a pure business perspective, the key variable that matters is the release date. Dates have significant implications for the rollout across the company and the existing client base. Here are a few business reasons why:
- marketing and sales plans need to be made (since they're separate "workstreams")
- the overall costs and ROI of the program change
- coordination is needed, and resources may have to be shared with or reassigned to other internal teams
And most importantly, the date is completely non-technical. Anyone who graduated from primary school can understand it. So the nice thing is that the release dates help focus the discussion on what matters, while only needing to go into a minimal amount of technical detail.
Making the relatively liberal assumption that scope will not change, it's possible to generate a pretty decent forecast of the date.
Moreover, it's possible to model the impact of a higher or lower average future velocity on release dates. In other words, if there is a need for a plan, you can make the entire plan using the following approach:
- dependent variables: incremental release dates
- independent variable: hypothetical velocity
And then look at what happens to the release dates as you vary velocity.
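To make that concrete, here's a minimal sketch of the arithmetic in Python (the scope figure and the velocities are made up purely for illustration):

```python
from datetime import date, timedelta

remaining_points = 540   # hypothetical remaining scope, in story points
start = date.today()     # assume the remaining work starts now

# Independent variable: assumed average future velocity (points per week).
# Dependent variable: the release date that falls out of it.
for velocity in (20, 40, 60):
    weeks = remaining_points / velocity
    print(f"velocity {velocity} pts/week -> release around {start + timedelta(weeks=weeks)}")
```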
The entire plan feels like an accordion that can be pulled out or pushed in, depending on your assumptions about velocity. If velocity is high, the releases are close together. If it's low, they're spread far apart.
Here's an example of a quick model I did by exporting my data from Jira and using Excel's scenario manager:
By varying that one number and holding scope and team constant, the dates hit for each release change significantly. Let's get into how you can do it yourself.
Case Study
We've defined, broken out, and even estimated the scope for this program. It contains hundreds of stories (tasks, really) related to the delivery of a big piece of infrastructure. All of this is being tracked in a standard agile tool called Jira, with the relevant details attached to each story.
Here's the type of analysis we'll need to do as an interim step:
The unit of effort measurement here is the "Story Point", an abstract measure of the relative complexity of a piece of work. It's quite common in agile circles: it gives the delivery team a safe way to discuss and size the work without biasing it towards what stakeholders "want to hear", and it helps the team "save face" if a complex task takes a long time (even though the complexity wouldn't necessarily be obvious to a non-technical person).
The key "number behind the number" here is the velocity. It's expressed as the story points completed per unit of time. In this case: one week. Velocity tracks increments of fully complete work that has been specified, developed, checked, and signed off.
If we estimate the story points for each task up front, it's possible to observe velocity as the team works on the scope. Basically, you see performance as the change in that number over time.
On this project, we are tracking velocity in weekly increments. This gives us more data points and increases our confidence in the numbers faster than if we were calculating it on a monthly basis for example. More observations, higher confidence--statistically speaking.
In the example above, the actual team doing the work has been delivering at a rate of 27 story points per week. But what if the average future velocity was higher or lower than that?
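If you want to sanity-check that velocity figure, it's just the average of the story points fully completed each week. A quick sketch (the weekly numbers here are illustrative, not the project's actual data):

```python
# Story points fully signed off each week, e.g. read off Jira's velocity report
weekly_completed = [31, 24, 29, 22, 30, 26]

average_velocity = sum(weekly_completed) / len(weekly_completed)
print(f"average velocity: {average_velocity:.1f} points/week")   # 27.0 for these numbers
```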
Step 1: Pull out the scope sizes from Jira
For me, the fastest way to get to the real numbers is to pull up reports. At the moment, most of the modelling we are doing is based on Jira versions (fixVersion). Previously, I've done the same with Jira epics, using different reports.
Then choose the version or epic report:
And finally pull out the actual story points completed in the version:
And incomplete:
And finally the number (count) of unestimated stories, as delivery teams tend to push back on spending too much time estimating rather than doing the work:
Remember to exclude any known bugs as these will typically have 0 story points anyway, and use your judgement for other issues.
Type these into Excel (yay data entry!):
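If retyping numbers out of Jira gets tedious, the same three totals can also be pulled straight from Jira's REST search endpoint. This is only a sketch: the base URL, credentials, version name, status names, and especially the story point field ID (it's a custom field whose ID differs between Jira instances) are all assumptions you'd have to swap for your own.

```python
import requests

JIRA_URL = "https://yourcompany.atlassian.net"   # assumption: your Jira base URL
STORY_POINTS = "customfield_10016"               # assumption: story point field ID varies per instance
JQL = 'fixVersion = "Version 1" AND issuetype = Story'   # assumption: your version name

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": JQL, "fields": f"{STORY_POINTS},status", "maxResults": 500},
    auth=("you@example.com", "your-api-token"),  # assumption: basic auth with an API token
)
resp.raise_for_status()
issues = resp.json()["issues"]

def points(issue):
    return issue["fields"].get(STORY_POINTS) or 0

completed = sum(points(i) for i in issues if i["fields"]["status"]["name"] == "Done")
incomplete = sum(points(i) for i in issues if i["fields"]["status"]["name"] != "Done")
unestimated = sum(1 for i in issues
                  if i["fields"]["status"]["name"] != "Done" and points(i) == 0)

print(f"completed: {completed}, incomplete: {incomplete}, unestimated stories: {unestimated}")
```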
Step 2: Create a key assumptions section
Hah! The 'A' word.
In your spreadsheet, put in an assumed average size (in story points) for the unestimated stories, and the expected velocity for your delivery team:
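Translated out of the spreadsheet, the assumptions section boils down to two numbers and one line of arithmetic (all figures here are made up):

```python
incomplete_points    = 310   # estimated story points still to do, from Jira
unestimated_stories  = 42    # stories with no estimate yet, from Jira
avg_points_per_story = 5     # assumption: typical size of an unestimated story
assumed_velocity     = 27    # assumption: story points the team completes per week

remaining_scope = incomplete_points + unestimated_stories * avg_points_per_story
print(f"remaining scope: {remaining_scope} story points")   # 520 in this example
```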
Step 3: Fill out the remaining formulas to estimate your release date
Essentially, once we have the estimated total scope and velocity, it's just a matter of dividing the former by the latter, to generate the number of weeks of work remaining:
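In plain arithmetic (continuing with illustrative numbers), that division plus a date offset looks like this:

```python
from datetime import date, timedelta

remaining_scope  = 520   # incomplete points + unestimated stories * assumed average size
assumed_velocity = 27    # assumed story points completed per week

weeks_remaining = remaining_scope / assumed_velocity              # about 19.3 weeks
release_date = date.today() + timedelta(weeks=weeks_remaining)
print(f"{weeks_remaining:.1f} weeks of work left -> release around {release_date}")
```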
Here are the actual Excel formulas I used to generate the magic date:
Step 4: Rinse and repeat for remaining versions on your backlog
Essentially, you are holding the team, scope, and velocity constant and using your estimates to figure out when you will be ready to release each version. This is, of course, based on what you know now, which is subject to change. :)
The formulas are more or less the same throughout the versions. In this case, I'm assuming the team will finish version 1 and move immediately to working on version 2, so column I needed some adjustment.
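Here's roughly what that chaining looks like outside the spreadsheet. The per-version scope numbers are invented, but the structure is the same: each version starts the week the previous one finishes.

```python
from datetime import date, timedelta

versions = {"Version 1": 520, "Version 2": 340, "Version 3": 410}   # remaining points per version (made up)
velocity = 27                                                       # assumed points completed per week

cursor = date.today()
for name, points in versions.items():
    cursor += timedelta(weeks=points / velocity)   # the team rolls straight on to the next version
    print(f"{name}: ready around {cursor}")
```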
Step 5: Kick off the What-If analysis using Excel's Scenario Manager
First move your cursor to the variable you want to model, in this case your velocity assumption:
This is the "independent" variable in statistical terms that you want to vary, in order to see how it affect the dependent variables you care about: the completion dates.
This is probably one of Excel's most powerful hidden features. By letting you vary individual input values, it shows you impacts that reach far beyond what most humans can extrapolate intuitively.
Within there, choose the Scenario Manager, and the following comes up:
Click Add, and then use the cell you selected previously as the main variable to vary:
You can call the scenario whatever you want that has business meaning to you and your company. Then enter a value for this scenario in the next prompt:
And then do this again for a few other scenarios. In this case, I created a higher-velocity scenario at 60 and a lower one at 20. The range is this wide simply because I have no data to go on, so I'd like to explore both the optimistic and the pessimistic scenarios.
When you have all of this done and the Scenario Manager is correctly set up, click on Summary:
Based on that, Excel asks you which cells in the sheet you actually care about when you are varying scenarios (the dependent variables):
And after you hit OK, Excel hangs for a bit and returns with the following magical table:
This gives you a sense of how different average velocities will affect your delivery dates and plan. It gives you a qualitative feel for what would happen anywhere within your range, too, by looking at the outer boundaries. It's enough for you to take and present to stakeholders in terms of what the velocity number actually means, what tradeoffs might need to be adjusted, what resources added, and all of the other things which might need to be changed to influence velocity.
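If you ever want to cross-check what Excel spits out here, the whole scenario summary is a small nested loop. The scope numbers below are invented, and the velocities are just a low, middle, and high case:

```python
from datetime import date, timedelta

versions = {"Version 1": 520, "Version 2": 340, "Version 3": 410}   # remaining points per version (made up)

for velocity in (20, 40, 60):            # low, middle, and high velocity scenarios
    cursor = date.today()
    dates = []
    for points in versions.values():
        cursor += timedelta(weeks=points / velocity)
        dates.append(f"{cursor}")
    print(f"velocity {velocity}: " + ", ".join(dates))
```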
Of course, you'd want to make it a bit prettier and clarify what these things mean:
Ok, now you're talkin'! So we are finally at the velocity accordion stage. Depending on what velocity we end up getting out of the team, these are the dates we expect to hit for each incremental release. As you can see, even if all of the details of what is delivered are exactly the same, done in exactly the same order, by exactly the same people, delivering the entire scope at a velocity of 20 versus 60 pushes the final release out by more than an entire year.
In other words, detailed planning this far into the future should not include commitments to dates--even if you do have your entire backlog defined and specified and you're hoping it's "accurate".
Figuring out your velocity and how to increase it has much greater business value (it brings release dates in) than trying to specify and estimate everything up front in the utmost detail.
This is particularly true in a corporate context with large budgets and a need to "save face" in case the product isn't fully perfect before it's even discussed with prospects or existing customers. It's much better to commit to small increments of work tied to specific customer needs, as that is the shortest path to revenue in an enterprise sales environment. On top of that, focus on managing velocity rather than on detailed planning, as that will get your product out there faster. Ultimately, your customers couldn't care less how detailed your release plan is; they just want a product that addresses their needs.
Keep following the story
I'm writing a book on adaptive innovation. If you'd like to know more, ask questions, or just be notified when it's out, enter your details over here.