Key Takeaways
- For organisations using Scrum, Sprints are the building blocks of successful delivery, and it’s essential that teams deliver their Sprint objectives consistently and reliably in order to meet long-term user and business expectations.
- It’s critical to choose the right metrics: ones that not only enable you to track and review your sprint effectiveness, but also help you understand how sprint performance is driving improvement in broader delivery KPIs.
- The key questions you should be asking are: a) are we able to meet our commitments/goals reliably, b) is our work flowing smoothly throughout the sprint, c) are there any risks emerging that may impact our ability to meet our sprint goals, and d) has this sprint improved our overall delivery performance?
- Teams should also track how much value is being delivered to users, whether through abstract Value Points or nominal Business Values (whichever is most practical and agreeable to your business stakeholders).
- These sprint metrics should be ever-present, particularly the Flow metrics. They should be reviewed daily in stand-ups as well as in a team’s retrospectives.
Many experienced Agile software delivery organisations view sprint accuracy as a critical building block of software delivery dependability. With multiple teams working on complex product workstreams, delivery outcomes are far more consistent when individual teams reliably meet their own sprint delivery goals, sprint after sprint.
The question that is therefore often raised is: “Which metrics should we look at during our sprint retrospectives and daily stand-ups to help ensure that we meet our sprint goals?” So how do you identify what matters most, and what will make a material difference to your team’s success?
For Scrum teams, there are a few key areas that determine both short and long-term success. Our aim is not only to meet our current sprint goals but also to build and maintain healthy patterns of work and collaboration that will lead to future success.
The key questions to ask about your sprint performance are:
- Are we able to meet our commitments/goals reliably?
- Is our work flowing smoothly throughout the sprint?
- Are there any risks emerging that may impact our ability to meet our sprint goals?
- Has this sprint improved our overall delivery performance?
Below we will explore each of these questions further and provide you with some metrics to help you answer them.
Meeting Sprint Commitments
Dependability is critical these days as more and more organisations adopt Agile to deliver larger programmes of work and/or manage products with significant ties to business planning. The three metrics we recommend are Sprint Completion, Sprint Target Completion, and Sprint Work Added Completion.
Perhaps the most important of the three, Sprint Target Completion, looks at the scope you agreed during sprint planning and tracks how much was completed, showing you how effective the team is at establishing the right priorities and subsequently delivering them.
In our experience working with many clients, Sprint Target Completion rates lower than 80% can start to cause serious challenges with long term dependability and meeting key milestone commitments, especially in Scaled Agile environments where sprints are the fundamental building blocks to delivering a successful Programme Increment (PI). Indeed, predicting the delivery status at the end of a single PI becomes extremely difficult if multiple teams are involved and many are not consistently defining and meeting their sprint objectives.
Sprint Work Added Completion focuses only on work that was added to a sprint after it started, which is a very common challenge for scrum teams and one that presents a major risk to productivity (i.e. lost velocity) and meeting commitments. It can also signal, in some instances, a team that might be struggling to plan accurately and fully ahead of time.
Sprint Completion looks at the whole picture, regardless of whether work was planned for the sprint or added afterwards. Putting the planning mechanism aside, it helps teams gauge how well they can absorb and deliver against a dynamic backlog of work.
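To make the distinction between the three metrics concrete, here is a minimal sketch (in Python) of how they might be computed from a sprint’s tickets. The Ticket structure, field names and the choice to weight by story points are illustrative assumptions, not a reference to any particular tool’s data model.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    points: int              # story points (or 1 to simply count tickets)
    completed: bool          # done by the end of the sprint
    added_mid_sprint: bool   # pulled in after sprint planning

def completion_rates(tickets: list[Ticket]) -> dict[str, float]:
    """Return the three completion rates as percentages of story points."""
    def rate(subset: list[Ticket]) -> float:
        total = sum(t.points for t in subset)
        done = sum(t.points for t in subset if t.completed)
        return round(100 * done / total, 1) if total else 0.0

    planned = [t for t in tickets if not t.added_mid_sprint]
    added = [t for t in tickets if t.added_mid_sprint]
    return {
        "sprint_completion": rate(tickets),           # all work in the sprint
        "sprint_target_completion": rate(planned),    # only scope agreed at planning
        "sprint_work_added_completion": rate(added),  # only scope added mid-sprint
    }
```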
Whilst Velocity continues to be the most popular measure across teams, considered alone it can give a misleading picture of a team’s or organisation’s delivery capability. It remains a one-dimensional view of throughput, further complicated by the fact that story points reflect complexity and risk rather than value delivered in business terms (let alone the right value to be delivered). Velocity should be coupled with a rate-based metric like Sprint Target Completion in order to understand the utilisation of the team’s capacity and its planning effectiveness, and we always recommend, where possible, that teams also measure an abstract or nominal representation of the value they deliver (e.g. value points delivered).
Figure 1: Example sprint completion metrics
Lastly, if you use Sprint Goals to capture broader objectives above and beyond the tickets planned in the Sprint, you will want to track how consistently you can meet these goals. We always recommend using Sprint Goals as a way to distil your ambitions down to 2-3 goals. This has the desired effect of focusing a team on the bigger picture and encouraging them to be more outcome-focused, in line with delivery and business objectives.
Figure 2: Example Sprint Goals Delivered Metric
Delivering efficiently within a sprint
In sprints, you only have a couple of weeks (sometimes more) to deliver a specific scope of work, so it’s critical that:
- work flows smoothly throughout the sprint,
- bottlenecks/delays are spotted and addressed immediately, and
- feedback (from users/product owners) is provided to the team as quickly as possible, so any issues can be resolved within the sprint instead of cannibalising capacity in the next sprint.
With Sprint Flow (see below), you can track how your work is flowing throughout the Sprint and easily spot any delays or bottlenecks emerging that may potentially put your commitments at risk. One of the most common pitfalls we see with scrum teams is work being signed off at the end of the sprint. This delay backloads risk and makes it difficult for teams to address any feedback by the end of the sprint, which in turn cannibalises capacity in the next sprint and can create a nasty snowball effect.
A feature of successful sprints is a tight feedback loop between the user (often represented by the PO) and the team, with work being signed off as early as possible. Some of the most common challenges to this feedback loop are: work is slow to start at the beginning of the sprint, a backlog is forming with the QA team, and the PO has limited availability to review and sign off work. It’s essential that all members of the team, but particularly the Scrum Master, have visibility of these potential delays, which is why Sprint Flow is an incredibly powerful metric to review in daily stand-ups and retros.
Figure 3: Example Sprint Flow analysis
Many teams today still use burn-down (or burn-up) charts to track velocity trends over the course of a sprint. The challenge we see with this approach is that burn-downs and burn-ups are binary in their analysis: they only differentiate between incomplete and complete work, which often leaves a team unable to identify issues until it’s too late to react. Sprint Flow, on the other hand, gives teams a clearer view of risks and delays as they start to materialise, providing a way to address issues proactively before they throw a sprint off course.
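As a rough illustration of the kind of data behind a Sprint Flow view, the sketch below takes each ticket’s status-transition history and counts how many tickets sit in each workflow status on each day of the sprint. The input shape and status handling are assumptions; real tooling would pull this directly from the issue tracker.

```python
from collections import Counter
from datetime import date, timedelta

def daily_status_counts(status_history: dict[str, list[tuple[date, str]]],
                        sprint_start: date, sprint_end: date) -> dict[date, Counter]:
    """For each day of the sprint, count how many tickets sit in each status.

    status_history maps a ticket id to its chronologically ordered
    (date_entered, status) transitions.
    """
    snapshots: dict[date, Counter] = {}
    day = sprint_start
    while day <= sprint_end:
        counts: Counter = Counter()
        for transitions in status_history.values():
            # the ticket's latest status entered on or before this day, if any
            past = [status for entered, status in transitions if entered <= day]
            if past:
                counts[past[-1]] += 1
        snapshots[day] = counts
        day += timedelta(days=1)
    return snapshots
```

A count that keeps growing in a waiting status (e.g. ‘Awaiting QA’ or ‘Awaiting sign-off’) from one day to the next is exactly the kind of bottleneck signal described above.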
Identifying risk and mitigating it
Whilst Sprint Flow above is a great way of seeing the impact of risks on delivery, there are a number of other metrics we recommend to combat some common challenges that teams face.
Moving goalposts
Whilst we embrace changing priorities, too much change within an active sprint will compromise a team’s ability to deliver effectively (and should raise questions about the planning process). With Ticket Scope, you can track key tickets being added to or removed from a sprint; it’s a great view for both retros and stand-ups.
Figure 4: Example Sprint Scope graphic
Figure 4 shows a typical sprint with a manageable number of tickets being added and removed throughout the duration of the sprint, reflecting the agility of a mature scrum team. However, it is common for multiple tickets to be added later in the sprint with the result that ‘agility’ becomes ‘conflicting priorities and potential inefficiency’.
Unplanned bugs
New bugs/defects, particularly those from Production, can derail teams very quickly, so it’s important to track their arrival. Even if bugs are not immediately resolved, the triage process can (and often does) distract teams from their core focus of delivering sprint work.
We recommend filtering by critical bugs (e.g. P1 and P2), as well as distinguishing between bugs originating from production vs your “QA/UAT” process.
Figure 5: Example timeline of unplanned bugs
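A minimal sketch of the filtering described above might look like the following; the priority labels, origin values and dictionary shape are assumptions to be mapped onto your own bug-tracking fields.

```python
from collections import Counter
from datetime import date

CRITICAL_PRIORITIES = {"P1", "P2"}  # assumed labels; adjust to your own scheme

def unplanned_bug_summary(bugs: list[dict], sprint_start: date, sprint_end: date) -> Counter:
    """Count critical bugs raised during the sprint, split by origin.

    Each bug is assumed to look like:
    {"priority": "P1", "origin": "production", "created": date(2024, 5, 2)}
    """
    in_sprint = [
        b for b in bugs
        if sprint_start <= b["created"] <= sprint_end
        and b["priority"] in CRITICAL_PRIORITIES
    ]
    return Counter(b["origin"] for b in in_sprint)  # e.g. {"production": 3, "QA/UAT": 5}
```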
Keeping the ‘big picture’ in mind
We recommend that every organisation has a set of “North Star” metrics that they use to measure their overall delivery effectiveness and agility. These are best championed by technology leadership and give the entire delivery organisation a set of key metrics around which to align.
Sprint retrospectives provide a great opportunity to reflect on how the work delivered in that sprint has contributed to the overall progress against these ‘North Star’ metrics, especially as some of these metrics cover activities that extend beyond the time box of a sprint.
What it’s all about
Lead Time and Cycle Time remain two of the most important metrics, as they reflect one of Agile’s core values: the “early and continuous delivery of valuable software”. During its retrospective, each team should reflect on how the sprint’s deliverables have impacted the trend over time and examine where there are opportunities to improve in future sprints.
Figure 6: Example graphic showing Cycle Time variance over time
The Cycle Time metric in Figure 6 refers only to the development cycle time and therefore excludes the additional time taken to integrate, test and deploy to live. That more complete view of the end-to-end delivery process is captured by the Lead Time metric, which is ultimately the more representative measure of true agility, though it is less suited to a scrum team as it takes into account delivery stages beyond the team’s control.
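In terms of simple definitions, the sketch below shows one common way of computing the two measures from ticket timestamps. The exact start and end points (e.g. whether Cycle Time starts when work moves to ‘In Progress’, and whether Lead Time ends at release to live) vary by team, so treat these as assumptions rather than fixed definitions.

```python
from datetime import datetime

def cycle_time_days(dev_started: datetime, dev_finished: datetime) -> float:
    """Development cycle time: first moved to 'In Progress' until development is done."""
    return (dev_finished - dev_started).total_seconds() / 86_400

def lead_time_days(created: datetime, released_to_live: datetime) -> float:
    """End-to-end lead time: ticket raised until the change is live in production."""
    return (released_to_live - created).total_seconds() / 86_400
```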
Where can we improve?
Lead and Cycle Time are great metrics for examining overall delivery, but if you are looking to balance that with a view of where you are most and least efficient, Flow Efficiency is the perfect complement.
A team can see precisely where they are spending the most inactive time, e.g. ‘Awaiting QA’, ‘To Do’, ‘Awaiting sign-off’, and then agree on some focused actions to reduce this waste in future sprints. It is not uncommon for teams to have a Flow Efficiency of less than 20%, meaning that over 80% of the team’s Cycle Time is taken up with tickets in potentially avoidable ‘inactive’ statuses.
Figure 7: Example Flow Efficiency graphic
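As a sketch, assuming you can extract the time each ticket spends in each workflow status, Flow Efficiency can be computed as the active share of total cycle time. The status names and the active/inactive split below are assumptions that each team would map onto its own workflow.

```python
ACTIVE_STATUSES = {"In Progress", "In QA", "In Review"}  # assumed 'active' states

def flow_efficiency(time_in_status: dict[str, float]) -> float:
    """Percentage of total cycle time spent in 'active' statuses.

    time_in_status maps a workflow status to the days a ticket spent there,
    e.g. {"To Do": 3.0, "In Progress": 1.0, "Awaiting QA": 4.0,
          "In Review": 0.5, "Awaiting sign-off": 1.5}.
    """
    total = sum(time_in_status.values())
    active = sum(days for status, days in time_in_status.items()
                 if status in ACTIVE_STATUSES)
    return round(100 * active / total, 1) if total else 0.0
```

With the example values in the docstring, the result is 15%, i.e. the ticket spent 85% of its cycle time waiting, consistent with the sub-20% Flow Efficiency figures mentioned above.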
Bringing it all together
We believe that the metrics above should form the backbone of any team’s retrospective; however, they are not the only metrics you may want to consider. Teams will face different challenges over time and may have different self-improvement initiatives in flight during their sprints, so any metrics you are using to track these should also be included.
In terms of what you might have in your retrospective versus stand-ups, we believe the answer is pretty simple: the same!
If the metrics you choose for a retrospective reflect success for your team, then the stand-up is merely a good opportunity to check your progress against your targets so that you can ensure success, intervening if and where needed.
About the Author
Will Lytle is the Director of Customer Success at Plandek and is passionate about helping his clients build high-performing, motivated delivery teams. He works closely with them to identify their biggest delivery challenges, form meaningful objectives, and find the right metrics to drive success. Will joined Plandek in 2019 from Deloitte, where he specialised in digital transformation and leading cross-functional delivery teams. He has over 15 years of global experience spanning digital delivery, operating models, talent development, and helping businesses shape and deliver technology-enabled business transformations.