When estimating with story points stopped feeling helpful, a team decided to experiment with #NoEstimates. Breaking stories down into smaller tasks gives them insight into their velocity and has made them more predictable. It also lets them spend less time on process and more time on delivering value.
Andre Schweighofer, a technical product owner and software engineer at Runtastic, spoke about his team’s experience with story points and the outcome of a #NoEstimates experiment at Lean Agile Exchange 2020.
Schweighofer said that their estimation process cost time and energy that was not spent on the actual user story, and their estimations weren’t all that useful to them. They stumbled into various problems on sprint planning day; their velocity was not granular enough to make meaningful predictions. This led to drawn-out discussions about the sprint scope, which ended up being based on gut feeling rather than on their story point velocity.
Based on discussions in their retrospectives, they decided to try out #NoEstimates:
We started to break down user stories into smaller tasks, mostly independent implementation steps, to clarify the work we have to do.
The benefit they get from #NoEstimates is less process for the same results, leaving them more time and energy for delivering value to their customers:
When we started basing our predictions on throughput we saw that the things that make our work easier also help make better predictions. The best example of this is breaking down the story into smaller tasks. We did it to align within the team on an implementation strategy, but it became a perfect velocity for short-term predictions. So we’re now focused on getting things done, rather than guessing when they will be done.
If their team is asked for an estimate, they can now pull up data and come up with a data-driven forecast. If the prediction does not work, they can have a discussion on their backlog growth rate, descoping, and ways to improve velocity, Schweighofer said.
InfoQ interviewed Andre Schweighofer about the problems that they had with estimation, how they experimented with #NoEstimates, the metrics that they use to do estimations, and what they have learned.
InfoQ: How did you do estimation and what problems did that bring?
Andre Schweighofer: We had a rather common approach to estimations: we used story points with the Fibonacci sequence and used planning poker to agree on an estimate during our backlog refinement.
In theory, planning poker should lead to a meaningful discussion of the user story. However, we saw that more often than not we talked about our estimation process. Creating a shared understanding of what exactly a story point is, how we use it and why turned out to be more difficult than it seemed, especially as teams are constantly changing!
Tracking your velocity comes with another problem: it focuses your team on output, not on outcome. It feels great to burn story points, increase velocity and look at your burndown chart, but you can burn 1000 story points without delivering any customer value.
InfoQ: What made you decide to experiment with #NoEstimates?
Schweighofer: We had repeatedly discussed our estimation process during our retrospectives. We tried finding smaller reference stories to create a more granular velocity. But our smallest stories weren’t really user stories anymore, which made the whole process feel unnatural. We also tried slicing stories into smaller chunks which led to the same issue.
In a retrospective, one of our team members, Natalia, brought up the #NoEstimates topic. We discussed the obvious benefits, such as not having to spend any time on the estimation process anymore, and also addressed our concerns about having inaccurate predictions. Since our sprints were hit and miss anyway, we decided to give it a try!
InfoQ: How did the experiment go?
Schweighofer: We felt instant relief at not having to go through planning poker. Just because you have a deck of cards in a meeting doesn’t make it fun. At the same time, it gave us more focus on the user story. We simply no longer had the chance to defer difficult discussions and talk about our estimation process instead.
The most surprising fact was that our predictability actually increased! We break down user stories into smaller tasks; a task is a mostly independent implementation step. Tasks became the velocity metric we had desperately been searching for, more granular than our story point velocity. But what’s more important is that we did not break down stories into tasks for the sake of having a velocity metric. We do the story breakdown because it helps us clarify the work we have to do. So the predictability is just an added benefit.
Image: A project forecast. Turquoise line: stories delivered (six stories/sprint). Blue line: project scope (growing two stories/sprint). Red line: initial project scope. By considering the backlog growth rate, we can see that after the initial scope was delivered we’re only ⅔ through the project.
Surprised by this, we applied the same approach to forecasting epic release dates. For epics, we checked our velocity in user stories per sprint and arrived at an accurate enough estimate once we also considered our backlog growth rate. The backlog growth rate is important because it helps visualise and deal with scope creep.
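The throughput-minus-growth arithmetic behind the figure can be sketched in a few lines. This is a minimal illustration, not Runtastic’s tooling; the initial scope of 24 stories is a hypothetical figure chosen to match the rates in the caption (six stories delivered per sprint, two added per sprint):

```python
def sprints_to_finish(initial_scope, throughput, growth_rate):
    """Forecast sprints until a growing backlog is fully delivered.

    throughput:  stories delivered per sprint
    growth_rate: stories added to the backlog per sprint
    """
    net_burn = throughput - growth_rate
    if net_burn <= 0:
        raise ValueError("backlog grows as fast as it is delivered; no finish date")
    return initial_scope / net_burn

# Hypothetical project: 24 initial stories, 6 delivered/sprint, 2 added/sprint.
total = sprints_to_finish(24, throughput=6, growth_rate=2)  # 6.0 sprints overall
naive = 24 / 6                                              # 4.0 sprints, ignoring growth
print(naive / total)  # 0.666... -> only 2/3 done when the initial scope is delivered
```

Ignoring the growth rate makes the naive forecast look a third shorter than the real one, which is exactly the gap between the red and blue lines in the figure.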
InfoQ: What metrics do you currently use to do estimations?
Schweighofer: We stopped using sprints altogether and instead use a kanban workflow, which made sprint forecasts irrelevant. For longer-term predictions like project release dates, we use our average of stories delivered per week and our backlog growth rate.
We also applied the same approach to our highest level of planning. We check how many epics we get done in a quarter and use this to predict the next quarter. This is then the basis for our commitment to our stakeholders.
InfoQ: What have you learned?
Schweighofer: It’s so easy to overlook the elephant in the room. However, once we started to question our deepest assumptions, we were able to transcend our recurring problems. We consistently found issues with our estimation process, but it took us some time to question the need for estimates themselves. We assumed that estimates were the best and only way to predict a release date; it’s just what agile teams do. But when we started questioning this in our retrospective, it led to great results.
We humans are just not made for accurate estimations. Instead of trying to make ourselves a little better at estimating, we can get better predictions by not estimating at all. If we use throughput-based forecasts, we change the game from people’s guesses to facts and figures: fertile soil for meaningful discussion.