We need to become focused on outcomes and adapt our way of thinking and our processes to continuously release small changes to our products and services, argued Jeff Patton in the closing keynote at the Agile Greece Summit 2019.
Patton stated that we should pay to learn - not just build "potentially shippable software". We have to acknowledge that we fail - a lot - and build humility into our processes, he argued. Then we can build learning into our process:
If we don’t understand customers’ problems, we can do research. If we don’t know what they really want, we can pay to build prototypes. If we don’t know if they’ll keep using it, we can pay to build software to release to a limited number of customers and observe what they do. None of those things result in return on investment. Rather, they result in learning that helps us make better decisions about what we should be building at scale.
Teams’ mindsets start to focus on outcomes when we start to make them visible, said Patton.
Patton suggested talking more about outcomes. Even using the word "project" sort of sabotages our thinking, he said. We need to remember that we want projects to end, but ideally we want products that are built to last for as long as possible. The outcome we focus on is only measurable after we deliver something and it’s put into use - after things come out, he argued; that’s why it’s called "outcome".
InfoQ spoke with Jeff Patton, product design consultant and author of User Story Mapping, after his talk at the Agile Greece Summit 2019.
InfoQ: How do 21st century software development practices challenge the process assumptions that are held by many in the software industry?
Jeff Patton: It’s actually the world we live in that’s changed. And the processes - the way we work - are catching up.
The best software development process and practice is a response to the demands of our businesses and the products and services we build and sell. Think about the types of products and services you use every day; how many of them rely on a digital experience? Your phone, your watch, your car, the series you stream, the games you play. Think about services like banking, buying airline tickets, or booking hotel rooms. Think about how you find a new restaurant, or order food to be delivered. Now, think about all these bits of digital experiences you use every day and think about the rate of big releases and big changes you see. My guess is you don’t see many. Rather, you see a continuous rate of small change.
This is the new normal. Everything is digital - everything requires software and technology. We’re no longer trying to pack as much into a release as we can - rather, we’re trying to release continuous small improvements to our products and services.
It doesn’t matter what you call the process you follow, it can no longer be rooted in the 20th century model of big design followed by big delivery. This kind of thinking challenges many of the assumptions that businesses are built around.
For example, many businesses finance technology development using projects where time and scope are fixed, and the goal is to get as much done as we can in a fixed time. Projects are often long-lived - quarters or years. In contrast, contemporary product-centric orgs finance a product area for months or years without knowing up front which features they’ll get. Rather, the focus is on observable market outcomes - like customer acquisition and retention. Teams in that product area use the funding to continuously improve the product, focusing on those same business outcomes.
InfoQ: What should we do differently in designing and building software?
Jeff Patton: A missing value in lots of agile processes is humility; the acknowledgement that we’re not perfect, that we fail a lot.
We fail at predicting how much we can get done in a sprint or a release. But, even more often we fail to predict if customers will like and use our products, and if enough of them will do so to get a real return on our investment. If this were easy, all startup founders would be billionaires.
Once we build humility into our processes, we can then build learning into our process. And we need to do that learning stuff faster than we used to a decade ago - because remember, the world moves faster today. That’s why process approaches like lean startup, design thinking, lean UX, and design sprints are thriving. These are "pay to learn" processes. Pairing those with more traditional agile approaches leaves us with something called dual-track development: a process with a continuous learning track working alongside a track focused on building at scale.
InfoQ: Successful projects focus on outcomes, not outputs. How can we change our mindset and practice to move towards outcomes?
Jeff Patton: Talk about outcomes more, a lot more. It’s the project mindset and language that gets in the way.
Projects are defined using time, cost, and scope. In agile processes like Scrum, we focus on the same things. We fix time to a two-week sprint. We fix cost by fixing the team size. And because Scrum can be a little sadistic, we force the team to fix scope on itself - to make a commitment about what it can get done by the end of the sprint. At a sprint review we’ll inspect the quality of what we did - but we don’t, and usually can’t, talk about the outcome, because understanding the outcome can only happen after we ship, and often weeks or months after we ship. Like project thinking, Scrum pushes teams to focus on time, cost, and scope.
I try to change people’s mindset by reminding them of those realities. I’ll ask teams, "How will we measure successful outcomes?" The result of this is sadly more work - specifically work to instrument products so we know if people really use them, and what features get used.
I’ll also ask teams to build a simple visualization. The left-to-right axis is actual effort; the top-to-bottom axis is actual outcome. For every feature or capability they ship, they put it on this simple chart. When placing features on the effort axis, I’ll ask them to tag the things that took longer than expected. They quickly learn that big things often take longer than expected. For actual outcome I’ll ask them to use different buckets. The first is "don’t know" - everything starts there, because until things ship and users start to use them, we don’t know. But above that, things range from "awful" to "awesome." Teams start to learn how long it really takes to see an outcome. It may take weeks for something to move from "don’t know" to somewhere between "awful" and "awesome." And, they’ll also start to see how few things end up in "awesome." And finally, how the size of what we’re building has so little to do with the outcome. Often, the smallest things we build get the biggest results.
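The effort-versus-outcome chart Patton describes can be kept as a simple data exercise before anyone reaches for a charting tool. The sketch below is illustrative only - the feature names, day counts, and outcome labels are made-up examples, not anything from Patton's talk - but it shows the two pieces of bookkeeping he asks for: tagging features that took longer than expected, and grouping shipped features into outcome buckets that all start at "don't know."

```python
from dataclasses import dataclass

@dataclass
class Feature:
    """One shipped feature, tracked by effort and observed outcome."""
    name: str
    estimated_days: int
    actual_days: int
    outcome: str  # "don't know" at first; later "awful" ... "awesome"

    @property
    def over_estimate(self) -> bool:
        # Tag the things that took longer than expected.
        return self.actual_days > self.estimated_days

# Hypothetical example data.
features = [
    Feature("profile redesign", 20, 35, "don't know"),
    Feature("search filters", 5, 12, "awful"),
    Feature("one-click reorder", 2, 2, "awesome"),
]

# Group features into outcome buckets, top-to-bottom axis of the chart.
buckets: dict[str, list[str]] = {}
for f in features:
    buckets.setdefault(f.outcome, []).append(f.name)

for outcome, names in buckets.items():
    print(f"{outcome}: {', '.join(names)}")

# Left-to-right axis: which features blew past their estimates?
over = [f.name for f in features if f.over_estimate]
print("took longer than expected:", ", ".join(over))
```

Even this minimal form makes Patton's observations visible: the big item is both the one that overran its estimate and the one still stuck in "don't know," while the smallest item landed in "awesome."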