Key takeaways

- DevOps is any optimization, in tools or culture, that streamlines the flow of business value between developers and end users.
- Enterprise DevOps tool chains are typically assembled from fragmented, best-of-breed point solutions that automate delivery well but have no awareness of the business value they move.
- VersionOne’s Continuum tracks value from the moment code is committed, correlating commits, build artifacts, and deployments back to user stories.
- Measuring how long value waits at each stage of the value stream exposes bottlenecks and supports strategic decisions about where to invest.
- Connecting delivery data across the value stream opens the door to objective measures of deployment risk based on fragility, complexity, and test coverage.
At the recent Agile 2016 conference, InfoQ spoke to Dennis Ehle, vice president of DevOps strategy at VersionOne, about the evolution of DevOps, both as a way of thinking and in the toolsets that support the practices, and about the importance of having visibility into how value is delivered across the DevOps pipeline.
InfoQ: There are many definitions and descriptions of DevOps – what do you mean by the term?
Dennis: DevOps is definitely a term that’s been grabbed by lots of marketing organizations, and it’s been redefined to mean just about anything. The definition that I like is that it’s simply any optimization that can streamline the flow of business value between developers and end users. It leverages tools, technology, and culture. It takes the concepts and ideals of agile and moves them all the way through to delivering software.
InfoQ: So what’s so special about this?
Dennis: At VersionOne, I think we have a really unique approach to DevOps. The DevOps community has done a great job of creating all kinds of automation. Most enterprises have figured out how to automate just about everything. But the DevOps community in general still organizes around fragmented tools. So a DevOps tool chain, for example, might have 15 different automation tools that all work together to create a larger pipeline.
The problem is that none of those tools have any awareness of the value that they’re moving. They’re good at streamlining the process of taking a bit of committed code and processing, validating, and deploying it. These tools work together to orchestrate delivery. But what they don’t do well is describe what value is moving and how that value moves from step to step in the process.
Once we convert our ideas into code and we build binaries, we essentially lose visibility into the flow of business value. Our approach to DevOps is to leverage the tools and technology that are already in place and then track value as it moves through them, without interrupting the DevOps tool stack. This provides visibility to all stakeholders all the way through to the production deployment.
InfoQ: So we’re taking a user story. We’re doing all of the stuff in development to get it into a build. Now what?
Dennis: Right. I think we do a really good job of visualizing the flow of value from big ideas and big themes, and then decomposing them into epics or features. And then ultimately, we can even decompose them into user stories. So we can visualize the flow of value very well from a strategic level through development with the agile lifecycle management solutions that are available.
VersionOne’s DevOps solution, called Continuum™, starts tracking value when developers check new code in. Each new commit can be correlated back to a user story, so we know this commit ties back to this user story. Then we connect with build automation tools like Jenkins, Travis CI, or Maven. We monitor the build process and correlate its output, which is a set of binary artifacts, back to those stories. So we can now articulate, or translate, artifacts into business value.
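To make that correlation concrete, here is a minimal sketch in Python of the kind of commit-to-story mapping Dennis describes. The S-1234 story-ID convention, the function names, and the data are hypothetical illustrations, not Continuum’s actual implementation:

```python
import re

# Hypothetical convention: developers reference a story ID such as "S-1234"
# somewhere in the commit message.
STORY_ID_PATTERN = re.compile(r"\b(S-\d+)\b")

def story_ids_for_commit(commit_message):
    """Extract the user-story IDs referenced by a single commit."""
    return set(STORY_ID_PATTERN.findall(commit_message))

def stories_in_build(commit_messages):
    """A build artifact carries every commit that went into it, so the
    artifact's business value is the union of the stories they reference."""
    stories = set()
    for message in commit_messages:
        stories |= story_ids_for_commit(message)
    return stories

# Two commits flow into one Jenkins build; the resulting binary artifact
# can now be described in terms of the user stories it contains.
artifact_stories = stories_in_build([
    "S-1234: add login form validation",
    "fix flaky test for S-1234; start S-1300 refactor",
])
print(artifact_stories)  # {'S-1234', 'S-1300'} (set order may vary)
```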
The third thing that we do is connect to the deployment automation. So when a tool like Octopus or UrbanCode Deploy sends artifacts into an environment, we can track the business value. We know this value is now in this environment. We can answer questions like, “I want to test this user story, where is it?” or “Has this user story been deployed to production yet?” So by connecting all this DevOps data together, we’re able to track the flow of value all the way through to the end user.
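The resulting lookup can be pictured as a simple registry that records which stories each deployment carried into each environment. Again, this is a hypothetical sketch rather than the product’s actual API:

```python
# Hypothetical registry mapping each environment to the story IDs that the
# deployment tooling has reported as shipped there.
deployments = {}

def record_deployment(environment, artifact_stories):
    """Called when a tool like Octopus or UrbanCode Deploy pushes an
    artifact (and therefore its stories) into an environment."""
    deployments.setdefault(environment, set()).update(artifact_stories)

def where_is(story_id):
    """Answer 'I want to test this user story, where is it?'"""
    return [env for env, stories in deployments.items() if story_id in stories]

record_deployment("qa", {"S-1234", "S-1300"})
record_deployment("production", {"S-1234"})

print(where_is("S-1300"))                  # ['qa']
print("production" in where_is("S-1234"))  # True: the story has shipped
```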
InfoQ: That means we can also start to put some metrics in place.
Dennis: Absolutely, yes! There are some really interesting metrics that come out of that process. It’s really helpful, as it turns out, to understand how long value gets stopped at every phase of delivery. How long does it spend in a testing phase? Or how much time does it spend after it’s achieved the definition of done? So essentially, it’s done, tested, and validated. How much time is spent between then and production deployment? In some cases, it could be six weeks between the definition of done, closing out the sprint, and when it actually makes it to the hands of the end users.
So being able to track metrics at every stage of a value stream map, and to get precise information about where value gets stuck at a team level, makes it much easier to make strategic decisions about where you want to invest. Now, you have a better idea of where the opportunities are and where the bottlenecks are. Ultimately, it gives organizations an opportunity to be agile and iteratively streamline the delivery process.
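As an illustration of the metrics Dennis mentions, phase wait times fall out naturally once each story carries timestamped phase transitions. The event data and helper below are invented for the example:

```python
from datetime import datetime

# Hypothetical event log for one story: (phase entered, timestamp), in order.
events = [
    ("testing",    datetime(2016, 8, 1)),
    ("done",       datetime(2016, 8, 3)),
    ("production", datetime(2016, 9, 14)),
]

def phase_durations(events):
    """Time spent in each phase is the gap until the next transition."""
    return {
        phase: left - entered
        for (phase, entered), (_, left) in zip(events, events[1:])
    }

for phase, duration in phase_durations(events).items():
    print(f"{phase}: {duration.days} days")
# testing: 2 days
# done: 42 days  <- the six-week gap between 'done' and production
```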
InfoQ: The tools ecosystem seems very fragmented, everybody is implementing this, and there is clearly no one-size-fits-all solution. Isn’t it dangerous that we’ve lost sight of value in there?
Dennis: I think that’s a great question. I think if you look back at where DevOps got started, it was in organizations where the amount of time between the initial build and deployment to end users was relatively short. Since the amount of time that value was in this delivery phase was short, wait time didn’t pose a lot of risk. Teams were able to deploy very quickly. As we’ve gotten more into the enterprise level, the amount of time between the first commit and deployment to the end users has gotten much longer – and so has DevOps wait time.
When our DevOps tools were originally built, there wasn’t a need to track business value because it was moving so quickly. In the enterprise, it’s not uncommon for a team to build a thousand times per production deployment. That means we’re now generating lots of artifacts, and each one is unique and has a different combination of user stories and value. The ability to track business value through so many artifacts and a complex enterprise delivery value stream is a critical first step toward reducing DevOps wait time.
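A toy example of what that tracking buys you: once every artifact maps to a story set, diffing a candidate build against what production already runs shows exactly which value is still waiting to ship. All story IDs here are made up:

```python
# Hypothetical story sets for two of the ~1000 artifacts built per release.
# Each artifact carries its own combination of user stories, so diffing a
# candidate against production exposes the value still in the pipeline.
production_stories = {"S-1100", "S-1234"}
candidate_stories  = {"S-1100", "S-1234", "S-1300", "S-1305"}

waiting_to_ship = candidate_stories - production_stories
print(waiting_to_ship)  # {'S-1300', 'S-1305'} (set order may vary)
```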
InfoQ: So why are all the tools fragmented?
Dennis: I think the culture of the DevOps community seven to ten years ago was very motivated toward open source, and open source tools are almost, by definition, point solutions. A lot of the automation solutions, even the commercial ones, have been designed to solve a very specific or narrow problem. So there are tools that solve the deployment problem, tools that solve the configuration problem, tools that solve testing problems, and so on. There is no such thing as a standard DevOps tool chain; they’re like snowflakes. Developers gravitate toward their tool of choice, and the DevOps culture encourages experimentation. Enterprises haven’t bought into the giant, does-everything kind of tool. Instead, they’re choosing very specific point solutions and then weaving them all together to generate efficiencies across the value stream.
I think this approach has been highly effective. You can have the best-of-breed at any point in the stream. The downside is none of those tools have an understanding of the big picture process, and they don’t have an understanding of what happened before in the value stream or what’s going to happen next. And most importantly, the tools have little or no idea what value they just processed.
InfoQ: So are all the tools changing to become more consolidated or is this going to be the state?
Dennis: I think there are certainly some very large enterprise vendors pushing toward an end-to-end solution, but the community still has a preference for best-of-breed point solutions at every step in the tool chain, and I don’t see that changing in the near future. Historically, consolidation always happens eventually. However, the heterogeneous nature of each deployment team and their diverse requirements makes any large-scale consolidation in DevOps unlikely anytime soon. In the meantime, we’ve got to get much better at leveraging the data generated by these fragmented tools, across the entire value stream.
InfoQ: You’ve told us about what VersionOne is doing in terms of looking across the value stream and looking into the tools and providing that visibility upwards. What’s happening with your product in the future?
Dennis: We’re going to continue to connect data across the value stream to better understand how we’re building, validating, and deploying new business value. One of the things that I’m most interested in is how we can leverage this data to generate some very intelligent and objective measures of deployment risk.
If we know the precise contents of each deployment, we can start to assess riskiness in an objective way. For example, analytics can identify what bits of code are more fragile or less fragile. Once the more fragile code has been identified, we can extend reporting to quantify the fragility of each deployment. To achieve even higher fidelity, we can also combine cyclomatic complexity data gathered in other DevOps tools. Combining all this important DevOps data will help us measure test coverage, code complexity, and technical debt across deployments, features, and epics.
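As a purely illustrative sketch, a deployment risk score might weight each changed file by its historical failure rate and cyclomatic complexity. The scoring model, weights, and data below are invented for the example, not a shipping VersionOne feature:

```python
# Hypothetical per-file signals gathered from other DevOps tools:
# historical failure rate (fragility) and cyclomatic complexity.
file_signals = {
    "billing.py": {"failure_rate": 0.30, "complexity": 25},
    "login.py":   {"failure_rate": 0.05, "complexity": 8},
}

def deployment_risk(changed_files, signals, w_fragility=0.7, w_complexity=0.3):
    """Toy risk score: a weighted sum of fragility and normalized complexity
    over the files this deployment touches. The weights are arbitrary."""
    score = 0.0
    for path in changed_files:
        s = signals.get(path, {"failure_rate": 0.0, "complexity": 0})
        score += w_fragility * s["failure_rate"] + w_complexity * s["complexity"] / 50
    return score

print(round(deployment_risk(["billing.py", "login.py"], file_signals), 2))  # 0.44
```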
About the Interviewee
Dennis Ehle is the vice president of DevOps strategy at VersionOne. He is a pioneer and thought leader in continuous delivery automation and agile delivery methodologies. With more than 14 years of experience in the technology automation space, he continues to develop innovative ways to simplify DevOps and software delivery. Twitter: @DennisEhle