After the release of a revised second edition of his book "Continuous Delivery and DevOps: A Quickstart Guide" a few months ago, InfoQ reached out to author Paul Swartout to find out what the major changes in this space (and in the book) have been over the last couple of years.
Swartout shares his view on cultural challenges to DevOps adoption and how the rise of mobile and microservices impacts Continuous Delivery approaches, among other topics.
InfoQ: Have the Continuous Delivery (CD) and DevOps practices and principles evolved considerably since the first edition of your book, or are they the same but people are more aware of them?
Paul: I like to think that the underlying CD and DevOps practices and principles are pretty much as they were a few years ago: reducing the headache and complexity of shipping and supporting quality software. What has moved on is the mainstream recognition of both. There are now many commercial businesses specialising in and selling products and services related to CD and DevOps - some more true to the principles of CD and DevOps than others, I would add. There is an underlying and worrying trend of people who believe they understand CD and DevOps but haven't really bothered to understand the context or the problems either was trying to address - good examples would be the growing number of businesses setting up dedicated and isolated DevOps teams, or recruiters hiring so-called DevOps engineers (just because someone has Puppet experience doesn't mean they understand DevOps culture). Ultimately, if CD and DevOps are being discussed in the boardrooms or the executive bathrooms of big business and government departments, then it is inevitable that commoditization will follow - just look at the multitude of consultancies and software vendors who "specialise" in scrum. Personally, I do worry that the original message is getting lost or at least watered down - especially in relation to the DevOps movement and culture - but I'm also glad to see the recognition that CD and DevOps are things worth investing in.
InfoQ: How did that reflect in the new edition? What are the major changes compared to the first edition?
Paul: With the second edition I decided to apply more focus and attention to the human factors of CD and DevOps and how the culture and underlying behaviours can benefit or hinder adoption. When it comes down to it, the vast majority of the articles you read, presentations you see or conversations you have on the subject of CD and DevOps tend to end up focussing on the tooling and technical aspects: Puppet vs Chef, Docker vs Rocket, cloud vs hosted, etc. What I don't hear a vast amount of is "how should people work together in the spirit of CD and DevOps?" or, more importantly, "how should we change our culture and behaviours to be able to deliver software our customers want when they want it?". I wanted to open people's eyes to the softer side of what makes CD and DevOps successful and highlight some of the pitfalls that come from cutting corners or ignoring the harder aspects of adoption.
InfoQ: So you think non-unicorn companies adopting DevOps still fail to understand the need for a cultural change? How did you approach the topic in your book considering it's such a context-dependent topic?
Paul: As stated previously, we're now getting DevOps discussed at executive levels of established organisations; however, I fear the understanding of what that actually means is still shrouded in techno-babble. We have a growing number of tech-savvy people in powerful decision-making positions who do seem to understand (for example the USCIS CIO Mark Schwartz), but I fear the numbers are still far too low and the growth far too slow. Just like any other business change agenda, you need people at the top to understand what you're trying to do, if for nothing more than to buy some time to allow change to happen. With that in mind, I purposefully stripped back the content and language of the second edition to (hopefully) give the reader an insight into how important the human factors are when a business is considering, or is in the process of, adopting DevOps (and CD) ways of working. The downside of this approach is that some may see the second edition as another watered-down "management manual" - something I was mindful of - however I like to think I've struck a balance that provides useful information and insight for technical and non-technical readers alike. The way I see it, if you're an engineer who is frustrated that management continually fails to understand how painful it is for you to ship software, then changing your approach so that you can "speak their language" may help. Hopefully this is something the second edition can help with.
InfoQ: Has the commoditization of cloud computing in the last years helped streamline automated provisioning and continuous delivery practices?
Paul: Very much so. The cloud market has exploded, and not just for the west-coast tech companies. Until recently the cloud was seen as something the hipster tech upstarts used to kickstart their VC-funded businesses; well-established businesses wouldn't even consider storing their commercially sensitive application data on Amazon's spare hardware. With the likes of Microsoft, HP and Oracle applying vast budgets to their cloud solutions - and, more importantly, providing workable SLAs - bricks-and-mortar businesses and institutions are now seriously considering (or have started) using cloud solutions. As the market has grown, so has the price war between the vendors - ultimately leading to more cloud for your money. Alongside this is the growth and widespread commercial recognition of various CD tools and providers (Puppet, Chef and the like), which means you can purchase a secure, maintainable and reliable cloud solution and have the tooling to deploy to it relatively easily - compared to the situation 4 or 5 years ago. All established businesses need to do now is work out how to run their 15-year-old legacy codebase on a distributed, flexible, multi-site platform - not an easy challenge, but one that can be overcome with some insight into CD and DevOps practices and approaches.
InfoQ: In the last couple of years we've also witnessed an exponential growth in the mobile development space, along with a multiplication of tools and workflows to cope with it. Yet we still have app stores and the need to support a myriad of application versions in production simultaneously. How do you see the adoption and evolution of continuous delivery in the mobile space compared to traditional web-based applications?
Paul: This is something I've been toying with for a while. I approached it with a simple question: "can you use CD and DevOps techniques and approaches to deliver mobile apps?". I came to the conclusion that you can - sort of. The main drawback, compared to shipping web-based solutions, is that you can't deliver your app many times per day to the "production" environment - as this is actually someone's mobile device. You also have no control over it (e.g. OS version, storage, memory, network speed, etc.). As you say, there is also a myriad of tools and technologies available for development and deployment in the mobile app space, which adds more options and ultimately more complexity. Some tools can help reduce that complexity; for example, you could use the same JavaScript codebase for your web and mobile solutions and use tools (such as PhoneGap) to "generate" native mobile apps, but this can produce below-par mobile user experiences. I prefer to keep things simple, so I suggest you remove the technology debate, go back to basics and look to apply some simple CD and DevOps best practice. For example: deliver small incremental chunks; ship regularly (weekly / bi-weekly); don't ignore NFRs (e.g. don't rely on network availability); build comprehensive diagnostics into your app and analyse the data as part of the development cycle feedback loop; ensure that the developers writing the code are the ones building, shipping, monitoring and supporting it; if you also have a backend platform, standardise (where possible) on the deployment and monitoring toolset across both mobile and platform so that you have a single view of all moving parts; and ensure you are safe to fail. I'm sure your readers can think of many more examples, but in essence delivering a mobile app is no different from delivering any other software. You just have to be a little more creative about how you approach it and be mindful of the hurdles.
InfoQ: And what is the impact of mobile development in terms of cultural and organizational needs? What are the pros and cons of organizing teams by platform type (mobile vs web vs legacy)?
Paul: In an ideal world the answer to both would be "negligible". However, we don't all live in an ideal world. Unless you are using the same tech stack across all of your clients (mobile, PC, web) as well as the backend, you will naturally gravitate toward having team silos specialising in the technologies you're using. This isn't necessarily a bad thing, but it can encourage and reinforce the "us and them" behaviours within the organisational culture which, if left unchecked, can turn into "Lord of the Flies" moments. Ultimately, engineering teams are employed to provide solutions to business problems. If your organisational structure is built around your technical platforms and your engineering teams are built around your organisational structure, the solutions they provide will in part solve business problems but mostly will try to compensate for organisational problems - this is by no means a new problem. When it comes to delivering value quickly and consistently, there's nothing better than having everyone you need working together (say in the form of a cross-functional team) - as long as they have a consistent flow of work that they can all contribute to. The potential problem, however, is ensuring the work keeps everyone busy. Unless you have multi-skilled engineers (or engineers who are willing to simply muck in), things can become uneven and inefficient. Imagine what happens when you give mostly iOS client work to a team consisting of Clojure, JavaScript and C++ engineers. Some team members will be very busy indeed. One way around this problem is to have the best of both worlds: organise the engineering department based on technical expertise but swarm around business problems as and when they need solving - short-lived cross-functional teams, if you will. That way you can quickly deliver value but keep an overall structure that allows for a mixture of specialist and generalist engineers.
Ultimately you want the right people, in the right place at the right time all focussed on solving the same problem - regardless of technical requirements or organisational boundaries. Of course you'll also need to take into account the cadence alignment problem mentioned above whereby the backend and web engineers can deliver to "production" with relative ease when compared to the mobile client engineers.
InfoQ: Microservices are gaining a lot of momentum. Do you see them as the next paradigm shift in software development?
Paul: As it becomes relatively cheaper and easier to ship and host code, the viability of microservices comes into its own. This approach is also a natural and logical progression - especially in the web-services space. Breaking down complexity into small, focussed and manageable chunks of code allows for greater flexibility and ultimately greater efficiency. For an engineer, the complexity of writing, maintaining and supporting microservices is vastly reduced, as the scope of what the service does, what it's for and what function it serves is far simpler and more obvious to understand. Having discrete functionality in one place also provides simple scaling opportunities; for example, if you have a large influx of user activity at certain points in the day, then spinning up a few additional UserSessionManagement service instances (to grab a name out of the air) for a few hours may be in order. Yes, you may end up with more moving parts, but if you have the tools, ways of working and maturity, you will soon reap the benefits. As with any disruptive technology, it's the young upstarts and tech-based companies that have driven the adoption forward thus far. It may take longer for the old guard to catch up, but they will.
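The scaling opportunity Paul describes can be reduced to a very simple rule. As a minimal sketch (the service name, capacities and thresholds below are hypothetical illustrations, not from the interview):

```python
import math

def desired_replicas(requests_per_sec: float, capacity_per_replica: float,
                     min_replicas: int = 2, max_replicas: int = 10) -> int:
    """Return how many instances of a service are needed for the current
    load, keeping a floor of min_replicas for redundancy and a ceiling of
    max_replicas to cap cost."""
    needed = math.ceil(requests_per_sec / capacity_per_replica)
    return max(min_replicas, min(max_replicas, needed))

# During a daily peak, the (hypothetical) UserSessionManagement service
# scales out; off-peak it drops back to the redundancy floor.
print(desired_replicas(requests_per_sec=900, capacity_per_replica=100))  # 9
print(desired_replicas(requests_per_sec=50, capacity_per_replica=100))   # 2
```

In practice a cloud provider's autoscaler would apply a rule like this for you; the point is that scaling one discrete piece of functionality is a far simpler decision than scaling a monolith.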
InfoQ: Martin Fowler's popular "you must be this tall to use microservices" blog post warns about the prerequisites in terms of continuous delivery and DevOps maturity. Do you agree and why?
Paul: As Martin is a major thought leader in all things software delivery, what he says tends to carry some weight. That said, I would suggest that the prerequisites, taken at face value, may be perceived as a barrier to entry for some. I agree that established organisations do need to do some prep before going all out toward adopting a microservices approach, but as with any new approach you can start small and work up from there. Maybe start with a small, low-value, low-risk feature (say, scraping Twitter for tweets related to a new product launch), build out a simplistic pipeline and start deploying. Plug in some metrics and analytics, build some dashboards and start analysing the stats. Refine the pipeline, start deploying to a cloud provider and add metrics from their service. And so on. When you think about it, microservices actually give you that ability to start small and build up - you don't necessarily need to move your entire platform to microservices overnight. If you take your time, plan your approach and keep refining, you end up with a good case study on which to build your wider architecture strategy. In the end, I agree that you do need to be quite tall if you decide to use microservices exclusively, but even the tallest among us started out short and grew.
InfoQ: Fowler also mentions the need for adequate monitoring practices when you are running a non-trivial set of microservices. Monitoring has traditionally been considered a purely post-deploy Ops activity. Do you think monitoring should be integrated earlier in the delivery pipeline and why?
Paul: Forgive me if I go off at a slight tangent here. TDD is a well-established practice which teaches us that one of the best ways to deliver quality software is to think about how we will test our code before the code is cut, rather than as an afterthought. BDD follows a similar approach. That being the case, why should you only think about how you monitor software after it's been shipped? Building metrics, analytics and diagnostics into your system design and architecture from the get-go is logical and makes more sense than not doing it. Even adding this functionality to an existing codebase shouldn't be that arduous. I admit that adding monitoring post-deployment does provide some information and is better than nothing, but the information can be limited and doesn't paint the whole picture. For example, it's useful to know that the CPU is running hot, but unless you can see why it's happening and what portion of code is eating up the CPU clock cycles, you don't have much to go on. Yes, there may be some additional cost in terms of initial development effort, tooling setup (although most open source tools give you more than you'll usually need) and knowledge acquisition, but these will be far outweighed by the ability for engineers to see how software is behaving when it's being used by real people or simply being stress-tested by a performance QA engineer. One thing to add here: building metrics and monitoring into software shouldn't be seen as exclusive to microservices - this best practice should apply to any software.
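Building diagnostics in from the get-go can be as lightweight as instrumenting functions as you write them. A minimal sketch in Python (in a real system the counters would be exported to a monitoring backend such as Graphite or Prometheus rather than held in an in-process dict, and the function names here are hypothetical):

```python
import time
from collections import defaultdict
from functools import wraps

# In-process metrics store: per-function call counts and cumulative time.
metrics = defaultdict(lambda: {"calls": 0, "total_secs": 0.0})

def timed(fn):
    """Record how often a function runs and how long it takes, so a hot
    CPU can be traced back to the portion of code responsible."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            m = metrics[fn.__name__]
            m["calls"] += 1
            m["total_secs"] += time.perf_counter() - start
    return wrapper

@timed
def render_report(rows):
    # Stand-in for some real, potentially expensive work.
    return [str(r).upper() for r in rows]

render_report(range(1000))
print(metrics["render_report"]["calls"])  # 1
```

Because the instrumentation travels with the code, the same numbers are available to a developer on their laptop, a performance QA engineer running a stress test, and whoever is watching production.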
InfoQ: If you had a crystal ball, how would you predict the evolution in this space in the next couple of years?
Paul: Although it's hard to predict (and some would say foolhardy to attempt to do so), I think there are still a few more miles left in the CD and DevOps journey. The emergence of DevOps as a commercially viable way of working is still relatively new, and the businesses using this approach are still in the minority, which means there's still massive growth opportunity for the community. The CD and DevOps tools and services sector is still in its infancy; however, with the big boys starting to take an interest, this sector will grow to be something quite substantial and we'll start to see greater choice as new players enter the market. In terms of untapped potential, I think we'll see greater emphasis on CD pipeline tools for the ever-growing cloud market, mobile apps and the weirdly named internet of things. I would also envisage closer integration with workflow and collaboration tools (e.g. cards flow across the Trello board as software goes through the CD pipeline, or engineers use Slack to collaboratively kick off a deployment). The natural progression for DevOps would be to extend these ways of working outside of the data centre and into the mobile app space. Alongside the recent "how to make staff happy" leadership movement, I envisage we'll start to see more consultancies specialising in DevOps adoption with greater emphasis on leadership, culture and human factors - pretty much as happened when agile became commercially recognised. All in all, I like to think that the future of CD and DevOps is looking pretty peachy - all I hope is that the original principles and messages don't get lost along the way, and that whatever comes next ultimately makes it easier for engineers to deliver quality solutions to business problems.
About the Interviewee
Paul Swartout is a husband, father, dog owner, software development manager and author of "Continuous Delivery and DevOps: A Quickstart Guide - Second Edition".