Cloud computing is more than just fast self-service of virtual infrastructure. Developers and admins are looking for ways to provision and manage at scale. This InfoQ article is part of a series focused on automation tools and ideas for maintaining dynamic pools of compute resources. You can subscribe to notifications about new articles in the series here.
DevOps awareness is reaching a critical mass. Technology conferences are littered with DevOps sessions; it has overtaken “cloud” as the topic du jour in technical journals (irony noted!), and surveys show that DevOps adoption is real.
However, when assessing technology that empowers a DevOps transformation, it’s easy to focus in on the headline capabilities (“configuration management!”) and miss out on the bigger picture. How can teams shipping cloud (or on-premises) applications use the full suite of DevOps technologies to simplify delivery and management at scale?
To be sure, a DevOps mindset requires a new way of thinking in IT where there is empathy for other teams within the organization, not just for end users. Developers and operations staff take on responsibility for their domain in ALL running environments.
Teams pursue shared objectives and capture the data they need to identify where improvement is needed. Waste, bottlenecks, and inefficiencies are ruthlessly hunted down and dealt with. Without a proper investment in the cultural aspects, a DevOps initiative will achieve, at best, some isolated success.
Nonetheless, without some key enabling technologies, organizations will fail to achieve the desired increase in throughput and struggle to manage a growing infrastructure footprint. Although the individual technologies seem to change on a daily basis, they can be categorized in the following way:
- Collaboration technologies. Help teams work together more easily, regardless of location.
- Planning technologies. Provide transparency to stakeholders.
- Issue tracking technologies. Increase responsiveness and visibility.
- Monitoring technologies. Clear, shared responsibility for relevant parts of service health.
- Configuration management technologies. Enforcing desired state and consistency at scale.
- Source control. Accessible, controlled means for storing key assets.
- Development environment technologies. Accelerating development by reducing setup time and inconsistencies.
- Continuous integration technologies. Instant feedback by merging code regularly.
- Deployment technologies. Tools for building out environments and regularly updating systems.
In concert, these sets of technologies support ongoing DevOps efforts by improving delivery efficiency through transparency and automation.
Let’s take a look at each of these categories in greater depth, and name some specific technologies that apply.
Collaboration
When you hear “collaboration” at your organization, does “more meetings” jump into your mind? That doesn’t have to be the case. The goal is to have rapid, action-oriented communication across teams so that waiting is reduced, knowledge is shared, and there are fewer last-minute emergencies. Multiple tools can facilitate this, but be careful not to use the wrong tool for the job.
Dealing with an outage or a crisis? Bring diverse groups together in tools like Campfire, Slack, or IRC. By having a consistent place to go when something needs quick, collective attention, you can focus on the task at hand instead of trying to tell everyone where to go! When outside parties need to be engaged, look at web conferencing tools like GoToMeeting.

Need to make a quick decision? Instant messaging solutions like Skype and Lync fit the bill. Just don’t forget to record the decision in a persistent, visible location. Follow the lead of the team at WordPress and others and use tools like blogs and wikis to capture decisions in a searchable way.

Looking to keep a pulse on what’s going on? Cross-team awareness and empathy are huge factors in sustained DevOps success, and team chat rooms are a fantastic way to observe and engage. IRC and Campfire create a sense of community, even for distributed teams that never “see” each other. Team leaders can listen in, see what’s being accomplished, and even get an early warning of emerging issues.
Collaboration is at the heart of DevOps, so take a long look at what tools you’re giving your team to quickly engage each other.
Planning
Shared goals matter. Teams that plan together towards a common goal have a better understanding of dependencies, can “see” the bottlenecks before they emerge, and work through any conflicting priorities. Whether using something resource-oriented like Microsoft Project, or a Kanban board technology like Trello, it’s important to leverage living assets, not static plans. In Agile, there’s a focus on breaking down objectives into manageable tasks that get completed relatively quickly. Static plans that are refreshed weekly and sent out via email are not the best way to go. Instead, shared planning tools make it easy to see each other’s progress in real time, and do cross-team tasks in a collaborative way.
Issue Tracking
Do you use one system for collecting user feedback, and another to assign related work to the appropriate team? Stop that! For development and operations to work together with minimal friction, it’s important to use the same tools and not waste time copying information around.
There are mature tools like Jira and Zendesk, and new offerings like Visual Studio Online from Microsoft. All your teams should be in the same issue tracking tool, with the same context, and taking shared responsibility for customer satisfaction.
Monitoring
Can DevOps success hinge on how you approach system monitoring? If done poorly, it will certainly increase the tension between the very teams that DevOps tries to bring together. This is where developer empathy for operations staff is critical. Are developers building the solution with their customer (operations) in mind? Do they understand the production infrastructure and what operations staff must do to maintain a high quality of service on the application?
Developers: add thoughtful instrumentation to your cloud applications and emit meaningful information for operators to use when deciding if there’s a problem. Capture business events so that other stakeholders can track higher-level activity and make better decisions. Operations staff: invest in open source tools like Logstash to collect and process logs and Graphite to store metrics, and tools like Kibana to make sense of the data and get a better view of overall system health.
A well-designed monitoring solution means that operators aren’t getting randomly paged, developers aren’t stuck troubleshooting opaque issues, and cloud environments can expand and contract without raising false alarms.
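What might that developer-side instrumentation look like? Below is a minimal sketch that formats and sends a metric using Graphite’s plaintext protocol (one `path value timestamp` line per metric, sent to the Carbon listener, which defaults to TCP port 2003). The metric names are hypothetical examples, not from any particular application.

```python
import socket
import time

def format_metric(path, value, timestamp=None):
    """Build one line of Graphite's plaintext protocol: '<path> <value> <timestamp>\n'."""
    if timestamp is None:
        timestamp = int(time.time())
    return f"{path} {value} {timestamp}\n"

def send_metric(path, value, host="localhost", port=2003):
    """Send a single metric to a Carbon listener (Graphite's ingest daemon)."""
    line = format_metric(path, value)
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(line.encode("ascii"))

# Hypothetical usage: record a latency measurement and a business event.
# send_metric("shop.orders.checkout_ms", 182)
# send_metric("shop.orders.completed", 1)
```

A few lines like this, emitted at meaningful points in the application, give operators real signals to act on instead of guesswork.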
Configuration Management
This is what many people equate with the idea of DevOps. Manage cloud server consistency at scale using automated configuration enforcement. Treat infrastructure as code that can be provisioned and configured in a repeatable way. Infrastructure may refer to everything from development environments to a fleet of production cloud servers. The idea is to avoid the configuration drift that makes it so difficult for developers and system administrators to reconcile why an application works in one environment (or server), and not another.
Mature options abound in this category. Stalwarts like Puppet, Chef, and CFEngine provide both open source and commercial solutions for configuring Windows and Linux machines at scale on-premises or in the cloud. Whether using a centralized master server that stores a declarative representation of the desired machine state, or a decentralized solution where servers rely only on the local agent to apply configuration state, both Chef and Puppet change how you think about maintaining infrastructure.
A newer crop of tools has popped up in this space as well. Ansible uses an agentless approach and relies on SSH to automate configuration of Linux servers. Salt focuses on fast, push-based updates to Windows and Linux servers. Microsoft is even getting into this game with PowerShell Desired State Configuration, a new platform designed to make it easier to keep Windows servers in a consistent state.
Regardless of which platform you choose – and perform due diligence to see which fits your organization best – configuration management is a vital way that operations staff can help accelerate and standardize the development experience.
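The core idea shared by all of these tools is idempotent convergence: describe the desired state, check the actual state, and change something only when the two differ. Here is a toy sketch of that pattern for a single file resource (not how any of the named tools are implemented, just an illustration of the principle):

```python
import os

def ensure_file(path, desired_content):
    """Converge a file toward its desired state; return True only if a change was made.

    Running this twice in a row produces exactly one change -- the idempotence
    that configuration management tools rely on to enforce state without drift.
    """
    current = None
    if os.path.exists(path):
        with open(path) as f:
            current = f.read()
    if current == desired_content:
        return False  # already in the desired state; nothing to do
    with open(path, "w") as f:
        f.write(desired_content)
    return True
```

Because the function reports whether it changed anything, repeated runs across a fleet of servers are safe, and a “changed” result is itself a useful drift signal.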
Source Control
Why is source control a contributing factor to DevOps success? First off, infrastructure configurations now become a controlled asset that developers and operators alike can contribute to and pull from. Second, when you use technologies like Microsoft Team Foundation Server, CVS, or Git, you commit to an environment that many other automated components can access. Need to quickly deploy a branch of your cloud application so that the marketing team can review it? That’s simple if you’ve got the source code in a modern repository that a deployment tool can access. Trying to figure out why a new set of cloud servers has stopped behaving as expected? Go to the source code repository, look at the check-in history, and compare the previous configuration to the current one.
A strong source control strategy – coupled with configuration management tools – moves you towards treating infrastructure as code and greatly reduces the time wasted tracking down anomalies. By relying on source control for infrastructure configurations, you have confidence that what you tested is what got deployed, and you can easily trace the history of changes.
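That “compare the previous configuration to the current one” step can be as simple as a unified diff between two revisions pulled from the repository. A small sketch, using Python’s standard `difflib` (the file name and settings are made-up examples):

```python
import difflib

def config_drift(previous, current, name="server.conf"):
    """Produce a unified diff between two revisions of a configuration file --
    the same comparison you'd make by pulling both versions from source control."""
    return "".join(difflib.unified_diff(
        previous.splitlines(keepends=True),
        current.splitlines(keepends=True),
        fromfile=f"{name}@previous",
        tofile=f"{name}@current",
    ))
```

An empty result means the configurations match; anything else pinpoints exactly which setting changed between the working servers and the misbehaving ones.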
Development Environments
“It worked on my machine!” How often is this the response of development OR operations staff when a system falls flat after deployment? This is prone to happening when developers and system administrators aren’t working in mirrored environments. One way to resolve this is to ensure that everyone uses standardized images for application development and configuration testing. New tools like Vagrant make it extremely easy to leverage a single workflow for building complex virtual environments with a multitude of different providers. With a single command and a (source-controlled!) manifest, users can spin up and configure virtual machines in AWS, VMware, VirtualBox, Hyper-V, Docker, and more. Teams can version their base “boxes” in a shared location and ensure that everyone is always using the same representation of key environments.
What about scenarios where you have contract employees or simply need new team members to be up and running as quickly as possible? A new crop of browser-based development suites makes it quick and easy to build and deploy cloud applications without requiring anything on the local machine. Products like Cloud9 IDE and Codenvy give developers nearly everything they need to develop source-controlled applications that are destined for cloud endpoints.
Users expect cloud applications to be built quickly, and the right tools can ensure that development teams can start working immediately with production-quality setups.
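The “single command” workflow above really is a single command: `vagrant up`, optionally with a `--provider` flag to pick the target platform. A small wrapper sketch (assuming Vagrant is installed and a Vagrantfile exists in the working directory; `dry_run` is a convenience added here for illustration):

```python
import subprocess

def vagrant_up(provider="virtualbox", dry_run=True):
    """Build (and optionally run) the one command that brings up the
    environment described by the Vagrantfile in the current directory."""
    cmd = ["vagrant", "up", "--provider", provider]
    if not dry_run:
        subprocess.run(cmd, check=True)  # requires Vagrant on the PATH
    return cmd
```

Because the Vagrantfile itself lives in source control, every developer who runs this gets the same environment, regardless of which provider backs it.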
Continuous Integration
With continuous integration, teams are merging developer code many times per day. This agile approach prevents the last-minute integration problems that plague traditional waterfall projects. Continuous build tools are often coupled with automated test suites that verify code quality before passing the build.
Teams have a diverse set of options for doing continuous integration for cloud applications. Run technologies like TeamCity to constantly build .NET, Ruby, and Java applications and even host build agents on platforms like Amazon EC2. Like TeamCity, hosted solutions like Travis CI have integrations with source code repos like GitHub and team collaboration tools like Campfire. Some cloud platforms like CloudBees bake continuous integration tools directly into their product offering.
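At its heart, the commit stage of every CI server does the same thing: run the test suite after each merge and fail the build on a non-zero exit code. A minimal sketch of that gate (the default test command is a placeholder; substitute whatever your project uses):

```python
import subprocess

def commit_stage(test_command=("python", "-m", "pytest", "-q")):
    """Run the automated test suite exactly as a CI server would after each
    merge; a non-zero exit code fails the build and blocks the change."""
    result = subprocess.run(test_command, capture_output=True, text=True)
    return result.returncode == 0
```

Everything a CI product layers on top (agents, notifications, build history) is in service of making that pass/fail signal fast and visible to the whole team.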
Deployment
Ideally, deployments are boring. In a successful DevOps environment, application deployments are frequent, predictable, and reliable. Continuous delivery means that applications can be released to production any time you want. Continuous deployment means that every change goes immediately to production. Regardless of which one you embrace, the goal is to accelerate deployment frequency while establishing consistency.
Some continuous integration tools also do deployments, but you’ll also find specialized tools that focus on taking built code and publishing it. Octopus is a tool for automating the deployment of .NET applications to on-premises or cloud (AWS and Microsoft Azure) environments. Deployment tools like Octopus or the open source ThoughtWorks Go include rich traceability, clear audit trails, and the option to do broad or targeted deployments.
Consider additional tools that standardize deployments. For instance, Packer creates identical machine images for multiple platforms from a single source configuration. Why does this matter? Imagine creating immutable servers that NEVER get patched or re-configured. After each build, create a new gold image that reflects the current representation of the application. Use Packer to produce that gold image for VMware, VirtualBox, DigitalOcean, Amazon EC2, Google Compute Engine, and more. Then, simply replace existing server instances with the new gold image and be assured of consistency across environments.
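The replacement step in the immutable-server pattern is mechanical: find every instance still running an old image and swap it for one running the new gold image. A toy sketch of computing that plan (the instance and image identifiers are invented for illustration):

```python
def replacement_plan(fleet, new_image):
    """Given a fleet of {instance_id: image_id} and a freshly baked gold image,
    list the instances to terminate and how many replacements to launch --
    the 'replace, never patch' pattern behind immutable servers."""
    stale = [iid for iid, image in fleet.items() if image != new_image]
    return {"terminate": stale, "launch": len(stale), "image": new_image}
```

Because nothing is ever patched in place, the plan is trivially auditable: every running instance either matches the gold image or is scheduled for replacement.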
Deployment tools bring DevOps to life. Successfully implementing a consistent deployment pipeline requires close collaboration between developers and operations staff, and these tools let each party focus on what they do best.
Summary
Are there other ways to categorize the DevOps toolchain besides the categories I’ve listed here? Absolutely. Nevertheless, it’s key to take a wide look at the pool of technologies and thoughtfully consider which can help your team achieve their goal of efficiently building apps and managing them at scale. DevOps isn’t about developers learning how to manipulate hardware, or operations staff learning how to code. The tools we’ve looked at in this article help cloud developers and operations staff reduce waste, and accelerate delivery through standardization, automation, and transparency.
Thoughts? Add them to the comments below!
About the Author
Richard Seroter is the director of product management for cloud computing leader CenturyLink, a Microsoft MVP, trainer, speaker, and author of multiple books on application integration strategies. Richard maintains a regularly updated blog on topics of architecture and solution design and can be found on Twitter as @rseroter.