Moving applications to the cloud has become commonplace - not only for big players, but also for smaller companies that keep an eye on flexibility and resource utilization. In his presentation "Implementing Infrastructure as Code" at QCon New York 2016, Kief Morris, cloud practice lead at ThoughtWorks, shares key principles and recommendations on how to leverage cloud-based infrastructure.
To set the context for his talk, Morris first elaborates on the motivation for, and the challenges of, using cloud infrastructure. Usually, companies focus on speed - getting a minimum viable product to market quickly and then improving it over time. Cloud technology lowers the barriers for doing so, but risks remain in areas like security, performance, and stability that one has to bear in mind.
The overall goal must be to go at speed but still remain safe. Fix everything that might impact quality immediately - not after experiencing the first outages - to end up with an overall higher-quality product.
Morris goes on to point out that creating servers with a single mouse click is not where it ends. Usually this leads to a large fleet of servers and, often, configuration drift. Since inconsistent servers are hard to maintain automatically, maintenance of those machines is likely to be done manually, which leads to even more inconsistency.
This is where infrastructure as code is introduced as a possible solution to the problem and a way to create well-defined servers: using tools like Puppet or Chef in a mode of "unattended automation". These tools run on a schedule, with no room for manual changes. Even small things have to be fixed in the underlying templates and configuration, eventually producing a landscape of immutable or containerized servers with no manual changes at all. This concept should also be leveraged when promoting servers between environments: reuse as much of the templates and configuration between stages as possible.
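To illustrate the "desired state" idea behind such tools, here is a minimal sketch in plain Python (not taken from the talk, and not Puppet or Chef code): configuration is described declaratively and applied idempotently, so repeated unattended runs converge a server to the same state. The package name, file path, and template content are hypothetical examples.

```python
import subprocess
from pathlib import Path

# Hypothetical desired state for a web server
DESIRED_PACKAGES = ["nginx"]
DESIRED_CONFIG = Path("/etc/nginx/nginx.conf")
CONFIG_TEMPLATE = "worker_processes auto;\n"  # simplified stand-in for a real template


def ensure_packages(packages):
    """Install packages only if they are missing (idempotent)."""
    for pkg in packages:
        installed = subprocess.run(["dpkg", "-s", pkg], capture_output=True).returncode == 0
        if not installed:
            subprocess.run(["apt-get", "install", "-y", pkg], check=True)


def ensure_config(path, content):
    """Rewrite the config file only when it differs from the template."""
    if not path.exists() or path.read_text() != content:
        path.write_text(content)


if __name__ == "__main__":
    ensure_packages(DESIRED_PACKAGES)
    ensure_config(DESIRED_CONFIG, CONFIG_TEMPLATE)
```

Because every run produces the same result, such a script can be executed on a schedule without human intervention, which is exactly what "unattended automation" requires.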
While automating operations saves a lot of manual effort, Morris also points out how quality assurance benefits from a continuous delivery pipeline. With automated delivery, every change is tested in various stages before it reaches production, for example by including pipeline steps that verify the correctness of server configurations or ensure that certain security requirements are met. Governance processes are thus effectively enforced through automation.
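A pipeline verification step of this kind could be as simple as the following sketch, which assumes a hypothetical security requirement (SSH password login disabled) and a hypothetical web server on port 80; the checks and paths are illustrative, not from the presentation.

```python
import socket
from pathlib import Path


def test_ssh_password_login_disabled():
    """Assumed security requirement: password authentication must be off."""
    sshd_config = Path("/etc/ssh/sshd_config").read_text()
    assert "PasswordAuthentication no" in sshd_config


def test_web_server_listening():
    """The service configured by the provisioning run must be reachable."""
    with socket.create_connection(("localhost", 80), timeout=5):
        pass  # connection succeeded


if __name__ == "__main__":
    test_ssh_password_login_disabled()
    test_web_server_listening()
    print("server verification passed")
```

If such checks run against every newly provisioned server before promotion, a misconfigured machine never makes it to production.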
Morris does not only mention benefits; his presentation also covers drawbacks and pitfalls. As in any other system maintained or developed by multiple teams, there will be integration points, bottlenecks, and dependencies to keep track of. For example, you will need to provide test instances for dependent services and leverage consumer- or contract-driven testing to ensure that all services cooperate in the desired way. If parts of the templates or configuration are used like shared libraries, they too need to be tested thoroughly before being distributed to other teams.
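A consumer-driven contract check can likewise be expressed as a small automated test. The sketch below assumes a hypothetical dependent service exposing a GET /status endpoint whose consumers rely on certain response fields; the URL and field names are illustrative only.

```python
import json
import urllib.request

# Fields the consumers of the service depend on (the "contract")
CONTRACT = {"required_fields": ["version", "status"]}
# Hypothetical test instance provided by the owning team
TEST_INSTANCE_URL = "http://test-service.internal/status"


def check_contract(url, contract):
    """Fail if the provider's response no longer satisfies the consumer contract."""
    with urllib.request.urlopen(url, timeout=5) as response:
        payload = json.load(response)
    missing = [field for field in contract["required_fields"] if field not in payload]
    if missing:
        raise AssertionError(f"provider broke the contract, missing fields: {missing}")


if __name__ == "__main__":
    check_contract(TEST_INSTANCE_URL, CONTRACT)
    print("contract satisfied")
```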
Morris closes by once more highlighting the fundamental benefits to be gained from an automated DevOps setup:
- provision and evolve quickly
- fix bugs effortlessly
- keep servers consistent
- focus on the value created
Please note that most QCon presentations will be made available for free on InfoQ in the weeks after the conference and slides are available for download on the conference web site.