Puppet Labs’ latest version of Puppet Enterprise, version 2015.2, includes new features like node graph visualization, inventory filtering, and a VMware vSphere module. It also provides major enhancements to the Puppet language and an updated web UI. Puppet Enterprise 2015.2 is the first release to follow the new versioning scheme introduced in July 2015 and is based on the open source Puppet code base, version 4.2.1.
Michael Olson, Senior Product Marketing Manager at Puppet Labs, spoke with InfoQ about Puppet Enterprise.
InfoQ: What problems does the new code visualization solve? How exactly does the interactive graph help users "optimize their code and respond to changes faster"?
Michael: Understanding the full picture of how infrastructure is configured has long been a black box for IT, mostly because one-off scripts and manual methods don’t provide any reporting or insight into the state of infrastructure or how different configurations on a server interrelate. This problem is only magnified in larger organizations, as distributed IT teams contribute to the configuration of the entire infrastructure. The result has been higher change failure rates and difficulty for IT to troubleshoot issues quickly.
The new infrastructure visualization capabilities in Puppet Enterprise solve that problem by enabling IT teams to visualize and understand how they have modeled their infrastructure as code within Puppet. Now it’s easy to drill down quickly into a node’s configuration to identify where changes or failures have occurred, and to understand how those relate to other upstream or downstream configurations. Because IT teams no longer need to search through their infrastructure code to piece together how individual configurations are related, they can optimize their code and respond to changes faster.
InfoQ: Do you have a concrete example for using Inventory Filtering to help in managing warranty expirations?
Michael: Our new inventory filtering capability lets our customers easily go into their configuration and filter by specific facts, or details, about a machine and its configuration. For example, we work with lots of teams responsible for managing warranty information. Before Puppet, they may have been highly dependent on manually maintaining a list of machines along with their warranty information. With the new inventory filtering capability of Puppet Enterprise, it’s easy to quickly run a report on warranty status and see which machines have warranties that have expired or are expiring within the next 30 days and so need to be renewed or replaced.
As another example, we have lots of customers who are routinely migrating infrastructure from one OS to another and need to decommission the older OSes. With the inventory filtering capability, it’s easy to quickly get a report on where you stand: how many systems you’ve decommissioned and how many are left. Without Puppet Enterprise and this capability, it’s often a black box to understand exactly which infrastructure is running on which OS and what the status of a migration is. Now, we’re making this easy.
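For a rough sense of the data involved, the inventory filter works on the same structured facts the agent already reports, and those facts can also drive logic in a manifest. A minimal sketch (the warranty fact is a hypothetical site-specific custom fact; the os fact is built in):

```puppet
# Filter-style logic on facts the agent reports. $facts['os'] is a built-in
# structured fact; a fact like 'warranty_end' would be a site-specific custom
# fact (hypothetical here) that the console could also filter on.
if $facts['os']['name'] == 'CentOS' and $facts['os']['release']['major'] == '6' {
  notify { "${trusted['certname']} still needs migration off CentOS 6": }
}
```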
InfoQ: How exactly did you optimize the core agent components to be at least 20 times faster, and to have a 50 percent smaller memory footprint? What are the absolute numbers here? Do you have data to show that improvement?
Michael: We’ve completely rewritten the inventory discovery service in Puppet Enterprise in a more modern and performant language (C++) to make the Puppet agent faster and more efficient and to reduce its memory footprint.
The rewritten agent technology was validated to perform 20 times faster, with a 50% smaller memory footprint, as detailed here: https://puppetlabs.com/blog/speeding-up-puppet-on-windows. As you can see from the chart included within the blog post, the time to handle network interfaces improved anywhere from 4-12 seconds down to fractions of a second.
InfoQ: What are the top 5 examples of the major enhancements to the Puppet Language?
Michael: We’ve made some pretty big improvements to the Puppet language for describing infrastructure as code that, in sum, represent the biggest enhancements to the core language since its introduction in 2005. Some of the key improvements include:
- Cleaner and more consistent behavior - For the first time, you can check out a written specification that explains how the language syntax and all the operators and data types work. With this release, we’ve made the language much more consistent so it behaves in a more predictable manner.
- Iteration and loops - It is now possible to reduce manual repetition of infrastructure code without resorting to writing logic in Ruby, which helps you write more succinct code and use data more effectively. With our latest release, iteration features are implemented as functions that accept anonymous, inline functions (lambdas): you write a block of code that requires some kind of extra information, and then you pass it to a function that can provide that information and evaluate the code. This differs from other languages, where looping constructs are special keywords; in Puppet, they’re just functions (see the iteration sketch after this list).
- Data type system - A new data-type system makes it much easier to write high-quality Puppet code, as Puppet does the parameter checking for you. With Puppet Enterprise 2015.2 you can check the type of a value and catch common errors early by restricting the allowed values for class and defined-type parameters. A lot of boilerplate validation code can now be replaced with much stronger assertions about what type of data a parameter expects (see the typed-parameter sketch after this list).
- Error Handling - The Puppet language now provides even more detailed information about errors, and can more accurately point to where a problem was detected, so our customers can troubleshoot configuration issues faster.
- Puppet templates - We’ve introduced a new feature to the Puppet language that provides a way to intersperse logic with text output within templates. Previously, Ruby syntax was required for this; keeping all the logic and text output in Puppet code improves consistency and makes it easier and safer to describe infrastructure as code (see the template sketch after this list).
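As a minimal sketch of the iteration described above (the package names are placeholders, not from the interview), a Puppet 4 manifest can loop over data with a lambda instead of Ruby-side logic:

```puppet
# Iteration is a function (each, map, filter, reduce) that takes a lambda
# (the block between the pipes) rather than a special loop keyword.
# The package names below are placeholders.
$tools = ['httpd', 'vim-enhanced', 'git']

$tools.each |String $pkg| {
  package { $pkg:
    ensure => installed,
  }
}

# map works the same way, e.g. deriving a list of config file paths:
$conf_files = $tools.map |String $pkg| { "/etc/${pkg}.conf" }
```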
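A sketch of the typed parameters mentioned above (the class and its parameters are hypothetical examples):

```puppet
# Typed class parameters replace hand-rolled validation boilerplate:
# Puppet checks each value against its declared type before evaluating the class.
class myapp (
  String                    $user,
  Integer[1, 65535]         $port    = 8080,
  Enum['present', 'absent'] $ensure  = 'present',
  Optional[String]          $logfile = undef,
) {
  # Declaring the class with port => 'eighty' now fails with a clear
  # type-mismatch error instead of producing a broken configuration.
}
```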
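The template feature described above is the Embedded Puppet (EPP) template syntax. A minimal sketch, assuming a hypothetical module named mymodule that ships a resolv.conf template (all paths and values are illustrative):

```puppet
# Manifest side: epp() renders a template shipped in a module.
file { '/etc/resolv.conf':
  ensure  => file,
  content => epp('mymodule/resolv.conf.epp', {
    'nameservers' => ['10.0.0.1', '10.0.0.2'],
  }),
}
```

```puppet
<%- | Array[String] $nameservers | -%>
<%# Template side (templates/resolv.conf.epp): parameters are declared up front and the loop is Puppet syntax, not Ruby. -%>
<% $nameservers.each |$ns| { -%>
nameserver <%= $ns %>
<% } -%>
```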
InfoQ: How does the vSphere module compare to Chef vSphere support and the Vagrant vSphere plugin?
Michael: Other vendors do offer plugins for vSphere, but the main difference with Puppet isn’t necessarily an individual module, but rather our overall approach to automation. We pioneered a unique, declarative approach that enables IT teams to define the desired state of their machines and how they should be configured; we then handle the work of ensuring that those systems always match their desired state, as well as remediating any unexpected configuration changes that may occur.
Traditional procedural approaches to automation (like Chef) require you to map out each and every step associated with automating a workflow and to intervene manually, troubleshooting breakages, when any individual step in the process fails. Because Puppet is declarative, our customers are able to ensure consistency across environments so they can make deployments pain-free, not to mention freeing up time that would otherwise be spent fighting fires and fixing issues manually if and when machines deviate from their original state.
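For a sense of what this declarative model looks like with the vSphere module, here is a rough sketch using the vsphere_vm resource type from the puppetlabs/vsphere module; the datacenter path, template, and sizing values are illustrative, and attribute names should be checked against the module documentation for your version:

```puppet
# Desired state of a VM, declared rather than scripted. Puppet converges the
# machine toward this state on every run and corrects drift.
# (Path, template, and sizing values below are placeholders.)
vsphere_vm { '/dc1/vm/web/app-server-01':
  ensure => running,
  source => '/dc1/vm/templates/centos-7-base',
  cpus   => 2,
  memory => 4096,
}
```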
InfoQ: What’s coming next? Where does Puppet Enterprise go?
Michael: Historically, Puppet has helped IT manage infrastructure very well. As technology becomes more of a strategic differentiator and teams work cross-functionally to align their work to the customer value being delivered, the importance of delivering innovative and reliable business applications has never been greater. As a result, a big focus for us will be helping IT extend the benefits of automation and modeling beyond the core infrastructure.
We’re seeing a lot of change in IT today, which has added additional complexity and pressure for teams. Whether that’s wrangling a hodgepodge of technologies as part of a mixed environment, moving infrastructure to the cloud, testing out new ways of standing up infrastructure like Docker, or dealing with increased expectations from customers for faster time to market and greater performance, it’s clear that in the future there will only be more (not fewer) servers and devices to manage.
So, a big focus for us is continuing to eliminate complexity for IT and to broaden our ecosystem support for automating the range of technologies our customers use. This can include laying down operating systems on bare metal, running virtualized environments, moving infrastructure to the cloud, leveraging containers and microservices to be faster and more nimble, or exploring hyperconvergence with the network and storage parts of the data center. Regardless of the technologies our customers use within their environments, we’re continuing to invest in broadening our support for helping them efficiently manage their entire infrastructure and application stack.