At the recently concluded Helm Summit in Amsterdam, the Helm project was front and center. Helm is already the de facto package manager for the Kubernetes community and is on the verge of entering the Cloud Native Computing Foundation (CNCF) as a top-level project.
Helm is an application package manager that runs on top of Kubernetes and describes an application's structure through Helm Charts, making it convenient to install and manage packages and their dependencies. In that sense, Helm is akin to OS package managers such as yum, apt, and Homebrew.
With the advent of microservices and the need to scale and manage these services independently, Helm offers a way to do this through Helm Charts.
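To make the package-manager analogy concrete: a Helm Chart is a directory of templated Kubernetes manifests plus metadata, and the everyday workflow is a few commands. Below is a minimal sketch (the chart and release names are illustrative, and the install command uses Helm 3 syntax):

```shell
# Scaffold a starter chart; Helm generates a working skeleton.
$ helm create mychart
$ ls mychart
Chart.yaml   charts/   templates/   values.yaml
# Chart.yaml holds the chart's name and version, values.yaml its
# default configuration, and templates/ the Kubernetes manifests.
$ helm install mychart ./mychart
```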
InfoQ caught up with Matt Butcher, the founder of Helm and the organizer for the Helm Summit in Amsterdam, and explored the growth in usage of Helm and its future roadmap. Butcher discussed the history of Helm, how its design was influenced by other package managers, how it's helping the Kubernetes community, its tremendous growth, and some of the security challenges.
InfoQ: Could you begin by giving a very brief "trip report" of the recent Helm summit, and a brief history of Helm? Can you provide some insight into how it got started and some inside stories that may be relevant?
Matt Butcher: At Deis, I was leading a team to build the first Kubernetes-based PaaS offering, called Deis Workflow. Installing this multi-microservice application was difficult, to say the least. When Deis had a two-day all-company meeting, we held a hackathon as a team-building exercise. The winning team got a $75 Amazon gift card. I grouped up with Jack Francis and Rimas Mocevicius, and we decided to tackle the problem of installing applications into Kubernetes. During the two-day hackathon, we built a tool we called "k8s place" (pronounced "Kate’s Place") that was a package management system for Kubernetes. We won the hackathon.
The day after the all-company meeting, the CEO and CTO called me. They had been talking, and wanted to know if Jack, Rimas, and I thought that this K8S Place thing could be made into a real tool. I said that I thought it could. But we all agreed that the K8S Place name was just a little too cute for a project. So Jack and I got on a phone call and read through a glossary of nautical terms to find a better project name that fit with the Kubernetes theme. And thus Helm was born.
A couple of months later, we launched Helm to the public at the first KubeCon. And a few months after that, Helm became an official Kubernetes sub-project.
The First Helm Summit - Helm gained more traction than we had ever anticipated. But when it came to KubeCon, we were generally stuck giving pretty standard deep-dive sessions, and the KubeCon organizers couldn’t allocate too many sessions to Helm because the ecosystem is so big. So Karen Chu, the community manager at Deis and then at Microsoft Azure, floated the idea of doing an official Helm Summit. For the first one, in Portland, OR, we did a single-track conference. It was great. We filled the venue, and I got to meet dozens and dozens of DevOps engineers, developers, and system administrators. Brian Grant, the lead on the Kubernetes project, came. Afterward, he suggested that perhaps Helm had outgrown being a Kubernetes sub-project. He championed Helm moving up directly into CNCF, and helped us along the way.
The First European Helm Summit - The Helm Summit in Amsterdam was our first European summit. I had initially targeted 120 people, but we ended up at 130. And we had initially wanted to do it in Milan, Italy. But we couldn’t find a perfect venue. Folks within Microsoft in Amsterdam advocated for a venue there, and so we switched.
Helm Summit Amsterdam was the first Helm event run by CNCF. That was spectacular. They bring deep expertise, and were fabulous to work with. I have never been part of such a well-organized conference before.
For this summit, we switched from the single-track style of Helm Summit Portland to a two-track conference. This enabled us to accept more speakers and also provide some workshop content in parallel with regular sessions. Session quality was very high, and as far as I know we received unanimously positive reviews. I think a big part of the positivity was the size. I think I met and shook hands with almost every single person who attended. While the 12,000-person KubeCon is fun and flashy, a small conference like this enables some personal connections, providing ample opportunity to get to know people with similar interests.
The high point of Helm Summit for me was meeting Yusuke (known in the Kubernetes world by his GitHub handle ‘mumoshu’). I have worked online with Yusuke for years, and he’s contributed to many Helm-related projects as well as Brigade (another CNCF project that I started). But we had never had the opportunity to meet in person. I flew from Colorado. He flew from Tokyo. It’s a fabulous world that we live in when we can collaborate for years without ever actually meeting -- and then meet for the first time on a continent foreign to both of us!
InfoQ: Since Helm is often positioned as the Homebrew / apt / yum package manager for Kubernetes, is Helm more applicable to system admins than developers and architects?
Butcher: Jack, Rimas, and I were deeply inspired by both Apt and Homebrew. Adam Reese and Michelle Noorali (who joined Helm in its first official week as a project at Deis) were both deeply influenced by the Ruby ecosystem. Together, we believed we were extending the packaging metaphor to a brand new space: Cloud native (distributed) apps.
As we see it (and I think I can comfortably speak for all of them), Helm’s first goal has always been to make the "zero to Kubernetes" story easy. Whether you’re a developer, a seasoned devops engineer, or a student just getting started, our goal is to get you installing apps on Kubernetes within two minutes.
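In practice, that "zero to Kubernetes" story amounts to a couple of commands. A hedged sketch (the repository and chart shown are illustrative examples; the install command uses Helm 3 syntax):

```shell
# Add a public chart repository and install an application from it.
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
$ helm install my-web bitnami/nginx
$ helm status my-web    # inspect the release that was just created
```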
Of course, since the days when we set that goal, things have changed quite a bit. Kubernetes has exploded in popularity. It’s now common to see production k8s clusters. At an inflection point like this, it’s good to ask: Who is the real audience for Helm?
We believe that those in operations are going to derive the most immediate benefit from Helm. The Helm Hub, conceived by engineers at Bitnami (now part of VMware), is a portal designed with Kubernetes operations in mind.
But we have worked hard to make Helm Charts easy for developers to create. My team created a project called Draft that was specifically designed to help developers go from code-to-chart without having to learn Kubernetes manifests first. Our design philosophy here (as in Helm and our other projects) is this: Give people a tool to get their high-priority job done NOW, then give them the means to "learn their way backward" into the underlying technology. Ruby on Rails did this exceptionally well for the Ruby world, and we have long seen that as an inspiration for our work.
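For context, the code-to-chart workflow Butcher describes boils down to two commands. A sketch based on Draft's documented CLI at the time (the application contents are illustrative):

```shell
$ cd my-app
$ draft create    # detects the language; writes a Dockerfile and a starter chart
$ draft up        # builds the image and deploys the chart to the cluster
```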
InfoQ: Do Helm Charts replace the complexity of YAML files for installing Kubernetes apps? How are some of the other criticisms, especially the complexities of Kubernetes, addressed in Helm?
Butcher: Our main goal with the chart format was to make it so that a Kubernetes newcomer would never have to see a Kubernetes YAML manifest in order to install something on Kubernetes. But as such users got increasingly interested in doing more with Kubernetes, Helm would become a pedagogical tool. Again, drawing on the previous answer, Helm is supposed to help people "learn their way backward" into Kubernetes. Install a chart. Then go see what was created (after all, Helm tells you). Then change a few things. Try your changes. Look at a few more charts. See how they work. Before long, the motivated Helm user becomes a knowledgeable Kubernetes user. And all along the learning process, those individuals can be successfully deploying applications!
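That "learn your way backward" loop maps onto a handful of commands. A rough sketch (the release and chart names are illustrative):

```shell
$ helm install my-db bitnami/postgresql   # install a chart
$ helm get manifest my-db                 # see the Kubernetes YAML Helm rendered
$ kubectl get all -n default              # see the live objects behind the release
```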
One thing we are proud of about Helm 3 is that charts continue to be relatively simple. We had options to add lots of different template languages, many more hooks, and all sorts of additional bells and whistles. But when we listened to our users, what we heard was:
- (a) simplicity was great for learning (see the template sketch after this list)
- (b) the tooling we provided was largely sufficient for 95% of the users’ use cases
- (c) where Helm was not sufficient, a wealth of add-on tools has been created to fill in the gaps
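To illustrate the simplicity mentioned in (a): a chart template is plain Kubernetes YAML with Go template directives that read from values.yaml. A minimal sketch (the ConfigMap and value names are illustrative):

```shell
$ cat mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  greeting: {{ .Values.greeting | quote }}
$ helm install demo ./mychart --set greeting=hello
```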
InfoQ: Is Helm relevant in the continuous integration or delivery (CI / CD) of microservices, and if so, what role does Helm play?
Butcher: I’m not sure that grouping CI and microservices together is a natural fit, but I’ll try.
The microservice architecture has become something new: It has become a way to truly write and operate distributed applications. But to do this well, microservices really need the orchestration layer that Kubernetes provides. Otherwise, it is simply too complex to wire all of the pieces together.
Helm tackled one part of the cloud native/distributed application sphere: It made the Kubernetes part easier. But I would say that we fell short in a few key ways:
- Helm only addresses the Kubernetes part of the equation. But modern cloud offerings include hosted services, managed tools (like database services), virtual machines, and networked storage--all outside of Kubernetes. Helm does nothing for these.
- Helm does not manage the container images. It assumes that those are managed externally, and it merely manages the Kubernetes manifests.
We pondered adding both of these to Helm, but realized that this would ruin one of the key things we loved about Helm: It’s simplicity.
Following the UNIX mentality of "do one thing well", we created a new tool to handle the larger cloud-native space. We created an open specification called Cloud Native Application Bundles (CNAB), donated it to the Linux Foundation’s Joint Development Foundation (JDF), and then created both a reference implementation and an opinionated declarative builder. CNAB works with Helm--and also with Terraform, Ansible, and just about any deployment management tool--to enable truly rich deployments of cloud native applications.
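For reference, a CNAB bundle is described by a JSON document. The heavily trimmed sketch below uses field names from the CNAB Core 1.0 specification; the image references are illustrative:

```shell
$ cat bundle.json
{
  "schemaVersion": "v1.0.0",
  "name": "my-app",
  "version": "0.1.0",
  "invocationImages": [
    { "imageType": "docker", "image": "example/my-app-installer:0.1.0" }
  ]
}
```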
CI is an interesting story as well. We frequently deploy our Helm charts as part of a gitops-style pipeline. And the other tools we built have similar needs. But when we looked at CI and CD, we noticed some uncharted territory. Kubernetes provides the rudimentary layers of continuous operations: Jobs are short-lived pod runtimes, while Deployments are perfect for the long-running gateways. But when we took a look at the landscape in 2017, nobody had made use of Kubernetes in this capacity.
So we took the knowledge we gained from Helm, and we created Brigade. Brigade is not just a CI system, though. It is more flexible than that. Inspired by the way UNIX shell scripts allow users to chain together shell commands and pipe data from one to the other, we created a general framework for writing scripts (in JavaScript) that could chain Docker containers together, passing information from one to the other to form processing pipelines.
Brigade, also a CNCF project, provides an event-driven interface on top of this. So you can write Brigade scripts that respond to GitHub pull requests, or cloud events, or Kubernetes API events, or Kafka pub/sub messages ... the sky is the limit.
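A minimal Brigade script, for flavor, might look like the sketch below, based on Brigade's documented v1 JavaScript API (the event, job name, and tasks are illustrative):

```shell
$ cat brigade.js
const { events, Job } = require("brigadier");

events.on("exec", async (e, project) => {
  // Each Job runs as a container in the cluster; tasks run inside it.
  const test = new Job("test", "alpine:3.10");
  test.tasks = ["echo Building...", "echo Testing..."];
  await test.run();
});
```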
At Helm Summit Amsterdam, Yusuke (mumoshu) gave a presentation in which he showed how he had taken Brigade and built a sophisticated CI pipeline for Helm charts that then handed off tested charts to a CD system that immediately deployed them into production. It was such an enjoyable moment to see someone take a little iota of an idea that I had and run with it, building something far more elegant and powerful than I had imagined when my team started on Brigade.
InfoQ: What is new in Helm 3? How are some of the security issues of the Tiller cluster-side component of Helm being addressed?
Butcher: First, to set the record straight, "rumors of Helm 2’s insecurity are largely exaggerated". The main complaint was that Helm was not multi-tenant secure by default, which is true; one has to turn on security in Helm. A few very loud individuals spread rumors that there was some deep and inherent security flaw in Helm, when in actuality, the solution is "turn on mTLS". (Helm itself has only ever had two CVEs, neither of them critical.)
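For reference, "turning on mTLS" in Helm 2 amounted to a handful of flags, per the Helm 2 TLS documentation (the certificate paths are illustrative, and generating the certificates is not shown):

```shell
# Start Tiller with TLS enabled and require verified client certificates.
$ helm init --tiller-tls --tiller-tls-verify \
    --tiller-tls-cert ./tiller.cert.pem \
    --tiller-tls-key ./tiller.key.pem \
    --tls-ca-cert ./ca.cert.pem
# Client commands then opt in to TLS explicitly.
$ helm ls --tls
```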
Helm 3 completely removes Tiller. We were never thrilled with Tiller when we added it to Helm 2, but at the time Kubernetes was looking like it was going to go all-gRPC, and we had envisioned a more integrated experience. Instead, Kubernetes stuck with YAML and REST. So we dutifully rode out the Helm 2 cycle without making any breaking changes. Then the first thing we did in the Helm 3 branch was remove Tiller and refactor the code. Helm 3 now uses in-cluster resources with atomic versioning to store releases, but no longer requires running any code in-cluster.
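Concretely, Helm 3's default storage backend records each release revision as a Kubernetes Secret in the release's namespace. A sketch of what that looks like (the release name and namespace are illustrative):

```shell
$ kubectl get secrets -n default -l owner=helm
NAME                           TYPE                 DATA   AGE
sh.helm.release.v1.my-web.v1   helm.sh/release.v1   1      2m
```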
We have done our very best to remain true to SemVer2, not breaking compatibility between Helm 2.0 and now. But Helm 3 is our chance to fix up many things. For example, CRDs were introduced over a year into Helm 2’s lifespan. Thus, we could not add support for CRDs in any way that might break existing clients or charts. But with Helm 3, we’ve finally been able to write a sensible implementation of CRDs.
While we’ve literally rewritten the entire clockworks of Helm, the cool thing is that the everyday Helm user won’t have to change much at all. A few command line flags changed. We introduced a new chart format (Chart v2), but almost everything will "just work" as it always has. And for established Helm installations, we’ve even got a migration tool to ease the transition.
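The migration tooling Butcher mentions is the helm-2to3 plugin. A sketch based on the plugin's documented commands (the release name is illustrative):

```shell
$ helm plugin install https://github.com/helm/helm-2to3
$ helm 2to3 move config          # migrate repositories and plugin configuration
$ helm 2to3 convert my-release   # convert a Helm 2 release to Helm 3 storage
$ helm 2to3 cleanup              # remove Helm 2 data once everything is migrated
```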
InfoQ: Can you talk about the growth of the Helm community? Is there anything else you want to highlight in Helm that obviates some pain points for developers and architects?
Butcher: Helm’s ecosystem has gotten big enough that I can no longer track it. One of the most petrifying moments of my life happened when an individual saw me wearing a Kubernetes logo and pitched me on their company, which made their income off of Helm. He had no idea who I was, and I never told him. I just quietly listened to his pitch, asked a few questions, and moved on. But it was a powerful moment for me. I had never really thought about the fact that a tool my team built was actually the source of livelihood for a company with 20+ people!
We used to track all of the tools in the Helm ecosystem on GitHub. But at this point, that document is no longer all that representative of a much larger ecosystem that has formed around the project. I cannot tell you what a privilege it is to know that so many projects have been built around our little hackathon project. Open source ... it just blows my mind some days. Think of the thousands of hours all of these developers spread all around the globe put into building tools that "scratch an itch", and then they offer it up for others to use and expand.
On the code side, I am just utterly blown away by the participation of the community. Thousands of contributors from over 700 companies have helped us build Helm, create charts, and manage the core Helm tools and website. Many of those people also contribute to other projects in the broader ecosystem, like Helmfile, Helm Hub, and Monocular.
I could say that I am proud of Helm. But "proud" is not the right word. It implies that I raised it to be what I wanted it to be. Rather, I am overawed by Helm. The people, the code, the ecosystem--all of these are so far beyond what Jack, Rimas, and I imagined when we first hacked out our little K8S Place demo. Two years ago, when Microsoft acquired Deis, I was asked what my hopes were for Helm. At the time, I said, "I hope that Helm is a 10-year project," meaning that I hoped it was a project that could reach a ripe old age in the fast-paced software world. And I still hope that is true. But these days, my real hope for Helm is that the vision of the community around it will continue to outpace my own shallow imagination.
In summary, Helm has become the de facto package manager for Kubernetes, and it has seen steady growth as it solves core pain points in developing and deploying applications on the platform.
More detailed information is available in the Helm docs, including a getting started guide.