WKSctl is an open-source project to install, bootstrap, and manage Kubernetes clusters, including add-ons, using SSH. WKSctl is a provider of the Cluster API (CAPI) using the GitOps approach. Kubernetes cluster configuration is defined in YAML, and WKSctl applies the updates after every push to Git, allowing users to have repeatable clusters on demand.
The CAPI is a Kubernetes project that enables users to create, configure, and manage clusters declaratively, just as they would any other Kubernetes resource, such as Deployments or Services. There are other implementations of the CAPI from infrastructure providers like AWS or VMware vSphere. An interesting feature of WKSctl is that it only needs a list of SSH endpoints and the associated credentials of the target VMs on which to bootstrap Kubernetes.
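That list of SSH endpoints is itself expressed declaratively, as CAPI Machine objects checked into the cluster's Git repository. The sketch below shows the general shape of such a manifest; the exact apiVersions and provider-spec field names have changed across WKSctl releases, so treat the names here as illustrative rather than authoritative:

```yaml
# machines.yaml — illustrative sketch of a WKSctl machine manifest.
# Field names and apiVersions vary by release; consult the project's
# own examples for the schema your version expects.
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineList
items:
- apiVersion: cluster.k8s.io/v1alpha1
  kind: Machine
  metadata:
    name: master-0
    labels:
      set: master        # marks this node as a control-plane member
  spec:
    providerSpec:
      value:
        # The provider spec carries the SSH endpoint WKSctl connects to.
        public:
          address: 192.168.100.10
          port: 22
        private:
          address: 192.168.100.10
          port: 22
- apiVersion: cluster.k8s.io/v1alpha1
  kind: Machine
  metadata:
    name: worker-0
    labels:
      set: worker        # marks this node as a worker
  spec:
    providerSpec:
      value:
        public:
          address: 192.168.100.11
          port: 22
        private:
          address: 192.168.100.11
          port: 22
```

Because the machine inventory lives in Git alongside the rest of the cluster configuration, adding a node is a commit, not an imperative command.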
InfoQ recently talked to Alexis Richardson (CEO), Mark Ramm (director of product), Cornelia Davis (CTO), and Mark Emeis (engineering manager) from Weaveworks, to learn more about WKSctl.
InfoQ: What is WKSctl, and why did you build another Kubernetes tool?
Alexis Richardson: Currently, there are roll-your-own (RYO) installers like kops or kubeadm, and packaged installers like Rancher, kind, or minikube. WKSctl gives you the power of RYO combined with the convenience of a packaged installer, and it works for developers, in data centers, and in the cloud.
WKSctl is an open-source implementation of the CAPI, not a Kubernetes distribution. It’s a GitOps installer and controller for upstream Kubernetes clusters. WKSctl subscribes to a repo (using Flux), which contains Kubernetes bits and other pieces that you wish to install. Then, WKSctl builds and bootstraps the cluster. All you need to do is to give WKSctl a list of SSH endpoints, and WKSctl does the rest. Once the cluster is up and running, if you make changes to the cluster config in Git, WKSctl applies those changes to the cluster.
The overall benefit is to have a flexible way of working with multiple different distributions of Kubernetes. You can have reproducible clusters, as many times as you want. Clusters themselves are cattle, not pets. We believe that the cluster should ideally be 100% disposable, and you should be told when the cluster is in an incorrect state.
InfoQ: What would be a typical use case for WKSctl?
Richardson: The purpose of WKSctl is not to do the machine provisioning. So the ideal customer of WKSctl is someone who uses Terraform, vSphere, Salt, Ansible, or Chef to provision machines themselves. Then, customers can use WKSctl to install and bootstrap Kubernetes on those machines. But if you don't want to provision the machines, we also have Firekube, which goes one step further than WKSctl. Firekube takes a vanilla Kubernetes and uses WKSctl to install it on Ignite (Firecracker VM) clusters.
Another use case is for multi-cloud or hybrid scenarios where you have a more advanced pattern that we call the "master-of-masters" (MoM) pattern. In this pattern, you'll have a MoM cluster that provisions Kubernetes master management clusters in each target environment, which then provision as many Kubernetes clusters as you want.
InfoQ: How does WKSctl work? Is it a CLI, a Kubernetes controller, or both?
Mark Emeis: Both. WKSctl bootstraps the first node in a Kubernetes cluster with the CLI, then installs the WKSctl controller to manage the installation of additional master and worker nodes. The WKSctl CLI creates a cluster based on a Git repository and installs Flux to subscribe to that Git repository; this is what we call the "Git to Kubernetes reconciliation loop." Finally, WKSctl installs a custom resource controller to apply cluster-level changes based on the configuration received from Git.
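The CLI side of that workflow can be sketched as a single command pointed at the configuration repository. The flags below are illustrative of WKSctl's Git mode and may differ between releases, so check `wksctl apply --help` for the exact set your version supports:

```sh
# Bootstrap a cluster from the manifests in a Git repository
# (flag names illustrative; verify against your wksctl release).
wksctl apply \
  --git-url=git@github.com:example/cluster-config.git \
  --git-branch=master \
  --ssh-key=cluster-key

# Retrieve a kubeconfig for the newly created cluster.
wksctl kubeconfig
```

From that point on, the installed Flux and WKSctl controllers take over: commits to the repository, not further CLI invocations, drive changes to the cluster.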
Cornelia Davis: There are two reconciliation loops. There’s Flux, a reconciler that watches the Git repository for any changes to the configuration of the cluster. Its job is to take those things from Git and essentially commit them to etcd. And the second reconciler, which is the WKSctl controller, is sitting there watching etcd just like the replication controller in Kubernetes.
InfoQ: What about patches and upgrades to Kubernetes clusters? Is there any downtime?
Mark Ramm: WKSctl reacts to changes to the CAPI manifests. When an operator changes the version of Kubernetes, the controller updates the master nodes, and then the worker nodes. Applications need to be built to handle being rescheduled while the cluster is being updated. Following the twelve-factor app pattern is a good start.
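Concretely, an upgrade is just an edit to the version fields in the machine manifests, committed and pushed; the controller then rolls the masters and workers as Ramm describes. The field path below assumes the v1alpha1-era CAPI Machine schema and is shown only as an illustration:

```yaml
# Fragment of a Machine entry in machines.yaml: bumping the
# Kubernetes version and pushing to Git triggers the rolling
# upgrade. Field path assumes the v1alpha1 Machine schema.
spec:
  versions:
    kubelet: 1.16.3        # previously 1.15.4
    controlPlane: 1.16.3   # previously 1.15.4 (master nodes only)
```

No node is upgraded imperatively; the desired versions in Git are the single source of truth, and the controller converges the cluster toward them.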
InfoQ: Can you customize the cluster architecture to decouple components, like having a VM dedicated to etcd and the API component in another server?
Ramm: By default, WKSctl spins up three masters with worker nodes. We intended to keep it simple. If you're doing more complex things, that tends to push towards the use of our commercial product, WKP. By and large, what we've seen happening with WKSctl is users spinning up dev and test clusters on demand; the clusters are ephemeral rather than large, high-scale ones. Many clusters come and go, as opposed to having one big cluster. We see that the tendency among enterprise customers is generally to have more clusters instead of one big cluster, for permissions and security reasons. Also, systems scale very easily when you scale clusters horizontally as opposed to creating big clusters.
InfoQ: How has the adoption been within the community for WKSctl?
Ramm: Deutsche Telekom is a customer who is in the process of starting to use WKSctl. They recently shared a small video with a demonstration of how WKSctl works in their environment.