Key Takeaways
- Container runtime choices have grown over time to include other options beyond the popular Docker engine
- The Open Container Initiative (OCI) has successfully standardized the concept of a container and container image in order to guarantee interoperability between runtimes
- Kubernetes has added a Container Runtime Interface (CRI) to allow container runtimes to be pluggable underneath the Kubernetes orchestration layer
- Innovation in this space is allowing containers to use lightweight virtualization and other unique isolation techniques for increased security requirements
- Between the OCI and the CRI, interoperability and choice are becoming a reality in the container runtime and orchestration ecosystem
In the Linux operating system world, container technology has existed for quite some time, reaching back over a decade to the initial ideas around separate namespaces for file systems and processes. Later, LXC arrived and became the common way for Linux users to access this powerful isolation technology hidden within the kernel.
Even with LXC masking some of the complexity of assembling the various technology underpinnings of what we now commonly call a “container”, containers still seemed like a bit of wizardry, and outside of niche uses by those versed in the art, they were not broadly adopted.
Docker changed all this in 2014 with the arrival of a new, developer-friendly packaging of the same Linux kernel technology that powered LXC (in fact, early versions of Docker used LXC behind the scenes), and containers truly came to the masses as developers were drawn to the simplicity and reuse of Docker’s container images and straightforward runtime commands.
Of course, Docker wasn’t alone in wanting a share of the container market, given that the hype cycle showed no signs of slowing down after the initial explosion of interest in 2014. Over the past few years, alternative container runtimes have appeared: rkt from CoreOS, Intel Clear Containers and hyper.sh (lightweight virtualization married to containers), and Singularity and Shifter in the high-performance computing (HPC) scientific research arena.
As the market continued to grow and mature, the Open Container Initiative (OCI) took hold as an effort to standardize the concepts that Docker initially promoted. Today, many container runtimes are either OCI compliant or on the path to compliance, providing a level playing field on which vendors can differentiate with their features or specifically focused capabilities.
Kubernetes Popularity
The next step in the evolution of containers was to bring distributed computing, à la microservices, and containers together in the new world of rapid development and deployment iteration (DevOps, we might say) that quickly arose alongside Docker’s growing popularity.
While Apache Mesos and other distributed orchestration platforms existed before it, Kubernetes had a meteoric rise of its own, growing from a small open source project out of Google into the flagship project of the Cloud Native Computing Foundation (CNCF). Even after Docker revealed a competing orchestration platform, Swarm, built into Docker itself with Docker’s trademark simplicity and a focus on secure-by-default cluster configuration, it wasn’t enough to stem the growing tide of interest in Kubernetes.
To many interested parties outside the bubble of the cloud-native community, it was unclear whether the story was Kubernetes versus Docker, or Kubernetes and Docker. Since Kubernetes was simply the orchestration platform, it required a container runtime to do the work of managing the actual containers being orchestrated. From day one, Kubernetes had used the Docker engine, and even with the tension of Swarm versus Kubernetes as orchestration competitors, Docker remained the default runtime required by an operational Kubernetes cluster.
With a short list of container runtimes now available beyond Docker, it seemed clear that interfacing a container runtime to Kubernetes would require a specially written interface, or shim, for each runtime. Without a clear interface for container runtimes to implement, adding more runtime options to Kubernetes was getting messy.
The Container Runtime Interface (CRI)
To solve the growing challenge of incorporating runtime choice into Kubernetes, the community defined an interface (the specific functions a container runtime would have to implement on behalf of Kubernetes) called the Container Runtime Interface (CRI). This both corrected the problem of having a sprawling list of places within the Kubernetes codebase where container runtime changes would have to be applied, and clarified for potential runtimes exactly which functions they would have to support to be a CRI runtime.
As you might expect, the CRI’s requirements for a runtime are fairly straightforward. The runtime must be able to start and stop pods, handle all container operations within pods (start, stop, pause, kill, delete), and manage images against a container registry. Utility and helper functions for gathering logs, collecting metrics, and so on round out the interface.
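To make this concrete, here is a minimal sketch in Go of a client speaking the CRI gRPC API directly to a runtime endpoint, using the published CRI protobuf bindings. It is only an illustration, not the kubelet’s implementation; the socket path shown is containerd’s default and is an assumption, as cri-o and the Docker shim expose their own endpoints.

```go
// Minimal CRI client sketch; error handling and dial options trimmed for brevity.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	cri "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the runtime's CRI socket (containerd's default path is assumed here;
	// cri-o listens on /var/run/crio/crio.sock).
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithInsecure(), grpc.WithBlock())
	if err != nil {
		log.Fatalf("connecting to CRI endpoint: %v", err)
	}
	defer conn.Close()

	client := cri.NewRuntimeServiceClient(conn)

	// Version identifies which runtime sits behind the CRI endpoint.
	version, err := client.Version(ctx, &cri.VersionRequest{})
	if err != nil {
		log.Fatalf("CRI Version call failed: %v", err)
	}
	fmt.Printf("runtime: %s %s (CRI %s)\n",
		version.RuntimeName, version.RuntimeVersion, version.RuntimeApiVersion)

	// ListPodSandbox returns the pod sandboxes the runtime currently manages;
	// the container-level calls (CreateContainer, StartContainer, StopContainer,
	// and so on) operate within these sandboxes.
	pods, err := client.ListPodSandbox(ctx, &cri.ListPodSandboxRequest{})
	if err != nil {
		log.Fatalf("CRI ListPodSandbox call failed: %v", err)
	}
	for _, pod := range pods.Items {
		fmt.Printf("pod sandbox %s (%s/%s)\n", pod.Id, pod.Metadata.Namespace, pod.Metadata.Name)
	}
}
```

The kubelet is essentially a far more sophisticated version of this client, and the Kubernetes community’s crictl tool wraps the same calls for inspecting and debugging CRI endpoints directly.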
As new features enter Kubernetes, if those features have a dependency on the container runtime layer, then changes are made to the versioned CRI API, and new versions of runtimes which support new features (user namespaces, for one recent example) will have to be released to handle the new functional dependency from Kubernetes.
Current CRI Landscape
As of 2018, several options exist for container runtimes underneath Kubernetes. As the image below depicts, Docker is still a viable choice for Kubernetes, with its shim now implementing the CRI API. In fact, in most cases today, Docker is still the default runtime for Kubernetes installations.
One of the interesting outcomes of the tension between Docker’s Swarm orchestration strategy and the Kubernetes community was a joint project that took the core of Docker’s runtime and broke it out as a new, jointly developed open source project named containerd. The containerd project was then also contributed to the CNCF, the same foundation that hosts the Kubernetes project.
As a core, unopinionated runtime that can sit underneath both Docker and Kubernetes (via the CRI), containerd has gained popularity as a potential replacement for Docker in many Kubernetes installations. Today, both IBM Cloud and Google Cloud offer containerd-based clusters in a beta/early-access mode. Microsoft Azure has also committed to moving to containerd in the future, and Amazon is still reviewing runtime options underneath its ECS and EKS container offerings, continuing to use Docker for the time being.
Red Hat also entered the container runtime space by creating a pure implementation of the CRI, called cri-o, around the OCI reference implementation, runc. Docker and containerd are also based around the runc implementation, but cri-o states that it is “just enough” of a runtime for Kubernetes and nothing more, adding just the functions necessary above the base runc binary to implement the Kubernetes CRI.
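To see the layering cri-o describes, here is a heavily simplified, illustrative Go sketch of how a higher-level runtime might drive runc against an OCI bundle (a directory containing a config.json and a root filesystem). The container ID and bundle path are hypothetical, and real CRI runtimes such as cri-o and containerd do this through libraries and shim processes rather than shelling out to the runc CLI, but the create/start lifecycle shown is the one defined by the OCI runtime spec.

```go
// Illustrative only: driving the OCI reference runtime, runc, from Go.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// runcRun creates and then starts a container from an OCI bundle,
// mirroring the two-step lifecycle defined by the OCI runtime spec.
func runcRun(containerID, bundleDir string) error {
	// "runc create" prepares the container (namespaces, cgroups, rootfs)
	// without yet running the user process.
	create := exec.Command("runc", "create", "--bundle", bundleDir, containerID)
	if out, err := create.CombinedOutput(); err != nil {
		return fmt.Errorf("runc create: %v: %s", err, out)
	}
	// "runc start" executes the process described in the bundle's config.json.
	start := exec.Command("runc", "start", containerID)
	if out, err := start.CombinedOutput(); err != nil {
		return fmt.Errorf("runc start: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical ID and bundle path; a valid bundle must already exist there.
	if err := runcRun("demo-container", "/tmp/demo-bundle"); err != nil {
		log.Fatal(err)
	}
}
```

In these terms, cri-o’s “just enough” claim amounts to translating the CRI calls described earlier into exactly this kind of OCI-level lifecycle against runc, and little else.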
The lightweight virtualization projects, Intel Clear Containers and hyper.sh, have merged under an OpenStack Foundation project, Kata Containers, and also provide their style of virtualized container for extra isolation via a CRI implementation, frakti. Both cri-o and containerd are also working with Kata Containers so that its OCI-compliant runtime can be a pluggable runtime option within those CRI implementations as well.
Predicting The Future
While it is rarely wise to claim to know the future, we can at least infer some trends that are emerging as the container ecosystem moves from mass levels of excitement and hype into a more mature phase of its existence.
There were early concerns that disputes across the container ecosystem would create a fractured environment in which various parties would end up with different and incompatible ideas of what a container is. Thanks to the work of the OCI and commitment by key vendors and participants, we see healthy investment across the industry in software offerings prioritizing OCI compliance.
In newer spaces where standard use of Docker gained less traction due to unique constraints, HPC for example, even the non-Docker-based attempts at a viable container runtime are now taking notice of the OCI. Discussions are underway, and there is hope that the OCI specifications can serve the scientific and research community’s specific needs as well.
Add in the standardization of pluggable container runtimes in Kubernetes via the CRI, and we can envision a world where developers and operators pick the tools and software stacks that work for them and still expect, and experience, interoperability across the container ecosystem.
To help see this clearly, let’s take the following concrete example:
- A developer on a MacBook uses Docker for Mac to develop her application, and even uses the built-in Kubernetes support (using Docker as the CRI runtime) to try out deploying her new app within Kubernetes pods.
- The application goes through CI/CD on a vendor product that uses runc and some vendor-written code to package an OCI image and push it to the enterprise’s container registry for testing.
- An on-premises Kubernetes cluster using containerd as the CRI runtime runs a suite of tests against the application.
- This particular enterprise has chosen to use Kata containers for certain workloads in production, and when the application is deployed it runs in pods with containerd, configured to use Kata containers as the base runtime instead of runc.
This whole example scenario works flawlessly because of compliance with the OCI specifications for runtimes and images, and because the CRI allows for this flexibility in runtime choice.
This opportunity for flexibility and choice in the container ecosystem is a great place for us to be, and is truly important for the maturity of the industry that has risen up nearly overnight since 2014. The future is bright for continued innovation and flexibility for those using and creating container-based platforms as we enter 2019 and beyond!
Additional information can be found from a recent QCon NY talk delivered by Phil Estes: “CRI Runtimes Deep Dive: Who's Running My Kubernetes Pod!?”
About the Author
Phil Estes is a Distinguished Engineer & CTO, Container and Linux OS Architecture Strategy for the IBM Watson and Cloud Platform division. Phil is currently an OSS maintainer in the Docker (now Moby) engine project, the CNCF containerd project, and is a member of both the Open Container Initiative (OCI) Technical Oversight Board and the Moby Technical Steering Committee. Phil is a member of the Docker Captains program and has broad experience in open source and the container ecosystem. Phil speaks worldwide at industry and developer conferences as well as meetups on topics related to open source, Docker, and Linux container technology. Phil blogs regularly on these topics and can be found on Twitter as @estesp.