At the inaugural CoreOS Fest in San Francisco, the CoreOS team announced that the App Container specification (appc) has gained support from Google, Apcera, Red Hat and VMware. Google have added support for CoreOS’s appc implementation, rkt, to Kubernetes, and Apcera have created a new implementation of appc named ‘Kurma’. A new governance policy has also been established for appc, and maintainers have been elected from Red Hat, Google and Twitter, in addition to the existing CoreOS maintainers.
Since the announcement of the App Container specification (appc) in December 2014, CoreOS have been the sole company officially driving the development of the specification. Appc provides a definition of how to build and run containerised applications, with an emphasis on security, portability and modularity. In tandem with defining the appc spec, CoreOS have also developed rkt, a container runtime that was the first implementation of appc.
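For readers unfamiliar with the format, an appc image (an ACI) is a compressed filesystem bundle accompanied by a JSON image manifest declaring the application’s name, labels and execution parameters. The following is a minimal sketch of such a manifest; the application name, version label and executable path are purely illustrative, and the acVersion shown should be read as a placeholder for whichever revision of the spec is being targeted:

```json
{
  "acKind": "ImageManifest",
  "acVersion": "0.6.1",
  "name": "example.com/hello",
  "labels": [
    { "name": "version", "value": "1.0.0" },
    { "name": "os", "value": "linux" },
    { "name": "arch", "value": "amd64" }
  ],
  "app": {
    "exec": [ "/usr/bin/hello" ],
    "user": "0",
    "group": "0",
    "ports": [
      { "name": "http", "protocol": "tcp", "port": 8080 }
    ]
  }
}
```

A conforming runtime such as rkt reads the manifest to determine what to execute and as which user, while the labels let tooling select the correct image variant for a given operating system and architecture.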
The CoreOS blog states that security and portability between stacks are core to the successful adoption of application containers, and it was accordingly announced at CoreOS Fest that Google, Apcera, Red Hat and VMware have offered their support to this effort.
...companies and individuals are coming together to ensure there is a well defined specification for application containers, providing guidelines to ensure security, openness and modularity between stacks.
In April Google Ventures led a $12 million investment in CoreOS, which coincided with the launch of Tectonic, CoreOS’s commercial Kubernetes platform. Kubernetes is Google’s open source orchestration system for application containers, which handles scheduling onto nodes within a compute cluster and actively manages workloads. At CoreOS Fest Google stated that a pull request to the Kubernetes project to add appc support has now been accepted. This means that CoreOS’s rkt (and any other appc-compliant container implementation) can now be run alongside Docker containers within Kubernetes.
This is an important step forward for the Kubernetes project and for the broader containers community. It adds flexibility and choice to the container-verse and brings the promise of compelling new security and performance capabilities to the Kubernetes developer.
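In practical terms, which runtime a Kubernetes node uses is a kubelet-level decision. As a hedged sketch only: later kubelet releases exposed this as a runtime-selection flag, so pointing a node at rkt rather than Docker would look roughly like the following (flag names and availability varied between Kubernetes releases, so treat this as illustrative rather than the precise mechanism introduced by the accepted pull request):

```sh
# Illustrative only: select rkt as the node's container runtime.
# Flag names varied across kubelet releases; this is not necessarily
# the exact invocation enabled by the original Kubernetes pull request.
kubelet --container-runtime=rkt ...
```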
Apcera have added their support to the appc effort by releasing Kurma, an open source implementation of appc that evolved from work on containerising the deployment of Apcera’s Hybrid Cloud Operating System (HCOS). The Apcera blog asserts that the company’s goals are aligned with those of appc:
While Apcera’s HCOS continues to be about security and policy, Kurma is the base container environment that can improve operational efficiencies as the cornerstone for a HCOS deployment. AppC became attractive to us, [... because ...] things like network ontologies, image discovery, image validation/encryption, and application identity are all topics we’re keen on
The CoreOS blog states that Red Hat have assigned an engineer to participate as a maintainer of appc. This is in addition to the appc project establishing a governance policy and electing new maintainers from the community at Google and Twitter who are unaffiliated with CoreOS. Two of the initial developers of the spec from CoreOS, Brandon Philips and Jonathan Boulle, also remain as maintainers.
This new set of maintainers brings each of their own unique points of view and allows appc to be a true collaborative effort.
In April, VMware announced support for appc and shipped rkt within Project Photon, VMware’s lightweight Linux operating system, which makes rkt available as a deployment mechanism for VMware’s vSphere and vCloud Air customers. The CoreOS blog states that VMware have reaffirmed their commitment to appc and are working closely with the community to evolve the specification.
InfoQ caught up with Jonathan Boulle, project technical lead at CoreOS, and asked questions in relation to the recent announcements at CoreOS Fest.
InfoQ: The introduction of rkt support in Kubernetes is great for supporting a diverse ecosystem of containers within this platform. Are you working with other cluster management framework offerings, such as Mesosphere's Marathon, Apache Aurora or AWS's ECS, to add rkt support?
Boulle: Ever since we first started developing the appc spec and rkt we've been in close contact with the teams working on Apache Mesos – both at Mesosphere itself, and at Twitter where a lot of core development happens. They have provided a lot of useful input and are also actively working on their own implementation of the specification inside Mesos itself. This fits well with their architectural model which tends to prefer integrating functionality into the core of Mesos. And it's a great example of exactly what we're trying to accomplish with the spec: a well-defined standard that sees a variety of different, interoperable implementations. With appc incorporated directly into the core of Mesos, it means that all frameworks that run on top of the Mesos platform – Mesosphere's Marathon, Apache Aurora, and others – will be able to take advantage of it.
We’re also in active discussions with other cluster management framework offerings about integrations with appc and rkt, so stay tuned!
InfoQ: Storage and networking are still a challenge within a containerised platform. How do you see the appc spec contributing to solving these issues?
Boulle: When first developing the spec, we were intentionally very agnostic about these two areas – networking in particular, which is an incredibly complicated topic because almost every environment has its own unique, esoteric requirements. We didn't want to be overly prescriptive about this in the spec so we kept things at a very simple level, stating only that runtimes need to provide a usable layer 3 network interface to an app container.
As we started working on implementing the spec ourselves, we had a clear need to come up with a concrete networking solution that would work for rkt, while also providing flexibility to cater to all these different requirements. To achieve this we worked with the community (who provided a lot of very helpful input!) to develop a plugin-based networking proposal – the basic idea being that standalone plugins could be developed to interface with all kinds of different networking backends, from cloud provider technologies to the latest Linux kernel features like ipvlan. Our networking guru Eugene, who's also the lead developer behind the flannel project, then implemented this networking functionality and integrated it into rkt.
But as the rkt networking proposal matured, one of the key pieces of feedback from the community was that it seemed that it would be a more generally useful approach to dealing with networking outside of rkt itself. At the same time we started hearing from others in the industry – like engineers at Red Hat working on OpenShift, and the team at Google working on Kubernetes – that they were also interested in exploring similar plugin-based solutions for their projects.
As you probably know we're huge fans of open and common standards here at CoreOS, and seeing this strong demand in the industry for something that looked like it could be shared made us want to do something about it. It's incredibly powerful for the industry and the success of the container ecosystem if we can coalesce around a shared standard and leverage common, interoperable resources. So we recently created what we're initially calling CNI, the Container Network Interface, which is a specification defining network plugins for application containers on Linux. While for now we decided to release it under the appc organization on GitHub, it is completely distinct from the appc spec itself and much smaller in scope (for one thing, it’s targeted very specifically at Linux whereas the appc spec is intended to be cross-platform). We’re collaborating actively with engineers from Google, Red Hat and Cisco as well as other members of the community to flesh out the specification and hopefully come to an agreement very soon so that we can start to have it integrated into different projects.
I do want to emphasise that while this is still a proposed specification and very much under development, we already have a number of fully functional plugins that work standalone and there are several more in active development. In fact, we've already migrated rkt itself to using the CNI plugins with great success, and in the last few days we've had a community member add a plugin for ipvlan.
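For readers unfamiliar with the proposal, a CNI network is described by a small JSON configuration file that names the plugin to invoke and its parameters; the runtime executes the named plugin binary to set up the container’s network namespace, and the plugin hands back the layer 3 address it assigned. The sketch below assumes the bridge and host-local IPAM plugins from the early proposal, with the network name, bridge name and subnet chosen arbitrarily:

```json
{
  "name": "example-net",
  "type": "bridge",
  "bridge": "cni0",
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```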
As for storage, that is one area we haven’t yet had the time to focus on. What I'll say for now is that wherever possible we’re hoping to stand on the shoulders of giants working on large distributed systems – like Kubernetes and Mesos – to leverage their expertise and experience and feed that back into the spec.
InfoQ: As containers are becoming more mainstream on Linux servers, we are seeing other markets, such as Windows servers and IoT platforms, now opening up to containerisation. Do you see appc or rkt making an impact here?
Boulle: At CoreOS we are focused on building well-defined, efficient and highly composable components and we strive to make them as portable as possible. We can definitely see some interesting applications with containers in the embedded devices or IoT field. There is a lot of interest from the community in this area: we've had users exploring using rkt and appc on ARM devices, and to this end we've recently added ARMv7 and v8 architectures to the standard set of architectures recognized in appc. Since the ARM ecosystem has historically been a very diverse set of architectures with some disagreement on naming in implementations like Linux, we wanted to be careful we did this in the right way, and so we worked with engineers from ARM to ensure we were using the appropriate nomenclature.
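In manifest terms this amounts to new recognised values for the standard os and arch labels; the snippet below is illustrative only, and the exact label values for the ARM variants are those enumerated in the appc spec itself:

```json
"labels": [
  { "name": "os", "value": "linux" },
  { "name": "arch", "value": "aarch64" }
]
```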
As for Windows, from the very beginning we have intended appc to be a cross-platform specification and we would love to see someone from Microsoft or the Windows community working on an implementation. While developing the spec we are working very hard to strike a careful, pragmatic balance between agnosticism and portability, by having the core of the spec very generic but supporting OS-specific sections as well.
InfoQ: The announcement of additional companies supporting the appc spec is great for ensuring the future of this community effort. Do you see Docker Inc. officially joining the list of supporters in the future?
Boulle: The appc spec is an open project and we invite all companies – and indeed, anyone from the community – to participate in discussion and shaping the spec. Any individual who contributes actively to the spec is eligible to be elected as a maintainer. It's important to note that now that we have an established governance policy CoreOS does not control the fate of the project, as the majority of maintainers are outside the company.
We would absolutely welcome an engineer from the Docker team to get involved with the project. The obvious place to start would be to provide feedback on the current state of the specification and where it might not work well for the Docker project, then work with the existing maintainers to resolve those issues in the spec, working towards ultimately implementing appc support inside the Docker Engine itself. With several implementations of appc runtimes under active development today – including jetpack, kurma, and rkt – we already know that we can collectively agree on a spec and have functional independent implementations being developed in tandem. So having another group of engineers implementing the spec and participating in the discussion to make it as well-defined as possible would be a welcome addition.
Our goal is to have a community of people who all want to see containers be successful in the variety of creative ways they can be used, and to ensure there is a shared standard that meets the needs of everyone.
InfoQ: The 'Appc as a runtime option in the docker engine' PR opened on the Docker GitHub repository did not generate much discussion (other than on the associated proof-of-concept code PR). Were you surprised by this?
Boulle: I would say we were surprised, because we felt strongly that our overall goals for the container ecosystem are aligned and we had hoped to work through the issues in the open and be able to coalesce on a common solution. We could have been more clear in the first incarnation of the PR (#10776) and expressed more unequivocally that it was purely a proof-of-concept to guide discussion on the issue (#10777) you mentioned and demonstrate our willingness to contribute code. Unfortunately, the code became the focal point instead of the discussion we were hoping to foster. So we have to accept some responsibility for not being as clear as we could have been from the beginning.
InfoQ: Some developers are suggesting that the container space is still too young to be attempting standardisation. Do you have any thoughts on this?
Boulle: While the container space is young, we do believe there are certain problems that can be standardized and shared between multiple implementations today, and these are the things we are trying to define in the appc specification. The issues around image format, environment, naming, discovery, versioning, compression, and cryptography are well-understood problems that a team of engineers can implement independently and debate separately.
Reaching consensus on these issues means that an application developer can write a single appc container image and run it consistently anywhere that adheres to the spec. This is one of the primary reasons why it is important to reach agreement around a container standard: to make developers' lives easier, and allow them to use a variety of tools interoperating with a shared format. Then, when it's time to run software in production, an operations engineer can choose any runtime that adheres to the spec and have confidence the application will run in a predictable way.
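As a rough sketch of that workflow, assuming the spec’s actool utility and rkt are both installed (the image name and directory layout here are hypothetical):

```sh
# Hypothetical example: package an application as an ACI and run it with rkt.
# 'hello-layout/' is assumed to contain the image manifest ('manifest')
# and the application's root filesystem ('rootfs/').
actool validate hello-layout/manifest   # check the manifest against the spec
actool build hello-layout/ hello.aci    # package the layout into an ACI
actool validate hello.aci               # validate the finished image
rkt run hello.aci                       # execute the image with rkt
```

Any other runtime that implements the spec, such as Kurma or jetpack, should then be able to run the same hello.aci without modification.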
Now, it's true that trying to standardize everything about a containerized infrastructure – especially while the field is so nascent – would be difficult or perhaps even impossible. But appc doesn't attempt to standardize things like the runtime or networking in minute detail. Compared to a technology like virtual machines, containers consist of a spectrum of implementation decisions, and in the spec we abstract that away to a higher level. We intend to keep the appc spec generic and focused on the areas that it is today; things like externally-facing APIs, management of resource hierarchies, network implementations, and more are definitely out of scope.
InfoQ: Can you provide any guidance on when the appc spec or rkt will become generally available at v1?
Boulle: CoreOS has a "release early, release often" philosophy and rkt is still a young project. That said, we are hard at work getting it to a stage where we feel comfortable calling it production ready. One thing to note is that pre-v1.0 we are intentionally trailing rkt's major/minor version number behind that of the spec (to be clear about which version it implements), so the rkt release versions are not representative of its pace of development or maturity. This also obviously implies that rkt is dependent on the stabilisation of the spec before we can announce 1.0!
As for the spec itself: with the recent appointment of additional maintainers from outside CoreOS, we now have the collective help of others to make it ready. The last couple of weeks have seen progress ramp up significantly with these additional contributors, and we are starting to become very focused on what is necessary to reach 1.0. When we reach this milestone, it will be based on the collaborative work of the maintainers on the governance board, as well as the broader appc community.
The CoreOS blog concludes by inviting new companies to participate within the appc community, and by encouraging developers to get involved with the appc specification and implementations such as rkt by joining the appc mailing list and the discussions on GitHub.