The Service Mesh Interface (SMI) was recently announced as an industry-wide initiative, led by Microsoft, that seeks to define a set of common, portable APIs giving developers interoperability across different service mesh technologies. The announcement of the SMI was prominently featured in the keynote at KubeCon EU.
Service mesh technology is gaining prominence with the explosion in the use of microservices, containers, and orchestrators like Kubernetes. There is a proliferation of service mesh technologies, with many vendors pushing diverse platforms to facilitate application development. However, there is a danger of vendor lock-in, with application developers losing portability when moving from one platform to another.
At a high level, the Service Mesh Interface provides:
- A standard interface for service meshes on Kubernetes
- A basic feature set for the most common mesh use cases
- Flexibility to support new mesh capabilities over time
- Space for the ecosystem to innovate with mesh technology
The Service Mesh Interface (SMI) borrows from the principles of Kubernetes resources like Ingress, which defines a common API but allows vendors to deliver their own best-of-breed implementations. Programming against these common APIs gives applications portability from one service mesh implementation to another.
InfoQ caught up with Lachlan Evenson, principal program manager at Microsoft, regarding the announcement of SMI. The discussion also covered the ecosystem of service meshes on Kubernetes, with quotes from Gabe Monroy's blog.
InfoQ: In the KubeCon EU keynote, there was talk about “smart endpoints and dumb pipes”, and how with the evolution of microservices the pipes now need to become a lot smarter. Can you summarize what this means?
Lachlan Evenson: For years, the mantra for network architecture was to keep your network pipes as dumb as possible and build smarts into your applications. The network’s job is to forward packets, while any logic for encryption, compression, or identity lives inside the network endpoints. The Internet is premised on that mantra, so you could say it has worked fairly well.
But today, with the explosion of microservices, containers, and orchestration systems like Kubernetes, engineering teams are faced with securing, managing, and monitoring an increasing number of network endpoints. Service mesh technology provides a solution to this problem by making the network smarter, much smarter. Instead of teaching all your services to encrypt sessions, authorize clients, emit reasonable telemetry, and seamlessly shift traffic between application versions, service mesh technology pushes this logic into the network, controlled by a separate set of management APIs.
InfoQ: Can we talk first about the relationship between microservices and service meshes? Based on the hallway discussions at KubeCon EU, does it appear that the SMI might be too early in the adoption lifecycle for microservices in general, and Kubernetes in particular?
Evenson: We see a proliferation of service mesh technologies, with many vendors providing new and exciting options for application developers. The problem is that developers who turn to mesh technologies must choose a provider and write directly to its APIs; they become locked into that service mesh implementation. Without generic interfaces, developers lose portability and flexibility, and limit their ability to benefit from innovation across the broader ecosystem.
In addition to what Gabe Monroy mentioned in the blog, it’s the right time for SMI because, like other common abstractions in the community (CNI, CRI, Ingress, NetworkPolicy), it focuses on solving a developer need rather than dictating the technical implementation. The creation of a common set of abstractions in the service mesh domain allows an ecosystem of implementations to grow, and gives developers and cluster admins the freedom to choose best-in-class implementations.
InfoQ: Can you provide more technical details on the Service Mesh Interface, and explain whether this is aimed primarily at developers or platform providers? If it is aimed at the latter, does it impact developers in any way?
Evenson: It’s aimed at both in different ways. It allows platform providers to supply a single abstraction for the most common service mesh features whilst maintaining the freedom to choose the best implementation for the job. This allows the platform to remain nimble and maintain a layer of portability whilst providing users access to the most requested service mesh features.
From a developer's perspective, they now have a set of APIs that represent what they want from service meshes. These common abstractions remove the implementation-specific idiosyncrasies. For example, “I want to be able to offer canary deployments for my service” is a common ask, and rather than having to program discrete lower-level resources, developers can simply use the SMI TrafficSplit API, which handles this complexity on their behalf.
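To make that concrete, here is a minimal sketch (not part of the interview) of what driving a canary with the TrafficSplit API could look like, using the official Kubernetes Python client to create the custom resource. The service names, the 90/10 weighting, and the v1alpha1 field layout are illustrative assumptions; the exact schema depends on which revision of the SMI split specification a given mesh implements.

```python
# Hedged sketch: create an SMI TrafficSplit custom resource that sends roughly
# 90% of traffic to website-v1 and 10% to the website-v2 canary. Names and the
# v1alpha1 schema are assumptions for illustration only.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster
api = client.CustomObjectsApi()

traffic_split = {
    "apiVersion": "split.smi-spec.io/v1alpha1",
    "kind": "TrafficSplit",
    "metadata": {"name": "website-canary", "namespace": "default"},
    "spec": {
        # The root service that clients address.
        "service": "website",
        # Weighted backends: most traffic to v1, a small share to the canary.
        "backends": [
            {"service": "website-v1", "weight": "900m"},
            {"service": "website-v2", "weight": "100m"},
        ],
    },
}

api.create_namespaced_custom_object(
    group="split.smi-spec.io",
    version="v1alpha1",
    namespace="default",
    plural="trafficsplits",
    body=traffic_split,
)
```

Promoting the canary then amounts to patching the backend weights on this one resource, rather than reconfiguring mesh-specific routing rules.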
InfoQ: In the keynote, the Service Mesh Interface was mentioned as analogous to Kubernetes resources like Ingress, which was introduced in Kubernetes 1.1. To be fair, the scope of Ingress functionality is very specific. The SMI, on the other hand, is trying to provide a common interface across multiple diverse platforms (Istio, Linkerd, Consul, etc.) which have a lot more functionality than a network ingress. How will this challenge be overcome?
Evenson: Instead of approaching the service mesh ecosystem looking purely at raw functionality, we worked with enterprise customers to understand their top asks when looking at service mesh. We’re letting these common enterprise asks lead the abstractions we create in SMI.
InfoQ: Istio has gained quite a lot of adoption within the service mesh space, so how much of the internals (or concepts) from Istio will be present in the SMI? Or is the SMI intended purely as an abstraction over core functionality provided by a service mesh?
Evenson: The latter. SMI will provide the most common abstractions, based on what we hear from enterprise customers about why they are using service meshes.
InfoQ: Can you talk about the community behind SMI, and also the roadmap?
Evenson: SMI is an open project started in partnership with Microsoft, Linkerd, HashiCorp, Solo.io, Kinvolk, and Weaveworks; and with support from Aspen Mesh, Canonical, Docker, Pivotal, Rancher, Red Hat, and VMware.
We have joined forces with HashiCorp, Buoyant, Solo.io, and others to build initial specifications for the top three service mesh features we hear about from our enterprise customers:
- Traffic policy – apply policies like identity and transport encryption across services
- Traffic telemetry – capture key metrics like error rate and latency between services
- Traffic management – shift and weight traffic between different services
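As a rough illustration of the first item, traffic policy, the sketch below applies an SMI access-control policy that allows only a hypothetical prometheus service account to call GET /metrics on a destination workload. The resource and service-account names are invented, and the field layout follows the early v1alpha1 draft of the access and specs APIs, which later revisions of the specification reorganize.

```python
# Hedged sketch: define which HTTP routes exist (HTTPRouteGroup) and who may
# call them (TrafficTarget). Schema and names are illustrative assumptions
# based on the early v1alpha1 draft of the SMI access-control spec.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Describes the HTTP routes the policy below refers to.
route_group = {
    "apiVersion": "specs.smi-spec.io/v1alpha1",
    "kind": "HTTPRouteGroup",
    "metadata": {"name": "metrics-routes", "namespace": "default"},
    "matches": [
        {"name": "metrics", "pathRegex": "/metrics", "methods": ["GET"]},
    ],
}

# Grants the sources (prometheus) access to the destination (service-a) for
# the named routes; other traffic to the destination is not authorized.
traffic_target = {
    "apiVersion": "access.smi-spec.io/v1alpha1",
    "kind": "TrafficTarget",
    "metadata": {"name": "scrape-metrics", "namespace": "default"},
    "destination": {
        "kind": "ServiceAccount",
        "name": "service-a",
        "namespace": "default",
        "port": 8080,
    },
    "specs": [
        {"kind": "HTTPRouteGroup", "name": "metrics-routes", "matches": ["metrics"]},
    ],
    "sources": [
        {"kind": "ServiceAccount", "name": "prometheus", "namespace": "default"},
    ],
}

for plural, group, body in [
    ("httproutegroups", "specs.smi-spec.io", route_group),
    ("traffictargets", "access.smi-spec.io", traffic_target),
]:
    api.create_namespaced_custom_object(
        group=group,
        version="v1alpha1",
        namespace="default",
        plural=plural,
        body=body,
    )
```

The point of the abstraction is that the same two resources should express this policy whichever conformant mesh enforces it underneath.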
This is just the beginning of what we hope to achieve with SMI. With many exciting mesh capabilities in development, we fully expect to evolve SMI APIs over time, and look forward to extending the current specification with new capabilities.
We are working with the founding partners to identify other APIs that should be defined, and are also building the roadmap for the upcoming project milestones.
In summary, service mesh technology has garnered significant attention from application developers. SMI borrows from time-tested principles of application development and aims to provide a unifying specification that lets multiple vendors and platform providers compete on implementations, while at the same time providing portability for application developers via a common set of interfaces.
More detailed information on SMI is available at smi-spec.io, with the specification itself on GitHub.