HashiCorp has released Consul 1.14, adding new features that simplify deployments and improve the resiliency of its service mesh platform. The release includes Consul Dataplane, an improved architecture for deploying Consul on Kubernetes. The cluster peering model, introduced as a beta feature in 1.13, has moved into full general availability.
The introduction of Consul Dataplane removes the need to deploy the Consul client agent when running on Kubernetes. Instead, Consul Dataplane is deployed as a sidecar container alongside the workload containers in each pod. Consul Dataplane is responsible for discovering and watching the Consul servers available to the pod. It also manages the initial Envoy bootstrap configuration and the execution of the Envoy process.
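For illustration, a minimal sketch of Helm values for a Consul 1.14 installation under this architecture might look like the following. The keys shown exist in the official hashicorp/consul Helm chart, but the specific values and the cluster name are assumptions, and exact defaults vary by chart version.

```yaml
# values.yaml -- illustrative sketch of a Consul 1.14 install via the Helm chart.
# In this chart generation no client-agent DaemonSet is deployed; the connect
# injector adds a consul-dataplane sidecar to each mesh-enabled pod instead.
global:
  name: consul
  datacenter: dc1          # assumed datacenter name
server:
  replicas: 3              # Consul servers remain; only they need future upgrades
connectInject:
  enabled: true            # inject the dataplane/Envoy sidecar into annotated pods
```

Installing with `helm install consul hashicorp/consul --values values.yaml` would then run the servers plus per-pod dataplane sidecars, with no client agents on the nodes.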
This new architecture has several benefits. Consul Dataplane does not use the gossip protocol and instead requires only a single gRPC connection out to the Consul servers. It also does not require peered networks between the Consul servers and the runtimes hosting the workloads. Because Consul Dataplane does not use the gossip protocol, there is no gossip encryption key to configure. There is also no longer a need to distribute separate ACL tokens to each client agent.
Because Consul Dataplane runs as a sidecar alongside the workloads, it supports a broader set of runtimes, including Kubernetes environments such as Google Kubernetes Engine (GKE) Autopilot and AWS Fargate. Finally, Consul Dataplane simplifies upgrades: new Consul versions no longer require upgrading the various Consul client agents, so only the Consul servers need to be upgraded.
HashiCorp has indicated that support for Consul clients will remain for non-Kubernetes deployments. This support will continue to include both service discovery and service mesh use cases.
This release also sees the cluster peering model move to general availability. Introduced as a beta feature in Consul 1.13, cluster peering provides an alternative to WAN federation for multi-datacenter deployments. The 1.13 release did not include support for any cross-datacenter traffic management features.
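On Kubernetes, a peering is typically established with the PeeringAcceptor and PeeringDialer custom resources. The sketch below assumes two clusters named cluster-01 and cluster-02 and is illustrative rather than a complete walkthrough; field names may vary slightly between consul-k8s versions.

```yaml
# In cluster-01 (the acceptor): generates a peering token into a Kubernetes secret.
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringAcceptor
metadata:
  name: cluster-02           # the name this cluster will use for its peer
spec:
  peer:
    secret:
      name: peering-token
      key: data
      backend: kubernetes
---
# In cluster-02 (the dialer): after copying the peering-token secret across,
# this resource establishes the connection back to cluster-01.
apiVersion: consul.hashicorp.com/v1alpha1
kind: PeeringDialer
metadata:
  name: cluster-01
spec:
  peer:
    secret:
      name: peering-token
      key: data
      backend: kubernetes
```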
Version 1.14 moves cluster peering into full general availability and adds support for advanced traffic management capabilities, including blue/green deployments, A/B testing, and service failover.
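As an illustration of the traffic management primitives involved, a blue/green style split can be expressed with Consul's service-resolver and service-splitter config entries. The service name web, the subset filters, and the 90/10 weights below are assumptions for the sake of example, not a configuration taken from the release.

```yaml
# Define two subsets of the "web" service based on instance metadata.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: web
spec:
  subsets:
    blue:
      filter: 'Service.Meta.version == v1'
    green:
      filter: 'Service.Meta.version == v2'
---
# Send 90% of traffic to the blue subset and 10% to the green subset.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceSplitter
metadata:
  name: web
spec:
  splits:
    - weight: 90
      service: web
      serviceSubset: blue
    - weight: 10
      service: web
      serviceSubset: green
```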
Cluster peering has many advantages over WAN federation for multi-datacenter deployments. With cluster peering, each cluster, datacenter, or enterprise admin partition is fully independent, including full control over which services can be accessed from which peered cluster. Additional improvements include support for more complex datacenter topologies, such as hub-and-spoke, and support for enterprise admin partitions in multiple connected datacenters.
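That control is expressed explicitly with an exported-services config entry, which lists the services a peer is allowed to consume. A minimal sketch, assuming a local service named api being shared with a peer named cluster-02, might look like the following.

```yaml
# Export the local "api" service to the peer named cluster-02.
# In the community edition the resource must be named "default"; with
# enterprise admin partitions it is named after the exporting partition.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ExportedServices
metadata:
  name: default
spec:
  services:
    - name: api
      consumers:
        - peer: cluster-02
```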
The release also includes enhancements to service failover. Failover targets can now exist in WAN-federated datacenters, cluster peers, and local admin partitions. In addition, a failover target may now use a different service name, service subset, or namespace than the unhealthy local service.
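These failover targets are declared on a service-resolver. The sketch below assumes a local service named api, a peer named cluster-02, and a hypothetical differently named backup service; exact field names may differ between the HCL config entry and the Kubernetes CRD shown here.

```yaml
# Fail over the "api" service when its local instances are unhealthy:
# first try the same service in the peered cluster, then a differently
# named local service.
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceResolver
metadata:
  name: api
spec:
  failover:
    '*':                       # applies to all subsets of the service
      targets:
        - peer: cluster-02     # new in 1.14: fail over to a cluster peer
        - service: api-backup  # fail over to a service with a different name
```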
Consul 1.14 is now generally available. More information about the release can be found on the HashiCorp blog or within the Consul documentation.