At DockerCon last week, Docker released version 1.12 of the core product, Docker Engine. The biggest new feature is that Docker Swarm is no longer a separate tool - now it's built into Docker Engine, making it easier to combine multiple Docker hosts into a single logical unit for increased scale and reliability. Docker Captain Adrian Mouat believes the new swarm mode makes Docker “a serious competitor in the orchestration space”.
Having Docker Swarm integrated into Docker Engine is a major advantage, but it's an opt-in feature: you only use it if you want it. You can install, run and upgrade Docker 1.12 in exactly the same way as previous versions, and it is backwards compatible with existing container images and tools.
Running Docker on a single host and orchestrating containers with Docker Compose works as before. You can even use Docker 1.12 engines with an existing Docker Swarm. Unless you explicitly create a swarm using the new engine, the runtime behavior is the same as previous releases.
The original Docker Swarm product came in kit form, with no core feature set built in. The Docker Swarm runtime itself ran as a container on each of the nodes, and you needed multiple additional technologies, like Consul or etcd for discovery and Nginx for load balancing. Your swarm would run a mixture of infrastructure containers alongside your own app containers.
Setting up an 'old' swarm wasn't straightforward either, because the discovery component had to be in place before you created the swarm, but then you would want the discovery running as part of the swarm, so you had a chicken-and-egg problem to resolve before you could do anything else (Jacob Blain Christen's article “Toward a Production-Ready Docker Swarm Cluster with Consul” explains this well).
With swarm mode you create a swarm with the 'init' command, and add workers to the cluster with the 'join' command. The commands to create and join a swarm take only a second or two to complete. Mouat said “Comparing getting a Kubernetes or Mesos cluster running, Docker Swarm is a snap”.
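As a sketch of that two-command setup (the manager address 192.168.0.10 is a placeholder, and the worker token is printed by the init command):

```shell
# on the first node: create the swarm and make this node a manager
docker swarm init --advertise-addr 192.168.0.10

# on each worker node: join using the token printed by 'init'
docker swarm join --token <worker-token> 192.168.0.10:2377
```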
Communication between nodes on the swarm is all secured with Transport Layer Security (TLS). For simple setups, Docker 1.12 generates self-signed certificates to use when you create the swarm, or you can provide certificates from your own certificate authority. Those certificates are only used internally by the nodes; any services you publicly expose use your own certs as usual.
From DockerCon, Nigel Poulton shared a comparison of securing a swarm with previous versions and with 1.12.
Docker recommend running all swarm nodes inside the same L3 subnet, but depending on your environment you could subdivide them, so that nodes running public-facing containers are segregated from internal nodes.
This segregation is the same approach you might have taken with the standalone swarm product, which I've documented with regard to Microsoft's cloud in my post “Production Docker Swarm on Azure”.
The self-awareness of the swarm is the most significant change. Every node in the swarm can reach every other node, and is able to route traffic where it needs to go. You no longer need to run your own load balancer and integrate it with a dynamic discovery agent, using tools like Nginx and Interlock.
Now if a node receives a request which it can't fulfil, because it isn't running an instance of the container that can process the request, it routes the request on to a node which can fulfil it. This is transparent to the consumer: all they see is the response to their request, with no sign of any redirection that happened within the swarm.
That feature, which Docker calls the routing mesh, supports external load balancing. You put a public-facing load balancer at the front of your swarm and configure it as the single entry point for all the services. The public load balancer distributes incoming traffic blindly among the swarm nodes, and the node which receives the request intelligently re-distributes any traffic it can't handle itself. The Docker Core Engineering team explains that the routing mesh uses core Linux functionality - IPVS, a “load balancer that’s been in the Linux kernel for more than 15 years”.
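To see how the routing mesh is used in practice: publishing a port when you create a service makes that port answer on every node in the swarm, whether or not the node is running one of the service's containers. A sketch, where the service name 'web' and the nginx image are illustrative:

```shell
# create a service with three replicas, published on port 80 of every node
docker service create --name web --replicas 3 --publish 80:80 nginx

# port 80 now answers on any node in the swarm; a node without a 'web'
# container routes the request over the mesh to one that has it
```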
A combination of the routing mesh and the scheduler ensures that a failed node doesn't cause a service outage. If a node goes down, the load balancer won't send it any traffic, and if the loss of the node leaves a service below its required number of replicas, the scheduler spins up new replicas on other nodes.
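You can watch the scheduler at work with the service commands. A brief sketch, assuming a service named 'web' already exists:

```shell
# ask for five replicas; the scheduler spreads them across healthy nodes
docker service scale web=5

# list the replicas and the nodes they landed on - after a node failure,
# replacement replicas appear on the surviving nodes
docker service ps web
docker node ls
```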
Docker Swarm is still the name for native Docker clusters, but with version 1.12 swarm mode is an integral part of the Docker Engine, not a standalone product. You get discoverability, and you can scale your swarm with multiple managers for reliability, and as many workers as you need.
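Adding a manager follows the same join workflow as adding a worker, just with a different token. A sketch, with a placeholder address and token:

```shell
# on an existing manager: print the join command for new managers
docker swarm join-token manager

# on the new node: run the printed command, e.g.
docker swarm join --token <manager-token> 192.168.0.10:2377
```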
In this first release, Docker Engine is ahead of the rest of the Docker products - you can't yet create a new-style swarm with Docker Machine, or deploy services using Docker Compose. But the Docker community works fast, so expect those integrations to come soon.