Andrea Luzzardi

Announcing Swarm 1.0: Production-ready clustering at any scale

Today is a big milestone for Swarm: we’re taking it out of beta and releasing version 1.0, ready for running your apps in production.

Swarm is the easiest way to run Docker applications at scale on a cluster. It turns a pool of Docker Engines into a single, virtual Engine. You don’t have to worry about where to put containers, or how they talk to each other – it just handles all that for you.

We’ve spent the last few months tirelessly hardening and tuning it, and in combination with multi-host networking and the new volume management in Docker Engine 1.9, Swarm is ready for running your apps in production at any scale. In our tests on EC2, we ran a cluster of 1,000 nodes and 30,000 containers, and Swarm consistently scheduled containers in under half a second. Not even breaking a sweat! Keep an eye out for a blog post soon with the full details.

Before we even reached 1.0, Swarm was already being used for all sorts of things – everything from O’Reilly building authoring tools to the Distributed Systems Group at Eurecom doing scientific research.

Rackspace has built their new container service, Carina, on top of Swarm. Simon Jakesch, Product Manager for Carina, explained why they chose it: “We are using Swarm to power Carina because it’s easy to use and has Docker-native APIs. Our users can scale up their apps in production using the same tools and APIs they use in development.”


What’s new

It’s not all hardening in this release – we’ve also got a few extra things we think you’re going to like:

Multi-host networking: Docker Engine 1.9 features a new networking system, and Swarm integrates fully with this. Any networks you create in Swarm will seamlessly work across multiple hosts. See the documentation for more details.
Persistent storage support: Engine 1.9 has a new volume management system which makes it much easier to create, manage, and attach volumes to containers. If you use a volume driver that works across multiple hosts (such as Flocker or Ceph) you’ll be able to store persistent data on your Swarm regardless of where containers get scheduled on your cluster. Read more about volume management in the Docker 1.9 blog post.
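To make the networking feature concrete, here is a minimal sketch of creating an overlay network through a Swarm manager. It assumes a manager listening on `tcp://<manager-ip>:3375` and Engines backed by a key-value store (required for overlay networking); the addresses and the network name `my-net` are placeholders.

```shell
# Create an overlay network that spans every node in the Swarm:
docker -H tcp://<manager-ip>:3375 network create -d overlay my-net

# Containers started on any node can now reach each other by name,
# no matter which host they get scheduled on:
docker -H tcp://<manager-ip>:3375 run -d --net=my-net --name=db redis
docker -H tcp://<manager-ip>:3375 run -d --net=my-net --name=web nginx
```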
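And a sketch of the volume workflow, assuming a multi-host volume plugin (Flocker here) is installed on every node; the volume name `app-data` and the image name are illustrative.

```shell
# Create a volume backed by a multi-host driver:
docker -H tcp://<manager-ip>:3375 volume create -d flocker --name app-data

# Mount it in a container; because the driver works across hosts,
# the data is reachable wherever the container gets scheduled:
docker -H tcp://<manager-ip>:3375 run -d -v app-data:/var/lib/data my-app
```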

There are a whole bunch of other things too. Check out the full release notes for details.


Getting started

So what’s the best way to get up and running with Swarm? First, you want to get Swarm set up with multi-host networking. You can then use Compose to define and deploy a multi-container app on your Swarm. On its own, Compose works great for running development environments and CI tests, but you can also use it in combination with Swarm to scale up your apps. Check out the guide here to get started.
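Once the Swarm is up, a Compose file describes the app itself. A minimal, illustrative example (service and image names are placeholders):

```yaml
web:
  image: nginx
  ports:
    - "80:80"
redis:
  image: redis
```

Point your client at the Swarm manager (e.g. `export DOCKER_HOST=tcp://<manager-ip>:3375`), then `docker-compose up -d` deploys across the cluster and `docker-compose scale web=5` scales it up.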
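The setup described above looks roughly like this. This is a sketch, not a substitute for the guide: `<consul-ip>`, `<node-ip>`, and `<manager-ip>` are placeholders for your own hosts, and Consul is just one of the supported key-value stores.

```shell
# 1. Run a key-value store (Consul here) shared by Engine and Swarm:
docker run -d -p 8500:8500 progrium/consul -server -bootstrap

# 2. On every node, start the Engine daemon pointed at the store
#    (this is what enables multi-host networking):
docker daemon -H tcp://0.0.0.0:2375 \
  --cluster-store=consul://<consul-ip>:8500 \
  --cluster-advertise=<node-ip>:2375

# 3. Start the Swarm manager:
docker run -d -p 3375:3375 swarm manage -H tcp://0.0.0.0:3375 \
  consul://<consul-ip>:8500

# 4. Join each node to the cluster:
docker run -d swarm join --advertise=<node-ip>:2375 consul://<consul-ip>:8500

# 5. Point your Docker client at the manager and use the whole
#    cluster as if it were a single Engine:
docker -H tcp://<manager-ip>:3375 info
```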

As well as entire distributed apps, Swarm is also particularly good at things like Jenkins, data processing, batch jobs, and loads of other stuff. Keep an eye out for some more guides over the next few weeks.

Join us for our online meetup on Swarm version 1.0 with Alexandre Beslic on November 11 – this is a great opportunity to learn more and get answers to your Swarm questions!


Learn More about Docker

• New to Docker? Try our 10 min online tutorial
• Sign up for a free 30 day trial of Docker
• Share images, automate builds, and more with a free Docker Hub account
• Read the Docker 1.9 Release Notes
• Subscribe to Docker Weekly
• Register for upcoming Docker Online Meetups
• Attend upcoming Docker Meetups
• Register for DockerCon Europe 2015
• Start contributing to Docker



10 thoughts on “Announcing Swarm 1.0: Production-ready clustering at any scale”

  1. Hello,

    This is very interesting. Is there any documentation or video on how we can setup Swarm on AWS ECS? If AWS ECS is not supported, how do we set it up on AWS in general?


  2. That volume subcommand is critical to any description of Docker as production-ready. Docker defaults to leaving volumes dangling on the filesystem and there was previously no supported way to remove them. Really glad to see this make it in.