Ben Firshman

Orchestrating Docker with Machine, Swarm and Compose

Back in December, we announced our new tools for orchestrating distributed apps: Machine, Swarm, and Compose.

Today the first versions of these tools are available to download. They’re not ready for production yet, but we’d really like you to try them out and tell us what you think.

Machine takes you from “zero-to-Docker” with a single command. It lets you easily deploy Docker Engines on your computer, on cloud providers, and in your own data center. Read more and download it in Machine’s blog post.

Swarm is native clustering for Docker containers. It pools several Docker Engines together into a single, virtual host. Point the Docker client or a third-party tool (e.g., Compose, Dokku, Shipyard, or Jenkins) at Swarm and it will transparently scale your containers across multiple hosts. A beta version of Swarm is available now, and we’re working on integrations with Amazon Web Services, IBM Bluemix, Joyent, Kubernetes, Mesos, and Microsoft Azure. Read more in Swarm’s blog post.

Compose is a way of defining and running multi-container distributed applications with Docker. Back in December we opened up its design to the community. In response to that feedback, Compose is based on Fig, a tool for running development environments with Docker. Read more and download it in Compose’s blog post.


All Together Now

The Docker Engine works really well for packaging up simple, single-container apps, making them much easier to build, deploy and move between providers. But we hear all the time that you want to be able to define and deploy complex applications consisting of multiple services. If you want to do this with Docker at the moment, it’s not clear how; you probably have to cobble something together with shell scripts to make it work.

This isn’t ideal. We’d like to share some of our ideas for how distributed applications will work with Docker, and how the tools we’re releasing today will help with that.

We’re hearing that you want your distributed applications to be:

  • Portable across environments: You want to be able to define how your application will run in development, and then run it seamlessly in testing, staging and production.
  • Portable across providers: You want to be able to move your application between different cloud providers and your own servers, or run it across several providers.
  • Composable: You want to be able to split up your application into multiple services.

The tools we’re releasing today are the start of a platform that will make all this possible.

First, you use Machine to create hosts with Docker Engines already installed. Machine can create these hosts on your computer, on cloud providers, and/or inside your own data center.
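
For example, creating hosts with Machine might look like this. This is a sketch based on the current beta: the `docker-machine` binary name, driver names, and flags are assumptions from the beta docs, and the host names are illustrative:

```shell
# Create a local VirtualBox VM with Docker Engine already installed.
docker-machine create --driver virtualbox dev

# Create a cloud host the same way; only the driver (and its
# credentials, e.g. an access token in the environment) changes.
docker-machine create --driver digitalocean staging

# Point the local Docker client at the new host, then use it as usual.
eval "$(docker-machine env dev)"
docker ps
```

The point is that every host, local or remote, is created and addressed the same way, so the Docker client doesn’t care where the Engine actually runs.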

Next, you pool these Engines together using Swarm. Swarm connects Docker Engines together into a single, virtual Engine. It’s a bit like a really big computer for running Docker containers. You no longer have to worry about managing servers – they are all pooled together and a clustering system manages the resources for you.
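
Concretely, forming a Swarm follows a create/manage/join pattern. A hedged sketch based on the beta docs, using Docker’s hosted discovery service; the token and addresses below are placeholders:

```shell
# Generate a unique cluster token via the hosted discovery service.
docker run --rm swarm create
# (prints a cluster token, used in the commands below)

# On the node that will act as the Swarm manager:
docker run -d -p 2375:2375 swarm manage token://<cluster_token>

# On each Engine that should join the pool:
docker run -d swarm join --addr=<node_ip:2375> token://<cluster_token>

# Any Docker client pointed at the manager now sees one big virtual Engine.
docker -H tcp://<manager_ip>:2375 info
```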

Swarm comes with a built-in clustering system, but if you want to integrate with systems you already have running, in the future Swarm will also work on top of Mesos or Kubernetes.

Also, if you’re hosting in the cloud and don’t want to set up your own Swarm, we’re working with Amazon EC2 Container Service, IBM Bluemix Container Service, Joyent Smart Data Center and Microsoft Azure to integrate their offerings with Swarm.

Once you have a Swarm, you can then run your application on top of it. You define your multi-container application with Compose (e.g., a web server container connected to a database container). Compose can then run this application on anything that can speak the Docker API. In development, you can run it directly on a Docker host running on your computer. When you need to take your app to staging and production, you can then point Compose at your Swarm cluster.
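
As a sketch, the web-plus-database example might be defined in a Fig-style `docker-compose.yml` like the following (the image name and port mapping are illustrative assumptions):

```yaml
web:
  build: .        # build the web server image from the local Dockerfile
  ports:
    - "8000:8000"
  links:
    - db          # connect the web container to the database container
db:
  image: postgres
```

Running `docker-compose up` against a local Engine starts both containers on your computer; pointing the Docker client at a Swarm manager instead runs the same file across the cluster.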

So, if you define your application with Compose, you can run it across any environment or provider: your computer, a Swarm in the cloud, a Swarm in your own data center, a Swarm running on top of Mesos or Kubernetes – anywhere that speaks the Docker API.

To see it in action, check out this video.


Get Involved

This design is still being worked on and we’d really like your input. As we do with a lot of things at Docker, we’re building all this out in the open so people can start using it. This lets us see how users are working with it so we can iterate development quickly. And, because it’s open source, you can get directly involved in building it.

Head over to the installation instructions for Machine, Swarm and Compose to download them. They can be used as standalone tools, but if you’re feeling adventurous, they also work together (albeit in a very experimental state). Check out the documentation for the Machine + Swarm integration and the Compose + Swarm integration.

If you have any comments on the high-level design, then the docker-dev mailing list is the place to go. The actual day-to-day development of the projects is happening on the Machine, Swarm and Compose GitHub repositories.

If you’re the sort of person who used a Docker release before 1.0, then this is the time for you to get involved!




