Scott Johnston

Announcing Docker Machine, Swarm, and Compose for Orchestrating Distributed Apps

Feb 2015 Updates:

1) Orchestrating Docker with Machine, Swarm and Compose

2) Announcing Docker Machine Beta

3) Scaling Docker with Swarm

4) Announcing Docker Compose


As users start exploring Docker and Docker Hub, they typically start by Dockerizing some apps, incorporating Docker into their build-test pipeline, creating a Docker-based development environment, or trying out one of the other half-dozen common use cases.  In these use cases and others, we’ve heard from many community members about how Docker has sped up development-to-deploy cycle times and eliminated the friction of moving apps from laptops to data centers, or from one cloud to another.


Da capo

After getting their feet wet with simple, single-container apps, we see many users start to employ Docker as a platform for more complex, distributed applications (a.k.a., cloud native, microservices architecture, twelve-factor apps), or for apps composed of multiple containers, typically running across multiple hosts.  Users’ motivations for exploring distributed apps vary, but some common themes are emerging. Users find them appealing because they are:

  • Portable.  Application owners want an application stack that is independent of the underlying infrastructure so they’ll have the freedom to deploy on the infrastructure of their choosing – and then move or scale without friction as conditions change.
  • Composable.  Application owners want apps composed of multiple smaller services so they can keep teams small and fast-moving, and assemble apps from trusted, secure components.
  • Dynamic.  Development-to-deployment cycles measured in months are no longer acceptable – application owners want cycle times measured in hours or even minutes.  Breaking down monolithic applications into multiple smaller services that can be updated independently reduces change-management risk and compresses cycle time.
  • Scalable.  To meet changes in demand while using infrastructure resources efficiently, application owners want to scale up and down frictionlessly across hosts, data centers, and even public clouds.

However, as users in the Docker community take the early steps on their path to distributed apps, we often hear questions like these:

  • “My singleton Docker containers are 100% portable to any infrastructure; but, how do I ensure my multi-container distributed app is also 100% portable, whether moving from staging to production, or across data centers, or between public clouds?”
  • “My organization has standardized on Docker; how do I retain this standard while still taking advantage of the breadth and depth of the tools in the Docker ecosystem?”
  • “How should I refactor my app delivery pipeline to support rapid iterations on the containers providing the services?”
  • “Breaking up my monolithic apps into microservices means there are many more components to keep track of and manage; what’s available in the Docker ecosystem to help with this?”

To help users along their distributed apps journey, today at DockerCon Europe we’re excited to announce three new Docker orchestration services: Docker Machine, Docker Swarm, and Docker Compose.  Each one covers a different aspect of the lifecycle for distributed apps.  Each is implemented with a “batteries included, but removable” approach: thanks to our orchestration APIs, each can be swapped out for an alternative implementation from an ecosystem partner designed for a particular use case.


Docker Machine

Docker Machine takes you from zero-to-Docker in seconds with a single command.

Before Docker Machine, a developer would need to log in to the host and follow installation and configuration instructions specifically for that host and its OS.  With Docker Machine, whether provisioning the Docker daemon on a new laptop, on virtual machines in the data center, or on a public cloud instance, the same, single command …

% machine create -d [infrastructure provider] [provider options] [machine name]

… gets the target host ready to run Docker containers.  Then, from the same interface, you can manage multiple Docker hosts regardless of their location and run any Docker command on them.

Furthermore, the pluggable backend of Docker Machine allows users to take full advantage of ecosystem partners providing Docker-ready infrastructure, while still accessing everything through the same interface.  This driver API works for provisioning Docker on a local machine, on a virtual machine in the data center, or on a public cloud instance.  In this Alpha release, Docker Machine ships with drivers for provisioning Docker locally with VirtualBox as well as remotely on DigitalOcean instances; more drivers are in the works for AWS, Azure, VMware, and other infrastructure.
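For example, here is a rough sketch of what that workflow can look like with the Alpha build (the host name dev is just an illustration, the exact subcommands and flags may differ between builds, and depending on how the daemon is secured you may also need to point the Docker client at the generated TLS credentials):

% machine create -d virtualbox dev         # provision a local VirtualBox VM running the Docker daemon
% machine ls                               # list every host Machine is managing
% export DOCKER_HOST=$(machine url dev)    # point the standard Docker client at the new host
% docker run busybox echo hello world      # run a container on it as usual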

Note that Docker Machine is a separate project from the Docker Engine.  To try out the Alpha build and contribute to Docker Machine and its drivers, go to its repository.


Docker Swarm

Docker Swarm is native clustering for Dockerized distributed apps. It picks up where Docker Machine leaves off by optimizing host resource utilization and providing failover services.  Specifically, Docker Swarm allows users to create resource pools of hosts running Docker daemons and then schedule Docker containers to run on top, automatically managing workload placement and maintaining cluster state.
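As a rough sketch of what this looks like with the Alpha build (the placeholder addresses, ports, and token are illustrative, and the exact flags may change – see the Swarm repo for the current syntax), a user creates a cluster, joins each host to it, and then points the standard Docker client at the Swarm manager:

% docker run --rm swarm create                                              # generates a cluster token
% docker run -d swarm join --addr=<node_ip>:2375 token://<cluster_token>    # run on each host in the pool
% docker run -d -p 2376:2375 swarm manage token://<cluster_token>           # start the Swarm manager
% docker -H tcp://<manager_ip>:2376 info                                    # the whole cluster behind one endpoint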

The default scheduler takes as input the resource requirements of the Docker container workloads and the resource availability of the hosts in the cluster, then uses bin packing to automatically optimize workload placement.  For example, here is how a user would schedule a redis container requiring 1 gig of memory:

% docker run -d -P -m 1g redis

To support specific requirements and policy-based scheduling, Docker Swarm provides standard and custom constraints.  For example, say that to ensure good I/O performance you want to run your MySQL container on a host with SSD storage.  You could express this as a constraint when scheduling the MySQL workload as follows:

% docker run -d -P -e constraint:storage=ssd mysql
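The storage label in this constraint is advertised by the host itself.  As a sketch, under the Docker CLI of this era the host with SSDs would start its daemon with a label (the exact daemon flag and invocation may vary by Docker version):

% docker -d --label storage=ssd    # daemon on the SSD-backed host; Swarm matches the constraint against this label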

In addition to resource optimization, Docker Swarm provides high availability and failover.  Docker Swarm continuously health-checks the hosts running the Docker daemons and, should one suffer an outage, automatically rebalances by moving and restarting the Docker containers from the failed host onto a new one.

One of the unique aspects of Docker Swarm is that it can scale with the lifecycle of the app.  This means that the developer can start with a “cluster” consisting of a single host and maintain a consistent interface as the app scales from one host to two, 20, or 200 hosts.

Finally, Docker Swarm has a pluggable architecture and ships “batteries included” with a default scheduler.  To this end, we’re excited to announce a partnership with Mesosphere to make their technology a “first class citizen” in Docker Swarm for landing Docker container workloads.  Stay tuned for the public API in the first half of 2015, which will allow swapping in a scheduler implemented by an ecosystem partner or even your own custom implementation.  Regardless of the underlying scheduler implementation, the interface to the app remains consistent, meaning that the app remains 100% portable.

The above just scratches the surface.  Like Docker Machine, Docker Swarm is a separate project from the Docker Engine.  If you want to learn more about Docker Swarm, including getting your hands on an Alpha build, head on over to its repo.

Docker Compose

Docker Compose is the last piece of the orchestration puzzle.  After provisioning Docker daemons on any host in any location with Docker Machine and clustering them with Docker Swarm, users can employ Docker Compose to assemble multi-container distributed apps that run on top of these clusters.

The first step to employing Docker Compose is to use a simple YAML file to declaratively define the desired state of the multi-container app:

containers:
  web:
     build: .
     command: python app.py
     ports:
     - "5000:5000"
     volumes:
     - .:/code
     links:
     - redis
     environment:
     - PYTHONUNBUFFERED=1
  redis:
     image: redis:latest
     command: redis-server --appendonly yes

This example shows how Docker Compose builds on existing container images.  Specifically, in this simple two-container app declaration, the first container is a Python app built each time from the Dockerfile in the current directory.  The second container is pulled from the redis Official Repo on the Docker Hub Registry.  The links directive declares that the Python app container depends on the redis container.

Now that it’s defined, starting your app is as easy as …

% docker up

With this single command, the Python container is automatically built from its Dockerfile and the redis container is pulled from the Docker Hub Registry.  Then, thanks to the links directive expressing the dependency between the Python and redis containers, the redis container is started first, followed by the Python container.
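For comparison, here is a sketch of roughly what that single command automates if done by hand with the plain Docker client (the image and container names web and redis are illustrative):

% docker build -t web .                                            # build the Python app image from the Dockerfile
% docker run -d --name redis redis:latest redis-server --appendonly yes
% docker run -d -p 5000:5000 -v $(pwd):/code -e PYTHONUNBUFFERED=1 \
    --link redis:redis --name web web python app.py                # start the app container, linked to redis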

Docker Compose is still a work-in-progress and we want your help to design it. In particular, we want to know whether or not you think this should be a part of the Docker binary or a separate tool. Head over to the proposal on GitHub to try out an alpha build and have your say.

Coda

All this is just the briefest introduction to Docker Machine, Docker Swarm, and Docker Compose.  We hope you’ll take a moment to try them out and give us feedback – these projects are moving quickly and we welcome your input!

We also wish to thank the many community members who have contributed their experience, feedback, and pull requests during the pre-Alpha iterations of these projects.  It’s thanks to you that we were able to make so much progress so quickly, and in the right direction.

Distributed apps offer many benefits to users – portability, scalability, dynamic development-to-deployment acceleration – and we’re excited by the role the Docker platform, community, and ecosystem are playing in making these apps easier to build, ship, and run.  We’ve got a ways to go, but we’re psyched by this start – join us and help us get there faster!

Happy Hacking,

  • The Docker Team



17 Responses to “Announcing Docker Machine, Swarm, and Compose for Orchestrating Distributed Apps”

  1. Paul Otto

    Compose is clearly where Fig went… how does it work with clusters? Prior to Docker, Inc. buying Fig, I was exploring modifications to Fig to be Marathon + Mesos aware…

  2. Tom Bortels

    This all looks awesome! One quick question – will this cover the case of having a container linked to a container on another node in the cluster (ie. Cross-host links?) I can’t tell for sure.

  3. Tom Robinson

    Glad to see Fig living on as Docker Compose!

  4. Christian Landgren

    Great news!

    The docker compose syntax looks very much like fig. Are there any differences or should it work like the fig up command?

    Also, do you have any plans on remote host links?

  5. Serkan Haytac

    Awesome job on all the tools. Docker Compose usage and the YAML file look identical to the Fig project. If it is built on top of Fig, have any new features been added to it?

  6. Amos Folarin

    I think this is an interesting one (the storm over this seems more about perception and semantics than anything https://news.ycombinator.com/item?id=8699957). I’m generally supportive of the Docker ecosystem growing out — as is their right, as long as everyone gets a fair hearing. It was only a few weeks ago that I came across the presentations of docker-hosts and docker-cluster (which have perhaps now re-emerged as docker-machine and docker-swarm), and they seemed quite far off from being “real things”. It is wise that a broad discussion take place over integration, one that may not have happened otherwise (the code for docker-cluster was not even exposed at the time).

  7. Jim Haughwout

    Hmm, does this mean I should shift away from Ansible? (We actually Dockerize from DEV to PROD)

  8. Manuel Ortiz

    Same question as Tom Bortels!

  9. Konstantin

    Hi! I didn’t get whether Swarm cares about the presence of the container image on the selected host. If not, it can lead to high and useless disk space consumption.

  10. Erkan Yanar

    There is no failover with docker swarm.

  11. Wei-Chih Ting

    Docker Machine saves my life! Thank you!

  12. Michael A. Smith

    (psst: s/Swam/Swarm)

  13. Thomas Decaux

    Great announcement! Fig is really amazing: it spins up 10 isolated but linked containers in a few lines of YAML, whereas it takes our sysadmin a week to do the same thing with VMware ^^

    An important piece is missing here: what about a cool web UI to monitor all these cool things?

  14. Docker终极指南 | 大话运维

    […] ANNOUNCING DOCKER MACHINE, SWARM, AND COMPOSE FOR ORCHESTRATING DISTRIBUTED APPS […]

  15. Docker in Light of the Socketplane Acquisition | Weaveworks Blog

    […] gets you the ‘batteries included but removable‘ platform that you may have heard of from the Docker team.  As a minimum, this requires plugin/extension APIs so that Docker supports the swapping out of […]

  16. Guide to build a Scalable Web App on Amazon Web Services | Learn Technology for free

    […] themselves has recently thrown their hat in the ring with Docker Machine, Docker Swarm, and Docker Compose. These also look like promising technologies, […]

