Docker Machine 0.3.0 Deep Dive

Blog post written by Nathan LeClaire, Open Source Engineer at Docker

We recently released Docker Machine 0.3.0. I am a maintainer on the project, and I want to share with you some of the goodness that we have been working hard on integrating in the months since the previous release.


For those of you who may not be familiar, Docker Machine is a tool that simplifies the creation and configuration of Docker-enabled machines, whether they are VMs running locally in VirtualBox or on a cloud provider such as Amazon Web Services. It is a sort of spiritual successor to (and eventual replacement for) the venerable boot2docker-cli. It makes me very happy to be able to quickly spin up Docker-capable instances that I can connect to quickly and securely using automatically configured TLS, and I find it very useful to be able to manage local VMs as well as VMs running on cloud providers.

A basic creation of a machine looks like this:

$ docker-machine create --driver virtualbox dev

This will create a Docker-enabled machine running locally in VirtualBox and point you in the right direction for connecting to it using the Docker CLI. You can start, stop, and ssh into the created VM using the Machine commands of the same name.
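Under the hood, docker-machine env simply prints shell export statements that point the Docker CLI at the machine's TLS-secured endpoint, and eval applies them to your current shell. Here is a small sketch of that mechanism; the host IP and cert path below are hypothetical placeholders, not output from a real machine:

```shell
# Simulate the export lines that `docker-machine env dev` would print.
# (The values are hypothetical placeholders for illustration.)
env_output='export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/dev"'

# This is what `eval "$(docker-machine env dev)"` does with that output:
eval "$env_output"
echo "$DOCKER_HOST"
```

After the eval, every subsequent docker command in that shell talks to the machine over TLS instead of any local daemon.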

We’ve added some pretty exciting new features which extend the capability of Machine in this upcoming release, and I want to share a few of them with you today. We will be going over:

  1. Generic driver for importing existing machines
  2. Configure Docker Engine options at create time
  3. Configure Docker Swarm strategy at create time
  4. Copy files from machine to machine using docker-machine scp
  5. Import your existing boot2docker VM into Docker Machine
  6. Support for RancherOS, Red Hat, and more as base operating systems

Several additional features that aren’t outlined here are also landing in this release; you can check them out in the full release notes.

Generic driver for importing existing machines

What if you have SSH access to a machine which is not originally created using Docker Machine, but which you would like to manage and use the same way you do with other Docker Machine created hosts? How can you import that into Docker Machine, and/or take advantage of the nice TLS/Swarm provisioning and setup that Machine can provide you out of the box at create time?

Previously, there was no way to do so. That’s all changing in this release with the recently merged generic driver.

If you have SSH access to the machine in question, you can import it into Docker Machine like so:

$ docker-machine create -d generic \
--generic-ssh-user ubuntu \
--generic-ssh-key ~/Downloads/<private-key> \
--generic-ip-address <ip> \
jungle
This will install Docker on the running instance (created independently of Docker Machine, perhaps through the AWS CLI), configure it to be talked to securely using TLS, and “import” it into Machine for the familiar workflow:

$ eval "$(docker-machine env jungle)"
$ docker info
$ docker-machine ssh jungle
Welcome to Ubuntu! etc.

It will also bootstrap the proper Swarm containers if the Swarm options are specified.

This really eases the pain of needing to know how to bootstrap Docker, TLS, Swarm, and so on yourself, and offloads the provisioning burden onto Machine. It’s not perfect, but I believe it’s a huge step in the right direction. This starts to open the doors to new and creative ways of using Machine alongside tools such as Ansible and Terraform, which I will definitely be going over in a later article.

If you are interested, I have a slightly more fleshed out example of using this driver with Swarm in my 0.3.0 Sneak Preview article.

Configure Docker Engine options at create time

As you may or may not know, the Docker daemon accepts a plethora of run-time flags that affect the way it behaves, runs containers, and so on. These include critical settings such as the storage driver.

Historically, setting and experimenting with these flags has been time-consuming and/or painful, especially for inexperienced users, because changing one of these settings requires that the user:

  1. Dig up the proper configuration file and edit it to have the setting you want, e.g. /var/lib/boot2docker/profile on boot2docker, /etc/default/docker in Ubuntu, etc.
  2. Set the configuration flags you want, or the corresponding esoteric environment variables for those flags
  3. Restart the daemon (which is done differently on various OSes – e.g. on boot2docker you run sudo /etc/init.d/docker restart, you do something like sudo service docker restart on Debian-based distros, etc.)
  4. Verify that it worked. If everything didn’t work out, potentially go dig up the Docker daemon logs to figure out what, debug, etc.
  5. Repeat steps 1-4 if you flubbed something.
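To make the manual process concrete, here is roughly what steps 1 through 3 look like on Ubuntu of that era; the flag values are hypothetical examples, and after editing the file you would run sudo service docker restart and verify with docker info:

```shell
# /etc/default/docker on Ubuntu (sourced by the init script) --
# a hypothetical example of the line you would edit by hand:
# put the desired daemon flags into DOCKER_OPTS, then restart the
# daemon with `sudo service docker restart`.
DOCKER_OPTS="--storage-driver=aufs --dns 8.8.8.8"
```

Multiply this by every OS's different config file location and restart mechanism, and the appeal of automating it becomes clear.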

Machine reduces these to a repeatable process which is mostly OS-agnostic at runtime (it’s very new, so there may be a few issues, but the basic idea is there). You can specify which flags the Docker daemon should be run with using the subclass of docker-machine create flags which begin with --engine-.

We have support for a few “top level” flags which we anticipate being commonly used, such as --engine-storage-driver to specify whether to use aufs, devicemapper, overlay, etc. or --engine-insecure-registry to specify an insecure registry to allow connection to (this is common for experimentation with private registries). Additionally, every single daemon flag available is supported through the “arbitrary flag” option --engine-opt, which allows you to set any daemon flag you want through key=value pairs. Therefore, you can mix and match to create Docker daemons with a wonderful variety of properties.

Due to the nature of Docker Machine, this means you can run several machines side-by-side which are all running Docker daemons with a variety of different properties. I anticipate this will be useful for testing and experimentation. Let’s take a look at a few examples which might indicate the usefulness of this feature.

One great example is the case of wanting to try out alternative storage drivers. As you may or may not know, part of how Docker works its funny voodoo is through the use of layered, copy-on-write filesystems, and a variety of options for various “drivers” are available. Of these, aufs is one of the most common, and it is the default on boot2docker today.

Interestingly, support for using the more lightweight OverlayFS was merged into Docker recently. OverlayFS has been in the mainline kernel since version 3.18 or so, and boot2docker releases after 1.6 ship kernels modern enough to use overlay. Doing so with Docker Machine 0.3.0 is simple:

$ docker-machine create -d virtualbox --engine-storage-driver overlay overlay
$ eval "$(docker-machine env overlay)"
$ docker info
Storage Driver: overlay
Backing Filesystem: extfs

(Note: If your local “cached” copy of boot2docker.iso is too old, you might have to run docker-machine upgrade overlay before the daemon will start properly with the configured option.)

As shown above, you can verify that the created daemon is using the overlay storage driver using docker info.
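Since overlay needs kernel 3.18 or newer, it can be handy to sanity-check a host's kernel before switching drivers. In practice you would feed in the output of docker-machine ssh overlay uname -r; the version string below is a hypothetical stand-in so the comparison logic is easy to follow:

```shell
# Check whether a kernel version string is >= 3.18 (needed for overlay).
# In practice: kernel=$(docker-machine ssh overlay uname -r)
kernel="4.0.3-boot2docker"   # hypothetical version string
major="${kernel%%.*}"        # "4"
rest="${kernel#*.}"          # "0.3-boot2docker"
minor="${rest%%[.-]*}"       # "0"
if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 18 ]; }; then
  overlay_ok=yes
else
  overlay_ok=no
fi
echo "overlay supported: $overlay_ok"
```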

And that’s just storage drivers! Like I mentioned, you can use the --engine-* machine flags for all kinds of things. For instance, maybe we like Google’s DNS servers and want to pass --dns=8.8.8.8 to the created Engine. Or, perhaps we want to add labels to our created daemons to assist with Swarm scheduling (so that you could selectively schedule disk-IO sensitive containers on machines with SSDs, for instance). For kicks, we could also configure the daemon to always send container logs to syslog using the --log-driver flag.

Such a create would look like this:

$ docker-machine create -d virtualbox \
--engine-label disktype=ssd \
--engine-label distro=b2d \
--engine-opt dns=8.8.8.8 \
--engine-opt log-driver=syslog \
hotrod
When that’s done creating, let’s docker-machine ssh into the created server and take a look.

Probably nothing will be in your syslog to begin with, but we can fix that right up by running a container. Because the daemon has been configured with the syslog log driver, container output will be routed to the system log automatically instead of being managed on disk as JSON by Docker in the traditional way.

$ docker-machine ssh hotrod
Welcome to Ubuntu ...
root@hotrod:~# cat /var/log/syslog
root@hotrod:~# docker run -d busybox nslookup
root@hotrod:~# cat /var/log/syslog
May 28 20:11:17 hotrod kernel: [ 1662.659979] device vethc485759 entered promiscuous mode
May 28 20:11:17 hotrod kernel: [ 1662.662329] IPv6: ADDRCONF(NETDEV_UP): vethc485759: link is not ready
May 28 20:11:17 hotrod kernel: [ 1662.689506] IPv6: ADDRCONF(NETDEV_CHANGE): vethc485759: link becomes ready
May 28 20:11:17 hotrod kernel: [ 1662.689571] docker0: port 1(vethc485759) entered forwarding state
May 28 20:11:17 hotrod kernel: [ 1662.689582] docker0: port 1(vethc485759) entered forwarding state
May 28 20:11:17 hotrod docker/7901f1e34f24[3540]: Server:
May 28 20:11:17 hotrod docker/7901f1e34f24[3540]: Address 1:
May 28 20:11:17 hotrod docker/7901f1e34f24[3540]:
May 28 20:11:17 hotrod docker/7901f1e34f24[3540]: Name:
May 28 20:11:17 hotrod docker/7901f1e34f24[3540]: Address 1:
May 28 20:11:17 hotrod docker/7901f1e34f24[3540]: Address 2:
May 28 20:11:17 hotrod docker/7901f1e34f24[3540]: Address 3:
May 28 20:11:17 hotrod docker/7901f1e34f24[3540]: Address 4:
May 28 20:11:17 hotrod docker/7901f1e34f24[3540]: Address 5:
May 28 20:11:17 hotrod docker/7901f1e34f24[3540]: Address 6:
May 28 20:11:17 hotrod docker/7901f1e34f24[3540]: Address 7:
May 28 20:11:17 hotrod docker/7901f1e34f24[3540]: Address 8:
May 28 20:11:17 hotrod docker/7901f1e34f24[3540]: Address 9:
May 28 20:11:17 hotrod docker/7901f1e34f24[3540]: Address 10:
May 28 20:11:17 hotrod docker/7901f1e34f24[3540]: Address 11:
May 28 20:11:17 hotrod kernel: [ 1663.075095] docker0: port 1(vethc485759) entered disabled state
May 28 20:11:17 hotrod kernel: [ 1663.075608] device vethc485759 left promiscuous mode
May 28 20:11:17 hotrod kernel: [ 1663.075622] docker0: port 1(vethc485759) entered disabled state

You can see the container logs in the lines with docker/7901f1e34f24 (your ID will be different, but you get the idea).
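Because container output lands in the syslog tagged with a docker/<short-id> prefix, it is easy to separate from kernel chatter with a quick grep. Here is a small sketch run against a captured excerpt rather than a live VM:

```shell
# Write a short excerpt of the syslog output shown above to a temp file,
# then filter out just the container log lines (tagged docker/<short-id>).
cat > /tmp/syslog_excerpt <<'EOF'
May 28 20:11:17 hotrod kernel: [ 1662.659979] device vethc485759 entered promiscuous mode
May 28 20:11:17 hotrod docker/7901f1e34f24[3540]: Server:
May 28 20:11:17 hotrod docker/7901f1e34f24[3540]: Name:
EOF
grep 'docker/' /tmp/syslog_excerpt
```

On the VM itself, the equivalent would be grep 'docker/' /var/log/syslog.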

Note the daemon labels with docker info:

root@hotrod:~# docker info

Configure Docker Swarm strategy at create time

In addition to installing and configuring the Docker Engine, Docker Machine is a convenient gateway for bootstrapping Swarms.

We’ve had support for doing this using Docker Hub discovery for a while, but in this release we have introduced a few modifications that allow you to customize the parameters of the Swarm containers you run to create the swarms.

For instance, the default strategy for scheduling containers on Swarms created through Machine is spread, but for many purposes the binpack strategy might be more effective (binpack will pack containers as tightly as possible onto hosts, whereas spread will spread them out as widely as possible). You can set this Swarm master option using the --swarm-strategy flag. There is also an option for very fine-grained specification of master options, similar to the one mentioned for Docker Engine, called --swarm-opt. You could use this to set, for instance, the heartbeat of the swarm.

Here is an example create for a Swarm Master using these options:

$ docker-machine create -d virtualbox \
--swarm \
--swarm-master \
--swarm-discovery token://<token> \
--swarm-strategy binpack \
--swarm-opt heartbeat=45 \
<machine-name>
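Conceptually, both the top-level --swarm-strategy flag and the arbitrary --swarm-opt key=value pairs end up as flags on the swarm manage command that Machine runs in the master container. The sketch below illustrates that translation; it is my own rough model of the behavior, not Machine's actual code:

```shell
# Sketch (assumed behavior): each key=value pair becomes a --key value
# flag on the `swarm manage` invocation inside the master container.
opts="strategy=binpack heartbeat=45"
cmd="swarm manage"
for kv in $opts; do
  cmd="$cmd --${kv%%=*} ${kv#*=}"   # split on the first '=' into key/value
done
echo "$cmd"
```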

Copy files from machine to machine using docker-machine scp

This one seems simple at first glance but has already proven to be really useful for me.

We have introduced a docker-machine scp command that allows users to move files from the local host to created machines, from a machine back to the local host, or from machine to machine.

Imagine running some kind of resource-intensive operation on a remote machine, and then simply copying the resulting artifact back to your local machine.

Usage is based on the machine names, and is similar to the default scp syntax. E.g.:

$ docker-machine scp -r droplet:/tmp/build_artifacts .
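As with plain scp, each side of the transfer is either a local path or a machine:path pair, where the part before the colon is the Machine name rather than a hostname. A tiny sketch of how such a target splits apart (the machine name and path are just the example from above):

```shell
# Split a docker-machine scp target of the form "name:/path" into the
# machine name and the remote path, scp-style. A plain path with no
# colon-prefixed name refers to the local host.
target="droplet:/tmp/build_artifacts"
machine="${target%%:*}"
remote_path="${target#*:}"
echo "machine=$machine path=$remote_path"
```

So a machine-to-machine copy looks like docker-machine scp dev:/tmp/foo prod:/tmp/foo, with Machine resolving each name to the right host and credentials.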

Import your existing boot2docker VM into Docker Machine

A lot of users have boot2docker-cli created VMs which may have large collections of images and containers, and they may not want to blow all of this away and start over with a new virtual machine managed by Docker Machine yet. The good news for them is that we are introducing an option to import existing boot2docker VMs with a new flag, --virtualbox-import-boot2docker-vm. To do so, simply specify the boot2docker VM you want to import by name (the default name is boot2docker-vm):

$ docker-machine create -d virtualbox --virtualbox-import-boot2docker-vm boot2docker-vm b2d

You should see in the newly created VM that all of the containers and images you had in your boot2docker VM have been preserved.

Support for RancherOS, Red Hat, and more as base operating systems

With Machine 0.3.0 we have expanded our provisioning system. Previously the only base operating systems we supported officially were Ubuntu and boot2docker. Those are still the defaults, but with the release we have introduced a new way of specifying how to install and configure Docker on a variety of distributions.

One example of a newly supported base operating system is RancherOS, a new minimalist operating system where everything runs inside of Docker containers, even system services like cron and ntpd. I’m pleased to announce that with this release, thanks to the relentless contribution efforts of Darren Shepherd and others, it is now very straightforward to try out and use RancherOS with Docker Machine.

Simply run a command such as the following (Rancher has kindly released a Docker Machine-friendly ISO; when Docker 1.7 is released, I highly recommend seeing if new ones are available for the latest and greatest), and you will be treated to a local RancherOS VM.

$ docker-machine create -d virtualbox \
--virtualbox-boot2docker-url <rancheros-iso-url> \
rancheros

Note that --virtualbox-boot2docker-url is a bit of a misnomer here, so we will probably change the flag name to be more general in subsequent releases. When that’s created, let’s take a look at what’s going on under the hood a bit.

First, SSH into the created VM:

$ docker-machine ssh rancheros

Take a look at the Docker daemon processes which are running.

[docker@rancheros ~]$ ps -e | grep 'docker -d'
1 root     {system-docker} docker -d --log-driver syslog -s overlay -b docker-sys --fixed-cidr --restart=false -g /var/lib/system-docker -G root -H unix:///var/run/system-docker.sock
1232 root     docker -d -s overlay -G docker -H unix:///var/run/docker.sock -H tcp:// -H unix:///var/run/docker.sock --storage-driver overlay --tlsverify --tlscacert /var/lib/rancher/conf/ca.pem --tlscert /var/lib/rancher/conf/server.pem --tlskey /var/lib/rancher/conf/server-key.pem --label provider=virtualbox
1386 docker   grep docker -d

You’ll notice that one is listening on /var/run/docker.sock and TCP port 2376, which is what you would usually expect to see on a Machine-created VM. This is called the “user docker” and handles the normal interactions that users have with Docker, such as running a web app or utility containers. But you can also see that there is another Docker daemon process listening on /var/run/system-docker.sock. What’s up with that?

Well, that is the RancherOS “system docker” which is used for managing system processes such as cron. You can take a peek with Docker’s -H (host) flag.

[docker@rancheros ~]$ sudo docker -H unix:///var/run/system-docker.sock ps
CONTAINER ID   IMAGE            COMMAND       CREATED          STATUS                          PORTS   NAMES
ac31a2242ff4   ntp:latest       "/usr/sbin/   10 minutes ago   Restarting (0) 3 minutes ago            ntp
4e7364981fc5   console:latest   "/usr/sbin/   10 minutes ago   Up 10 minutes                           console
1f5ba4d221fd   docker:latest    "/usr/sbin/   10 minutes ago   Up 10 minutes                           docker
b6f959e73c69   acpid:latest     "/usr/sbin/   10 minutes ago   Up 10 minutes                           acpid
f3b1fbcb0b50   udev:latest      "/usr/sbin/   10 minutes ago   Up 10 minutes                           udev
aca8971302e8   syslog:latest    "/usr/sbin/   10 minutes ago   Up 10 minutes                           syslog

NICE! It’s Docker all the way down with RancherOS.

That’s a little tour of RancherOS, but we didn’t stop there with this release. We also included support for recent versions of Red Hat, CentOS, Fedora, and Debian. For instance, to create a Red Hat Enterprise Linux VM on Amazon Web Services EC2, you could use a sequence of commands such as the following (Machine will read flag values from the corresponding environment variables where available):

$ export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxx
$ export AWS_ACCESS_KEY_ID=yyyyyyyyyy
$ export AWS_VPC_ID=vpc-12345678
$ docker-machine create -d amazonec2 \
--amazonec2-ami ami-12663b7a \
--amazonec2-ssh-user ec2-user \
rhel0
$ docker-machine ssh rhel0 -- uname -a
Linux rhel0 3.10.0-229.el7.x86_64 #1 SMP Thu Jan 29 18:37:38 EST 2015 x86_64 x86_64 x86_64 GNU/Linux

Have fun!

I hope that this gives you a feel for some of the new features we are introducing, which we are very excited about. In addition to these new features, countless bug fixes, stability improvements, and so on have been introduced. We are lucky enough to have so many new features in the release that I could not even cover all of them in one post, so I recommend checking out some of the ones that I missed, such as the newly merged Exoscale driver, or the fact that the VMware Fusion driver can finally use the mainline boot2docker ISO instead of a custom one (thereby alleviating issues that some users had with upgrades).

Special thanks to all of the developers, testers, and dreamers who have contributed to the project so far. We are super appreciative of your support.

Try out Machine today!




