Mike Coleman

Docker Compose and Kubernetes with Docker for Desktop

If you’re running an edge version of Docker on your desktop (Docker for Mac or Docker for Windows), you can now stand up a single-node Kubernetes cluster with the click of a button. While I’m not a developer, I think this is great news for the millions of developers who have already been using Docker on their MacBook or Windows laptop, because they now have a fully compliant Kubernetes cluster at their fingertips without installing any other tools.
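
If you want to sanity-check that cluster before going any further, a couple of kubectl commands will do it. This is just a quick sketch, assuming kubectl is on your PATH; the docker-for-desktop context name is what current edge builds create and may differ in your release:

# Point kubectl at the cluster managed by Docker for Mac / Docker for Windows
# (the context name may vary between releases).
$ kubectl config use-context docker-for-desktop

# A single node should report a Ready status.
$ kubectl get nodes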

Developers using Docker to build containerized applications often write Docker Compose files to deploy them. With the integration of Kubernetes into the Docker product line, some developers may want to leverage their existing Compose files but deploy these applications to Kubernetes. There is, of course, Kompose, but that’s a translation layer, which leaves you with two separate artifacts to manage. Is there a way to keep a native Docker-based workflow?
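
For comparison, the Kompose route looks roughly like this (a sketch, assuming kompose is installed and your Compose file is named docker-compose.yml); it generates Kubernetes manifests that you then maintain alongside the original Compose file:

# Translate the Compose file into Kubernetes YAML
# (writes *-deployment.yaml and *-service.yaml files next to it).
$ kompose convert -f docker-compose.yml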

With Docker on the desktop (as well as Docker Enterprise Edition) you can use Docker Compose to deploy an application directly onto a Kubernetes cluster.

Here’s how it works:

Let’s assume I have a simple Docker Compose file like the one below that describes a three-tier app: a web front end, a worker process (words), and a database.

Notice that our web front end is set to route traffic from port 80 on the host to port 80 on the service (and subsequently the underlying containers). Also, our words service is going to launch with 5 replicas.


version: "3.3"

services:

  web:
    build: web
    image: dockerdemos/lab-web
    volumes:
     - "./web/static:/static"
    ports:
     - "80:80"

  words:
    build: words
    image: dockerdemos/lab-words
    deploy:
      replicas: 5
      endpoint_mode: dnsrr
      resources:
        limits:
          memory: 16M
        reservations:
          memory: 16M

  db:
    build: db
    image: dockerdemos/lab-db
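
Before deploying, it can be worth validating the file. If you happen to have the classic docker-compose CLI installed, something like this will flag indentation or syntax mistakes (it may also warn that the deploy section is ignored by docker-compose itself, which is expected here):

$ docker-compose -f words.yaml config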

I’m using Docker for Mac, and Kubernetes is set as my default orchestrator. To deploy this application I simply use docker stack deploy, providing the name of our Compose file (words.yaml) and the name of the stack (words). What’s really cool is that this is exactly the same command you would use with Docker Swarm:

$ docker stack deploy --compose-file words.yaml words
Stack words was created
Waiting for the stack to be stable and running...
 - Service db has one container running
 - Service words has one container running
 - Service web has one container running
Stack words is stable and running
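
Because this is still a Docker stack, the familiar stack commands work as well (a quick sketch; output omitted):

# List the stacks the current orchestrator knows about.
$ docker stack ls

# Show the services and tasks that make up the words stack.
$ docker stack services words
$ docker stack ps words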


Under the covers, deploying the Compose file has created a set of deployments, pods, and services, which can be viewed using kubectl.

$ kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
db        1         1         1            1           2m
web       1         1         1            1           2m
words     5         5         5            5           2m

$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
db-5489494997-2krr2      1/1       Running   0          2m
web-dd5755876-dhnkh      1/1       Running   0          2m
words-86645d96b7-8whpw   1/1       Running   0          2m
words-86645d96b7-dqwxp   1/1       Running   0          2m
words-86645d96b7-nxgbb   1/1       Running   0          2m
words-86645d96b7-p5qxh   1/1       Running   0          2m
words-86645d96b7-vs8x5   1/1       Running   0          2m

$ kubectl get services
NAME            TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
db              ClusterIP      None         <none>        55555/TCP      2m
web             ClusterIP      None         <none>        55555/TCP      2m
web-published   LoadBalancer                <pending>     80:32315/TCP   2m
words           ClusterIP      None         <none>        55555/TCP      2m

If you look at the list of services you might notice something that seems a bit odd at first glance. There are services for both web and web-published. The web service allows for intra-application communication, whereas the web-published service (which is a load balancer backed by vpnkit in Docker for Mac) exposes our web front end out to the rest of the world.
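
If you want to see that difference for yourself, kubectl can describe both objects (the headless web service only provides DNS-based discovery inside the cluster, while web-published is the one carrying the published port):

# Internal, headless service used for discovery between the app's components.
$ kubectl describe service web

# LoadBalancer service that publishes port 80 outside the cluster.
$ kubectl describe service web-published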

So if we visit http://localhost:80 we can see the application running. You can actually see the whole process in this video that Elton recorded.
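
(If you prefer the terminal, a quick curl against the published port does the same check, assuming the lab-web front end answers plain HTTP on port 80.)

$ curl http://localhost:80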

Now, if you wanted to remove the application, you might think you would remove the deployments using kubectl (I know I did). But what you actually do is use docker stack rm, which removes all the components that were created when the stack was brought up.

$ docker stack rm words
Removing stack: words

$ kubectl get deployment
No resources found

And, to me, the cool thing is that this same process can be used with Docker Enterprise Edition (EE): I simply take my Compose file and deploy it directly through the Docker EE UI. But that’s another post.

Want to try it for yourself? Grab Docker for Mac or Docker for Windows, and be sure to check out the documentation (Mac and Windows) for more info.



