Scott Coulton

#SwarmWeek: HealthDirect Uses Docker Swarm for Blue Green Deployments

 

Healthdirect Australia provides all Australians with access to health information and advice, no matter where they live or what time of the day or night it is. They can talk to a health professional, find trusted advice online about the appropriate care for their health issue, and find the closest local services that are open when they need them.

Our Docker journey originally started with the open source edition and early on we invested in building out our own tooling to support infrastructure and application provisioning. Recently we transitioned to Docker Datacenter consisting of Engine, Trusted Registry and Universal Control Plane (which comes with Swarm). Having an integrated end-to-end platform with management, plugins and ecosystem integrations available right out of the box was the deciding factor for us. Docker Datacenter provides us with a solid foundation for our application environment so our team can spend more time directly building or deploying applications that help our customers.

In this blog post we will explain how we use Docker Swarm to do blue-green deployments.

At Healthdirect we use Puppet to deploy our Swarm cluster, so the cluster is in a consistently defined state each time we deploy. In this example we will be pulling our Docker images from our local Trusted Registry. Our environment is based on AWS, but the same approach could easily be ported to any other cloud provider or to OpenStack, for example. The reason we use an internal registry is to take advantage of the increased speed we get at deployment time by keeping our images close to the endpoints they will be deployed to.

So first of all, let’s look at the plumbing: we are going to assume that we have a deployment tool, driven from a build server like Jenkins, to provision our instances. As part of the build process the cloud-init user data will bootstrap Puppet onto our nodes.

The environment looks like the following:

[Figure: overview of the Swarm cluster environment on AWS]

To build the cluster we use the following Puppet modules: Docker, Docker Swarm and Consul (as the service discovery backend for Swarm).

To build our Consul cluster we use the following code:

[Code screenshot in the original post: Puppet manifest for the Consul cluster]
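Since the manifest itself only appears as a screenshot in the original post, below is a minimal sketch of what those two blocks can look like. It assumes the commonly used KyleAnderson/consul Puppet module; the datacenter name, addresses and three-server cluster size are illustrative placeholders rather than our real values.

```puppet
# Minimal sketch, not the exact manifest from the post.

# Block 1: bootstrap the first Consul server.
class { 'consul':
  config_hash => {
    'bootstrap_expect' => 3,             # wait for three servers before electing a leader
    'data_dir'         => '/opt/consul',
    'datacenter'       => 'aws-syd',     # placeholder datacenter name
    'server'           => true,
    'bind_addr'        => $::ipaddress,
    'client_addr'      => '0.0.0.0',
  },
}

# Block 2: on every other node, join the existing cluster instead.
# class { 'consul':
#   config_hash => {
#     'data_dir'   => '/opt/consul',
#     'datacenter' => 'aws-syd',
#     'server'     => true,
#     'bind_addr'  => $::ipaddress,
#     'retry_join' => ['10.0.1.10'],     # placeholder address of the bootstrap node
#   },
# }

# Simple health check that the Docker daemon is answering.
consul::check { 'docker-daemon':
  interval => '10s',
  script   => '/usr/bin/docker info > /dev/null 2>&1',
}
```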
 

This code sets up our Consul cluster and a simple health check to make sure our Docker daemon is healthy. You will notice that there are two blocks of code to set up Consul: the first bootstraps the cluster, the second joins additional nodes to it.

Next we will set up Swarm along with a native Docker network called swarm-private, created with the overlay driver. Again we have two blocks of code to set up our Swarm cluster: the first configures the Swarm manager, the second gives us the ability to join more nodes to the cluster.

[Code screenshot in the original post: Puppet manifest for the Swarm cluster and the swarm-private overlay network]
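The original manifest here is also a screenshot, and it uses the puppet-swarm module mentioned at the end of this post. The sketch below shows the same two steps expressed with the Docker module's docker_network and docker::run resources instead; the Consul address, ports and IP facts are placeholders.

```puppet
# Minimal sketch, not the exact manifest from the post. For the overlay driver
# to work across hosts, each Docker daemon needs a cluster store, e.g.
# --cluster-store=consul://10.0.1.10:8500 --cluster-advertise=eth0:2375.

# The private overlay network the application containers will attach to.
docker_network { 'swarm-private':
  ensure => present,
  driver => 'overlay',
}

# Block 1: the Swarm manager, using Consul for node discovery.
docker::run { 'swarm-manager':
  image   => 'swarm',
  command => "manage -H :4000 --advertise ${::ipaddress}:4000 consul://10.0.1.10:8500",
  ports   => ['4000:4000'],
}

# Block 2: on every other node, join the cluster instead of managing it.
# docker::run { 'swarm-join':
#   image   => 'swarm',
#   command => "join --advertise=${::ipaddress}:2375 consul://10.0.1.10:8500",
# }
```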
 

The final code snippet actually deploys our application to the Swarm cluster. In this instance we have nginx, application and db images in our Docker Trusted Registry. We use Puppet to pull the images and deploy them to the cluster.

[Code screenshot in the original post: Puppet manifest that deploys the application stack]
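As the screenshot is not reproduced here, the following is a rough sketch of the pattern it shows: pull the three images from the local Trusted Registry and run them on the swarm-private network, publishing ports only on nginx. The registry address, repository names and tag are made up for the example.

```puppet
# Minimal sketch, not the exact manifest from the post.
$registry = 'dtr.example.internal'          # placeholder for our local DTR address

# Pull the three images from the local Trusted Registry.
docker::image { ["${registry}/health/nginx",
                 "${registry}/health/app",
                 "${registry}/health/db"]:
  image_tag => '1.0.0',                     # placeholder release tag
}

# The database is only reachable on the overlay network; no ports are published.
docker::run { 'db':
  image => "${registry}/health/db:1.0.0",
  net   => 'swarm-private',
}

docker::run { 'app':
  image   => "${registry}/health/app:1.0.0",
  net     => 'swarm-private',
  require => Docker::Run['db'],
}

# nginx is the only container with ports published outside the overlay network.
docker::run { 'nginx':
  image   => "${registry}/health/nginx:1.0.0",
  net     => 'swarm-private',
  ports   => ['80:80', '443:443'],
  require => Docker::Run['app'],
}
```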
 

 

As you can see, we are taking advantage of the native Docker networking stack to lock down access to our database, which is only reachable on the swarm-private overlay network. In a production environment you would also want to add scheduling filters to make sure your containers are spread across nodes correctly, as in the sketch below.
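Standalone Swarm (the pre-1.12 Swarm used here) takes scheduling filters as environment variables on the container, on top of its default spread strategy. A small illustrative example, with the node label and container names invented for the purpose:

```puppet
# Illustrative only: 'constraint' and 'affinity' are standard standalone-Swarm
# filters; the storage label and container names are assumptions.

# Variation of the db declaration above, pinned to nodes whose Docker daemon
# was started with --label storage=ssd.
docker::run { 'db':
  image => 'dtr.example.internal/health/db:1.0.0',
  net   => 'swarm-private',
  env   => ['constraint:storage==ssd'],
}

# Keep a second app instance away from the first so they land on different nodes.
docker::run { 'app-02':
  image => 'dtr.example.internal/health/app:1.0.0',
  net   => 'swarm-private',
  env   => ['affinity:container!=app-01'],
}
```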

[Figure: the deployed application stack running on the Swarm cluster]

Pictured in the diagram above we now have a working Swarm cluster that is defined and repeatable. When a new build of the application is released, we can build a fresh Swarm cluster alongside the existing one. This is perfect for developers, QA teams and production: you can run blue and green deployments side by side and simply repoint your load balancer or DNS at the new stack.
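One way to keep that repeatable (a sketch under assumptions, not our exact production code) is to parameterise the stack by release name and image tag, so the blue and green stacks are just two instantiations of the same Puppet profile; the class name, parameters and registry address are invented for the example.

```puppet
# Hypothetical profile: 'blue' and 'green' are two copies of the same stack,
# differing only in the release name and the image tag they run.
class profile::health_stack (
  $release   = 'green',                     # 'blue' or 'green'
  $image_tag = '1.1.0',                     # the new build being rolled out
) {
  docker::run { "app-${release}":
    image => "dtr.example.internal/health/app:${image_tag}",
    net   => 'swarm-private',
  }

  docker::run { "nginx-${release}":
    image => "dtr.example.internal/health/nginx:${image_tag}",
    net   => 'swarm-private',
    ports => ['80:80'],
  }
}
```

Once the green stack passes its checks, the cut-over is just repointing the load balancer or DNS record at it, and keeping the blue stack around gives you a fast rollback path.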


 

Thanks Scott for sharing your Docker story! Learn more about Healthdirect and read their mention in the Docker Datacenter launch announcement.

Make sure to check out the Puppet module for Swarm contributed by Scott himself.

And don’t forget to participate in our DockerCon ticket raffle! Share a picture or description of your Swarm with us on Twitter and tag @docker and #SwarmWeek for a chance to win a free ticket to DockerCon 2016 in Seattle, WA on June 19-21.
 




2 Responses to “#SwarmWeek: HealthDirect Uses Docker Swarm for Blue Green Deployments”

  1. Luke Chen

    Amazing sharing, thanks! A couple of questions:
    1. In the graphs there are two swarm-a-02 boxes; is that intentional or just a typo?
    2. What is the magic that switches green to blue, e.g. updating the ‘image’ value to point at a newer version of the image?
    3. How easy is it to back out the new deployment if there’s a need to?

  2. sjey

    Amazing blog post, Scott. How do you guys manage Docker scheduling when a container crashes or the container host dies?

    Are you guys using DNS to flip over between blue and green environments? If so, aren't there any DNS resolution errors during the flip-over?

    Thanks in advance

