Orchestrating Docker containers in production using Fig

In the last blog post about Fig we showed how you could define and run a multi-container app locally.

We’re now going to show you how you can deploy this app to production. Here’s a screencast of the whole process:

Let’s continue from where we left off in the last blog post. First, we want to push the code we wrote up to GitHub. You’ll need to initialize a new Git repository and commit your code to it.

$ git init
$ git add .
$ git commit -m "Initial commit"

Then create a new repository on GitHub and follow the instructions for setting up a remote on your local Git repository. For example, if your repository were called bfirsh/figdemo, you’d run these commands:

$ git remote add origin git@github.com:bfirsh/figdemo.git
$ git push -u origin master

Next, you’ll need to get yourself a server to host your app. Any cloud provider will work, so long as it is running Ubuntu and available on a public IP address.

Log on to your server using SSH and follow the instructions for installing Docker and Fig on Ubuntu.

$ ssh root@[your server’s IP address]
# curl -sSL https://get.docker.io/ubuntu/ | sudo sh
# curl -L https://github.com/docker/fig/releases/download/0.5.2/linux > /usr/local/bin/fig
# chmod +x /usr/local/bin/fig

Now you’ll want to clone your GitHub repository to your server. You can find the clone URL on the right hand side of your repository page. For example:

# git clone https://github.com/bfirsh/figdemo.git
# cd figdemo

With your code now on the server, run fig up in daemon mode to start your app:

# fig up -d

That will pull the redis image from Docker Hub, build the image for your web service that is defined in Dockerfile, then start up the redis and web containers and link them together. If you go to http://[your server’s IP address]:5000 in your browser, you will see that your app is now running on your server.
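For context, this assumes a fig.yml roughly like the one from the previous post (the exact service names, ports, and volume path here are assumptions, not a copy of that file):

```yaml
web:
  build: .          # build the web image from the Dockerfile in this directory
  ports:
    - "5000:5000"   # expose the app on port 5000
  volumes:
    - .:/code       # mount the source tree so edits show up without rebuilding
  links:
    - redis         # make the redis container reachable from web
redis:
  image: redis      # pulled straight from Docker Hub, no build step
```

The build statement is why fig up builds an image for the web service, while redis is simply pulled as a ready-made image.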

Deploying new code

Let’s deploy new code to our server. Make a change to the message in app.py on your local machine, then check that the change is correct by running fig up and opening your local development environment from the previous blog post in your browser.

If the change looks good, commit it to Git:

$ git commit -m "Update message" app.py
$ git push

Then, on your server, pull the changes down:

# git pull

You then need to build a new Docker image with these changes in it and recreate the containers with fig up:

# fig build
# fig up -d

You should now see the changes reflected at http://[your server’s IP address]:5000! Note that the app has remembered how many times you have viewed the page: the data stored in Redis is persisted in a Docker volume, so it survives the containers being recreated.

Next steps

That’s the basics of deploying an app to production using Docker. If you want to do more complex setups, you can create a separate fig.yml for your production environment, e.g. fig-production.yml, and tell Fig to use this file when running fig up:

$ fig -f fig-production.yml up -d

If you’re using a separate file for production, this will let you do things like:

  • Expose your web app on port 80 by replacing 5000:5000 with 80:5000 in your ports definition.
  • Remove the volumes statement that injects code into your container. It exists so code changes show up immediately in your development environment, but it is unnecessary in production, where the code is baked into the image.
  • Use Docker Hub to ship code to your server as an image. If you set up an automated build on Docker Hub to build an image from your code, you can replace the build statement in your web service with an image statement that points to that repository.

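Putting those ideas together, a fig-production.yml might look something like this sketch (the image name assumes a hypothetical automated build of bfirsh/figdemo on Docker Hub):

```yaml
web:
  image: bfirsh/figdemo  # hypothetical automated build; replaces build: .
  ports:
    - "80:5000"          # serve on port 80 instead of the development port
  links:
    - redis
redis:
  image: redis
```

Compared to the development file, the volumes statement is gone and build has been replaced by image, so deploying becomes a matter of pulling and restarting rather than rebuilding on the server.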
Those are just some ideas – we’d love to hear of other things you have come up with in the comments.

3 Responses to “Orchestrating Docker containers in production using Fig”

  1. matt

Hey, I’ve been needing to see that screencast for a while; it really helped put things together.

    I am working on this project with the idea of learning / teaching how micro-services systems can be developed and deployed. Would be cool to understand further how fig could help with that.

    http://github.com/stackmates

  2. Dean

    This is a neat idea! It definitely helps deal with the complexity of mile-long `docker run` statements.

I am curious about what exactly happens the second time `fig up` is run. Does it bring down the first container and then bring up the new one? It seems like this would forcibly kill any existing connections and leave a period of downtime during deployment.

    To get proper deploys (no downtime, no killed connections), is the assumption that there is a load balancer and there are multiple containers/hosts?

    • hems

      That is exactly the question that came to my mind!

      I can imagine having something like haproxy / aqueduct, then spinning up the new Docker container (having both running in parallel) and then swapping versions in the aqueduct GUI (which will update the haproxy configuration).

      Still, I did not try to see what happens with established connections. Maybe by diverting the traffic to the newly registered container you would end up with no connections on the previous one, and then you can kill it.

      For me that is still the greyest area. Maybe dropping a connection and having client code that is ready for this “swap” situation would be the solution, but I have no perfect answer.

