Category: Latest Posts

Disclosure of Authorization-Bypass on the Docker Hub

Following the postmortem of a previous vulnerability announced on June 30th, the Docker team conducted a thorough audit of the platform code base and hired an outside consultancy to investigate the security of the Docker Registry and the Docker Hub. On the morning of 8/22 (all times PST), the security firm contacted our Security Team:

8/22 – Morning: Our Security Team was contacted regarding vulnerabilities that could be exploited to allow an attacker to bypass authorization constraints and modify container image tags stored on the Docker Hub Registry. Even though the reporting firm was unable to immediately provide a working proof of concept, our Security Team began to investigate.

8/22 – Afternoon: Our team confirms the vulnerabilities and begins preparing a fix.

8/22 – Evening: We roll out a hotfix release to production. Additional penetration tests are performed to assure resolution of these new vulnerabilities. Later, it is discovered this release introduced a regression preventing some authorized users from pulling their own private images.

8/23 – Morning: A new hotfix is deployed to production, addressing the regression and all known security issues. Our Security Team runs another set of penetration tests against the platform and confirms all issues have been resolved.

Follow-up & Postmortem:

We have begun an internal postmortem process aimed at improving our development and security processes. As immediate steps, we have established the following:

  • We have performed an audit of the repositories stored on the Docker Hub Registry to verify whether any known exploits have been used in the wild. We have not found any indication of exploitation, or of repositories being modified via authorization bypass.
  • We have established an agreement with the outside security firm to audit every major release of the platform.
  • We will implement an automated suite of functional security tests, in addition to the existing unit and integration tests.

Finally:

Our contributors have been hard at work making Docker better and better with each release, including important security improvements such as the addition of granular Linux capabilities management with the release of Docker 1.2. Likewise, since establishing our security and responsible disclosure policy, we have seen substantial interest from researchers in contributing to the improvement of Docker.

If you discover any issues in Docker or the Hub, we encourage you to do the same by contacting security@docker.com.

Docker & VMware: 1 + 1 = 3

Today at VMworld we’re excited to announce a broad partnership with VMware. The objective is to provide enterprise IT customers with joint solutions that combine the application lifecycle speed and environment interoperability of the Docker platform with the security, reliability, and management of VMware infrastructure. To deliver this “better together” solution to customers, Docker and VMware are collaborating on a wide range of product, sales, and marketing initiatives.

Why join forces now? In its first 12 months Docker usage rapidly spread among startups and early adopters who valued the platform’s ability to separate the concerns of application development and management from those of infrastructure provisioning, configuration, and operations. Docker gave these early users a new, faster way to build distributed apps as well as a “write once, run anywhere” choice of deployment from laptops to bare metal to VMs to private and public clouds. These benefits have been widely welcomed and embraced, as reflected in some of our adoption metrics:

  • 13 million downloads of the Docker Engine
  • 30,000 “Dockerized” applications on Docker Hub
  • 14,000 stars on GitHub
  • 570 contributors

In its second year, Docker usage continues to spread and is now experiencing mass adoption by enterprise IT organizations. These organizations span a wide range of industry verticals including finance, life sciences, media, and government. By leveraging the Docker platform, ecosystem, and the more than 30,000 “Dockerized” apps on Docker Hub, these enterprise IT organizations are radically reducing the time from develop to deploy – in most cases, from weeks to minutes. In addition to pipeline acceleration, they get the flexibility and choice to run these apps unchanged across developer laptops, data center VMs, bare metal servers, and private and public clouds.

Not surprisingly, Docker’s enterprise IT customers have been making significant investments in VMware infrastructure for years across their application lifecycle environments, from developer laptops to QA servers to production data centers. They’ve come to trust and rely on its reliability, security, and quality. Through this partnership, now they can realize the agility and choice benefits of Docker on top of the VMware infrastructure they know and trust.

Better Together

The partnership spans a wide range of product, sales, and marketing initiatives, and today we’re excited to share early details with the Docker community.

  • Docker-on-VMware.  The companies are working together to ensure that the Docker Engine runs as a first-class citizen on developer workstations using VMware Fusion, data center servers with VMware vSphere, and vCloud Air, VMware’s public cloud.
  • Contributing to Community’s Core Technologies.  To support the joint product initiatives, VMware and Docker will collaborate on the Docker community’s core technology standards, in particular libcontainer and libswarm, the community’s orchestration interoperability technology.
  • Interoperable Management Tooling.  So as to provide developers and sysadmins with consistent deployment and management experiences, the companies are collaborating on interoperability between Docker Hub and VMware’s management tools, including VMware vCloud Air, VMware vCenter Server, and VMware vCloud Automation Center.

In addition to the above product-related initiatives, you’ll start to see VMware introducing Docker to its users through its marketing and sales channels. In parallel, Docker will begin introducing VMware to the Docker community. There’s obviously a lot more to come from the Docker and VMware relationship, so today’s announcement is just the first step of what will be a fantastic journey. Please join us in welcoming VMware to the Docker community and working together with them to spread the goodness of Docker to even more users and platforms.

Dockerize early and often,

- The Docker Team

Docker & VMware: VMworld Sessions

There are several VMworld sessions discussing Docker + VMware. We look forward to seeing you!

Learn More

Your Docker agenda for VMworld 2014

Next week starts the gigantic VMworld conference at the Moscone Center in San Francisco, California. If you are attending the conference, come visit us at the Docker booth #230 and make sure to attend the following Docker-related talks, demos, discussions and meetups where you can meet and chat with fellow Dockerites:


Monday, August 25th:

3:30 PM – 4:30 PM, Moscone West, Room 2014

VMware NSX for Docker, Containers & Mesos by Aaron Rosen (Staff Engineer, VMware) and Somik Behera (NSX Product Manager, VMware)

This session will provide a recipe for architecting massively elastic applications, be it big data applications or developer environments such as Jenkins, on top of VMware SDDC Infrastructure. We will describe the use of app isolation technologies such as LXC & Docker together with resource managers such as Apache Mesos & YARN to deliver an Open Elastic Applications & PaaS for mainstream apps such as Jenkins as well as specialized big data applications. We will cover a customer case study that leverages VMware SDDC to create an Open Elastic PaaS leveraging VMware NSX for the data communication fabric.

 

5:30 PM – 6:30 PM, Moscone West, Room 2006

VMware and Docker – Better Together by Ben Golub (CEO, Docker, Inc) and Chris Wolf (VP & Americas CTO, VMware)

Attend this session to gain deep insights into the VMware and Docker collective strategy. As technology evolves, use cases will abound for VMs, containers, and combinations of each. Key elements of the Docker platform – Docker Engine and Docker Hub – are explored, along with specific vCloud Suite integrations. Attendees will leave this session with knowledge of highly differentiated VMware and Docker integration points that provide leading flexibility, performance, security, scalability, and management capabilities. Ample time for Q&A is provided to have your most pressing questions answered.

This breakout session will begin with an overview of the key elements of the Docker platform, Docker Engine and Docker Hub.  We review the similarities and differences between Docker and VMware and illustrate through use cases.  We will then discuss how to use Docker and VMware together to take advantage of both technologies’ strengths and demo the lifecycle of a simple application.  We conclude with an overview of the product roadmaps for both.

 

Tuesday, August 26th:

9:00 AM – 10:00 AM, Online

Docker Online Meetup #5: Docker and VMware – Better Together by Aaron Huslage (Solution Architect, Docker, Inc)

This webinar will cover the key elements of the Docker platform, Docker Engine and Docker Hub. We will then discuss the similarities and differences between Docker and VMware as well as the advantages of using them together. Presentation will be followed by a Q&A session. Register on our meetup page.

 

12:30 PM – 1:30 PM, Marriott, Yerba Buena Level, Salon 6

VMware and Docker – Better Together by Ben Golub (CEO, Docker, Inc) and Chris Wolf (VP & Americas CTO, VMware)

Attend this session to gain deep insights into the VMware and Docker collective strategy. As technology evolves, use cases will abound for VMs, containers, and combinations of each. Key elements of the Docker platform – Docker Engine and Docker Hub – are explored, along with specific vCloud Suite integrations. Attendees will leave this session with knowledge of highly differentiated VMware and Docker integration points that provide leading flexibility, performance, security, scalability, and management capabilities. Ample time for Q&A is provided to have your most pressing questions answered.

This breakout session will begin with an overview of the key elements of the Docker platform, Docker Engine and Docker Hub.  We review the similarities and differences between Docker and VMware and illustrate through use cases.  We will then discuss how to use Docker and VMware together to take advantage of both technologies’ strengths and demo the lifecycle of a simple application.  We conclude with an overview of the product roadmaps for both.

 

3:30 PM – 4:30 PM, Moscone West, Room 3003

DevOps Demystified! Proven Architectures to Support DevOps Initiatives by Ryan Shondell (Director, Solutions Architecture, VMware) and Aaron Sweemer (Principal Systems Engineer, VMware)

DevOps is the most demanded use-case architecture by VMware customers. Numerous VMware engineers conducted and reviewed a field validated DevOps architecture and best practice methodology in early 2014. This session highlights key findings from the VMware field exercise and provides highly detailed architecture diagrams and a step-by-step methodology for supporting the DevOps initiatives through the vCloud Suite and open standards such as OpenStack. Attendees will leave the session with detailed integrations for common DevOps tools and everything needed to fully support DevOps initiatives using VMware technologies.

 

5:00 PM – 6:00 PM, Marriott, Yerba Buena Level, Salon 6

VMware NSX for Docker, Containers & Mesos by Aaron Rosen (Staff Engineer, VMware) and Somik Behera (NSX Product Manager, VMware)

This session will provide a recipe for architecting massively elastic applications, be it big data applications or developer environments such as Jenkins, on top of VMware SDDC Infrastructure. We will describe the use of app isolation technologies such as LXC & Docker together with resource managers such as Apache Mesos & YARN to deliver an Open Elastic Applications & PaaS for mainstream apps such as Jenkins as well as specialized big data applications. We will cover a customer case study that leverages VMware SDDC to create an Open Elastic PaaS leveraging VMware NSX for the data communication fabric.

 

5:30 PM – 6:30 PM, Moscone West, Room 3007

A DevOps Story: Unlocking the Power of Docker with VMware platform and its ecosystem by George Hicken (Staff Engineer, VMware) and Aaron Sweemer (Principal Systems Engineer, VMware)

Docker is creating quite a bit of industry buzz right now.  Many of us in IT are starting to get questions from our developers about Docker.  We are starting to see discussions in social media around Docker, and potential integrations with management tools like vCAC.  And many of us are starting to ponder deeper architectural questions, and how deployment models and management paradigms might change with containerization in the mix.  In this session we plan to discuss how to best integrate Docker with the VMware platform, and we will demonstrate that it is in the combination of Docker and VMware that a true “better together” Enterprise grade DevOps solution actually emerges.

 

6:42 PM – 7:30 PM, Moscone Center South (747 Howard St) near the stairs (see map on our meetup page)

Docker PUSH & RUN (5K) special VMworld – with the Docker Team

Meet the Docker team during a fun ~5K RUN around San Francisco. After the run, you are welcome to chat and have a drink with us at the Docker HQ. More details on the meetup page.

 

Wednesday, August 27th:

3:30 PM – 4:30 PM, Moscone West, Room 2014

DevOps Demystified! Proven Architectures to Support DevOps Initiatives by Ryan Shondell (Director, Solutions Architecture, VMware) and Aaron Sweemer (Principal Systems Engineer, VMware)

DevOps is the most demanded use-case architecture by VMware customers. Numerous VMware engineers conducted and reviewed a field validated DevOps architecture and best practice methodology in early 2014. This session highlights key findings from the VMware field exercise and provides highly detailed architecture diagrams and a step-by-step methodology for supporting the DevOps initiatives through the vCloud Suite and open standards such as OpenStack. Attendees will leave the session with detailed integrations for common DevOps tools and everything needed to fully support DevOps initiatives using VMware technologies.

 

Every day during the Conference

Come visit us at the Docker booth #230

We think you’ll find our talks stimulating and interesting, and hopefully they’ll answer some of your questions about the Docker platform and containerization. Let us know by stopping by the Docker booth #230. We hope to see you at the Conference!

Dockerize early and often,

- The Docker Team

Orchestrating Docker containers in production using Fig

In the last blog post about Fig we showed how you could define and run a multi-container app locally.

We’re now going to show you how you can deploy this app to production. Here’s a screencast of the whole process:

Let’s continue from where we left off in the last blog post. First, we want to put the code we wrote up onto GitHub. You’ll need to initialize and commit your code into a new Git repository.

$ git init
$ git add .
$ git commit -m "Initial commit"

Then create a new repository on GitHub and follow the instructions for adding it as a remote to your local Git repository. For example, if your repository were called bfirsh/figdemo, you’d run these commands:

$ git remote add origin git@github.com:bfirsh/figdemo.git
$ git push -u origin master

Next, you’ll need to get yourself a server to host your app. Any cloud provider will work, so long as it is running Ubuntu and available on a public IP address.

Log on to your server using SSH and follow the instructions for installing Docker and Fig on Ubuntu.

$ ssh root@[your server’s IP address]
# curl -sSL https://get.docker.io/ubuntu/ | sudo sh
# curl -L https://github.com/docker/fig/releases/download/0.5.2/linux > /usr/local/bin/fig
# chmod +x /usr/local/bin/fig

Now you’ll want to clone your GitHub repository to your server. You can find the clone URL on the right hand side of your repository page. For example:

# git clone https://github.com/bfirsh/figdemo.git
# cd figdemo

With your code now on the server, run fig up in daemon mode to start your app:

# fig up -d

That will pull the redis image from Docker Hub, build the image for your web service that is defined in Dockerfile, then start up the redis and web containers and link them together. If you go to http://[your server’s IP address]:5000 in your browser, you will see that your app is now running on your server.
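For reference, the fig.yml from the previous post defines those two services and looks roughly like this (reconstructed here as a reminder; your exact service names, ports, paths, and base image may differ):

web:
  build: .
  ports:
   - "5000:5000"
  volumes:
   - .:/code
  links:
   - redis
redis:
  image: redis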

Deploying new code

Let’s deploy new code to our server. Make a change to the message in app.py on your local machine, and check the change is correct by running fig up and opening your local development version of the app from the previous blog post in your browser.

If the change looks good, commit it to Git:

$ git commit -m "Update message" app.py
$ git push

Then, on your server, pull the changes down:

# git pull

You then need to build a new Docker image containing these changes and recreate the containers with fig up:

# fig build
# fig up -d

You should now see the changes reflected on http://[your server’s IP address]:5000! One thing to note is that it has remembered how many times you have viewed the page. This is because the data stored in Redis is persisted in a Docker volume.
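If you’re curious where that data lives, you can look at the containers Fig created and inspect the Redis one (a quick check, assuming Fig’s default <directory>_<service>_<number> container naming, so the Redis container here would be figdemo_redis_1; the exact names and inspect output vary with your setup and Docker version):

# fig ps
# docker inspect figdemo_redis_1   # the "Volumes" section shows where the Redis data lives on the host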

Next steps

That’s the basics of deploying an app to production using Docker. If you want to do more complex setups, you can create a separate fig.yml for your production environment, e.g. fig-production.yml, and tell Fig to use this file when running fig up:

$ fig -f fig-production.yml up -d

If you’re using a separate file for production, this will let you do things like the following (a combined sketch appears after the list):

  • Expose your web app on port 80 by replacing 5000:5000 with 80:5000 in your ports definition.
  • Remove the volumes statement for injecting code into your container. This exists so code can update immediately in your development environment, but is unnecessary in production when you are building images.
  • Use the Docker Hub to ship code to your server as an image. If you can set up an automated build on Docker Hub to build an image from your code, you could replace the build statement in your web service with an image that points to that repository.
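Putting those ideas together, a fig-production.yml might look something like this (a sketch, not a drop-in file: bfirsh/figdemo stands in for whatever automated build repository you set up on Docker Hub, and the ports assume the example app from the previous post listening on 5000):

web:
  image: bfirsh/figdemo
  ports:
   - "80:5000"
  links:
   - redis
redis:
  image: redis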

Those are just some ideas – we’d love to hear of other things you have come up with in the comments.

Learn More

Announcing Docker 1.2.0

The hardworking folk at Docker, Inc. are proud to announce the release of version 1.2.0 of Docker. We’ve made improvements throughout the Docker platform, including updates to Docker Engine, Docker Hub, and our documentation.

1.2.0

Highlights include these new features:

restart policies

We added a --restart flag to docker run to specify a restart policy for your container. Currently, there are three policies available:

  • no – Do not restart the container if it dies. (default)
  • on-failure – Restart the container if it exits with a non-zero exit code.
    • Can also accept an optional maximum restart count (e.g. on-failure:5).
  • always – Always restart the container no matter what exit code is returned.

This deprecates the --restart flag on the Docker daemon.

A few examples:
  • Redis will endlessly try to restart if the container exits:
docker run --restart=always redis
  • If redis exits with a non-zero exit code, it will try to restart 5 times before giving up:
docker run --restart=on-failure:5 redis

--cap-add and --cap-drop

Until now, Docker containers could either be granted complete capabilities or follow a whitelist of allowed capabilities while dropping all others. Using --privileged would grant all capabilities inside a container rather than applying a whitelist, which is not recommended for production use because it is effectively as unsafe as running directly on the host.

This release introduces two new flags for docker run, --cap-add and --cap-drop, that give you fine-grained control over the capabilities you want to grant to a particular container.

A few examples:
  • To change the status of the container’s interfaces:
docker run --cap-add=NET_ADMIN ubuntu sh -c "ip link set eth0 down"
  • To prevent any `chown` in the container:
docker run --cap-drop=CHOWN ...
  • To allow all capabilities except `mknod`:
docker run --cap-add=ALL --cap-drop=MKNOD ...

--device

Previously, you could use devices inside your containers by bind mounting them (with `-v`) in a --privileged container. In this release, we introduce the --device flag to `docker run`, which lets you use a device without requiring a --privileged container.

Example:
  • To use the sound card inside your container:
docker run --device=/dev/snd:/dev/snd ...

Writable `/etc/hosts`, `/etc/hostname` and `/etc/resolv.conf`

You can now edit /etc/hosts, /etc/hostname and /etc/resolv.conf in a running container. This is useful if you need to install BIND or other services that might override one of those files.

Note, however, that changes to these files are not saved during a docker build and so will not be preserved in the resulting image. The changes will only “stick” in a running container.
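For example, you can now append an entry to /etc/hosts from inside a running container (a quick illustrative session; the hostname and address are made up):

$ docker run -it ubuntu bash
root@container:/# echo "10.0.0.5  db.internal" >> /etc/hosts    # previously this file could not be modified from inside the container
root@container:/# cat /etc/hosts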

Docker proxy in a separate process

The Docker userland proxy that routes traffic from published ports to your containers now runs in its own separate process (one process per connection). This greatly reduces the load on the daemon, which considerably increases stability and efficiency.

Other Improvements & Changes

  • When using docker rm -f, Docker now kills the container (instead of stopping it) before removing it. If you intend to stop the container cleanly first, use docker stop.
  • Add support for IPv6 addresses in --dns (see the example below)
  • Search on private registries
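For example, you can now point a container at an IPv6 resolver (the address below is Google’s public IPv6 DNS, used purely as an illustration):

docker run --dns=2001:4860:4860::8888 ubuntu cat /etc/resolv.conf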

We hope you enjoy this release and find it useful. As always, please don’t hesitate to contact us with questions, comments or kudos.

Learn More

Announcing DockerCon Europe 2014


Today we are very happy to announce DockerCon Europe 2014, the first official Docker conference organized in Europe, by both Docker, Inc. and members of the community. The conference will take place in Amsterdam, at the NEMO science center, December 4th and 5th.


We will also have a full day of training prior to the conference, led by James Turnbull on December 3rd.

The official website is still under construction as we are finalizing the last details, but today we can announce that the Docker team will be present as well as incredible speakers from the Docker community including:

The call for papers opens today; you can submit your talk here. If you are interested in our sponsorship options, please contact us at dockercon-sponsor-eu@docker.com.

We also want to give a special thanks to Pini Reznik, Harm Boertien, Mark Coleman, Maarten Dirkse and the Docker Amsterdam community, who are working with us to bring the best of Docker to Europe.

Save the dates and stay tuned for more announcements!

Automagical Deploys from Docker Hub

I want the speed and other advantages of a static site generator, but with the flexibility of a database-backed CMS.

I want performance, flexibility, and ease of maintenance.

From cars to computers, getting both flexibility and performance all too often requires a carefully weighed set of trade-offs. Generating content for your readers and fans on the web is no exception. On the one hand, techies have recently embraced static site generators such as Jekyll, and for good reason, as these systems provide a lot of advantages (e.g., deploying straight to Github pages, high performance, and ease of keeping your content in version control). However, they are not without their own challenges such as steep learning curves and slow, cumbersome workflows.

On the other hand, a flexible, database-backed content management system such as WordPress can be a better choice in some situations. It’s very nice to have the flexibility to allow non-technical people to edit and update content, and for authors to edit online from anywhere without needing a special suite of software and skills. However, CMSs such as WordPress can also be slow, temperamental, and hard to optimize.

Lately, I’ve been trying to find a good balance for my website. Currently, it takes the techie-approved approach: serving static pages via Jekyll. There are lots of things to recommend this approach. I LOVE that people from the community can make pull requests to the site from Github, which has helped me clean it up tremendously. I also value the performance and general ease of maintenance of just serving up static files using Nginx. However, using Jekyll (especially on new computers) can be slow and cumbersome — my stack is based on Octopress and it gives me a lot of heartache due to my noob status in the Ruby ecosystem and because of some not-so-great design decisions I made early on. Additionally, if I merge in a minor change on Github, then I have to fetch the changes to a local computer where Octopress has been set up to perform correctly, re-generate the site using some rake commands and then deploy it again. Not immensely difficult, but not trivial either, and if I am catching small mistakes every day and I want to keep the blog in sync instead of letting it slip, the time to regenerate and re-deploy the site starts to add up quickly. Usually I just let things slip, including keeping the changes up to date on Github.

Additionally, Github’s online markdown editor is nice (and fast), and I wouldn’t mind writing whole articles on there from time to time. If I could write using only Github and deploy on commit, a world of possibilities would open up. Yes there is Github Pages, but if I decide to switch static site generators later on I am hosed (plus, I want to eventually finish migrating to hugo).

Game on.

So what to do? Well, lately I’ve been thinking that I could reduce a lot of pain by chaining together some automation systems and deploying directly from an automated build on Docker Hub by using the great Web Hooks feature. This would allow me to trigger a re-build and re-deploy of the blog whenever there is a change in source control on master, and it would all run asynchronously without needing my attention. Better still, this technique could be applied generally to other stacks and other static site generators, letting anyone roll out a solution that fits their needs no matter what they’re building.

To accomplish this, I did the following:

  1. Built a Dockerfile to compile the latest static site from source using our chosen stack (Octopress in my case)
  2. Set up an automated build on Docker Hub which will re-build the image from scratch whenever a change is made on Github (including merges and the online editor)
  3. Used Docker Hub’s Web Hooks to make a POST request to a small “hook listener” server running on my Linode which re-deploys the new image (props to cpuguy83 for helping me with this)

Step 1: Build a Dockerfile for our static site generator

This is my Dockerfile for this Octopress build, it installs dependencies and then creates the site itself:

from debian:wheezy

run apt-get update && \
    apt-get install -y curl build-essential

run apt-get install -y ruby1.9.3
run apt-get install -y lsb-release && \
    curl -sL https://deb.nodesource.com/setup | bash
run apt-get install -y nodejs npm
run apt-get install -y nginx
run gem install bundler

add Gemfile /blog/Gemfile
workdir /blog
run bundle install -j8

add . /blog

run rake install['pageburner'] && rake generate
run rm /etc/nginx/sites-available/default
add nginx/nathanleclaire.com /etc/nginx/sites-available/nathanleclaire.com
run ln -s /etc/nginx/sites-available/nathanleclaire.com /etc/nginx/sites-enabled/nathanleclaire.com

run echo "daemon off;" >>/etc/nginx/nginx.conf

expose 80

cmd ["service", "nginx", "start"]

Apparently, Jekyll has a Node.js dependency these days. Who knew? (Side note: Writing my Dockerfiles in all lowercase like this makes me feel like e e cummings. A really geeky e e cummings.)

This Dockerfile is really cool because the bundle install gets cached as long as the Gemfile doesn’t get changed. The only part that takes a non-trivial amount of time during the docker build of the image is the rake generate command that spits out the final static site, so the whole process runs quite quickly (unfortunately, though, Highland, Docker’s automated build robot, doesn’t cache builds).

I would love to see some more of these for various static site generating stacks, and I intend to contribute just a vanilla Octopress / Jekyll one at some point soon.

Octopress is pretty finicky about only working with Ruby 1.9.3, so I was fortunate to be able to find a Debian package that fulfills those requirements. The static files get served up with nginx on port 80 of the container (which I just proxy to the host for now), which works well enough for my purposes. In fact, I just have all the gzip and other per-site (caching headers etc.) settings in the nginx config in the container, so I can deploy that stuff this way too (just change the source in the repo and push to Github!). I like this kind of high-level-ops knowledge PaaS fusion mutated weirdness. Yum.

This approach cuts my “native” sites-available file for the websites down to something like:

server {
  server_name nathanleclaire.com;

  location / {
       proxy_pass http://localhost:8000;
  }

  location /hubhook {
      proxy_pass https://localhost:3000;
  }
}

The /hubhook is some proxy-matic goodness, which farms out the task to re-deploy the site to a simple but effective “Docker Hub Listener” worker that my colleague Brian Goff originally wrote (and which I twisted to my own nefarious purposes, muahaha). Okay, on to the next steps.

Step 2: Set up Automated Build for this repo on Docker Hub

This step is crucial, and really illustrates the power and flexibility of Hub’s automated builds (which if you haven’t tried them already, you totally should). When a change (commit, merge or otherwise) hits the dockerize branch on Github (though it could be any branch, and eventually it will be master for me), it triggers a re-build of the images with the most up-to-date Dockerfile. This means that new articles I have written or content that I have added will be re-built asynchronously by Highland without needing any attention from me. So, even if I merge in a small revision from another user on Github or make a quick edit with the online editor, the site will be rebuilt from source (which is mostly Markdown files and a “theme” template). Note that automated builds work with Bitbucket too if you prefer Bitbucket!!

And, critically, this method takes advantage of a powerful Docker Hub feature called Web Hooks which allows you to make a POST request to the endpoint of your choice whenever a new build is complete. This is what I use to re-deploy the website.

Step 3: Post to the hook listener server and re-deploy!

I had been kicking around the idea of implementing something like this for a while, but I was missing a piece. I had no server to listen for the request from Docker Hub when the build was completed. Then, serendipitously, my colleague Brian Goff (also known as super-helpful community member cpuguy83) demoed a “webhook listener” that was the very thing I was thinking of writing myself (only his was better thought out, to be honest). It’s a tiny little Golang program which allows you to register handlers that run when the hook hits, and which has support for both self-signed SSL (so you can send the request with encryption / https from Docker Hub) and for API keys (so that even if black-hats know the endpoint to hit, they won’t know the API key to pass to actually get it to do anything).

Link to the repo here:

To get it to work, I generated an OpenSSL key and cert (which I linked to in a config.ini file passed to Brian’s server program).

I wrote this script to automate that key/cert generation:

#!/bin/bash

openssl genrsa -des3 -out server.key 1024 && \
  openssl req -new -key server.key -out server.csr && \
  cp server.key server.key.org && \
  openssl rsa -in server.key.org -out server.key && \
  openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

Then I generated a random API key and also added it to the config file. So, in the end, the config.ini file that I use lives in the same directory as the dockerhub-webhook-listener binary and it looks like this:

[apiKeys]
key = bigLongRandomApiKeyString

[tls]
key = ../server.key
cert = ../server.crt

Lastly, I wrote a simple shell script to run whenever the hub hook listener received a valid request, and wrote a Go handler to invoke it from Brian’s server program.

The shell script looks like this:

#!/bin/bash

sudo docker pull nathanleclaire/octoblog:latest
docker stop blog
docker rm blog
docker run --name blog -d -p 8000:80 nathanleclaire/octoblog

Just keeping it simple for now.

The Go code looks like this:

func reloadHandler(msg HubMessage) {
  log.Println("received message to reload ...")
  out, err := exec.Command("../reload.sh").Output()
  if err != nil {
    log.Println("ERROR EXECUTING COMMAND IN RELOAD HANDLER!!")
    log.Println(err)
    return
  }
  log.Println("output of reload.sh is", string(out))
}

As you can see, there’s nothing too fancy here. It’s just Plain Old Golang and Shell Script. In fact, it could be a lot more sophisticated, but this works just fine, which is part of what pleases me a lot about this setup.

Finally, we use the Docker Hub webhooks configuration to make the POST request to the endpoint exposed on the public Internet by this middleware server. In my case, I added an endpoint called /hubhook to my nginx configuration that proxies the outside request to the dockerhub-webhook-listener running on localhost:3000. The API key is passed as a query string parameter, i.e., the request is to https://nathanleclaire.com/hubhook?apikey=bigLongRandomApiKeyString.
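If you want to sanity-check that the listener is wired up before pointing Docker Hub at it, you can hit the endpoint by hand (using the same illustrative API key as in the config above; add -k if the certificate in front of the endpoint is self-signed):

$ curl -X POST "https://nathanleclaire.com/hubhook?apikey=bigLongRandomApiKeyString"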

So, pieced together, this is how this all works:

  1. Commit hits Github
  2. Docker Hub builds image
  3. Docker Hub hits middleware server with hook
  4. Server pulls image, and restarts the server

Automagical.

Now my deploys are launched seamlessly from source control push. I really enjoy this. Now that everything is set up, it will work smoothly without needing any manual intervention from me (though I need additional logging and monitoring around the systems involved to ensure their uptime and successful operation, in particular, the hub hook listener – oh god, am I slowly turning into a sysadmin? NAH)

There is still a lot of room for improvement in this setup (specifically around how Docker images get moved around and the ability to extract build artifacts from them, both of which should improve in the future), but I hope I have stimulated your imagination with this setup. I really envision the future of application portability as being able to work and edit apps anywhere, without needing your hand-crafted pet environment, and being able to rapidly deploy them without having to painstakingly sit through every step of the process yourself.

So go forth and create cool (Dockerized) stuff!

Docker launches public training courses


 

Together with the Docker 1.0 release at DockerCon we also announced the launch of commercial services to support Docker. One of these services is education and we’re thrilled to announce the first dates for public Docker training. We’re running training in San Francisco and New York initially.

The course is called Introduction to Docker and is a two-day classroom-based training course. It will introduce you to the Docker platform and take you through installing, integrating, and running it in your development and operations environments.

We’ll explain why Docker exists and why you should care about it. We’ll then take you through a variety of hands-on exercises designed to help you quickly grow from a beginner into a seasoned user, including:

  • Installing the Docker Engine
  • Creating our first Docker container
  • Building Docker images
  • Storing and retrieving Docker images from Docker Hub
  • Building containers from images
  • Using Docker for sandboxing and testing
  • Deploying applications with Docker

By the end of the course you will be familiar with the “why” of Docker. You will also be able to perform the basic tasks needed to get started with Docker and integrate it into your working environment.

Our first training dates are September 17 (San Francisco), October 6 (New York City), and October 20 (San Francisco) at a cost of $1,599 per person.

Sign up fast to avoid missing out!

Getting Started with Docker Orchestration using Fig

Last month we announced Docker, Inc.’s acquisition of Orchard, builders of a Docker hosting service as well as an orchestration tool, Fig.  If you’ve started using Docker and have been wondering how to define and control a multi-container service – for example, a web app in one container and a database in another – then we think you’ll find Fig really helpful.


Happy SysAdmin Day!

The last Friday in July is the day the world celebrates sysadmins and all that they do.  And as more and more apps dominate our personal and professional waking hours and with the non-stop growth in the number of servers – physical, virtual, and cloud – the role is only growing in importance.

Sysadmins have a special role in the history of Docker: back in the day, Solomon was a sysadmin, and it was his frustration with trying to manage and maintain golden tarballs that triggered his thinking, “There’s gotta be a better way….”

Several years and an iteration or two later, Docker was born. The clean separation of concerns that Docker provides between the developer – responsible for the app inside of the container – and the sysadmin – responsible for the “outside,” or deploying, scaling, and managing the container – has made it popular with both professions. As one of the many great examples out there, check out how the sysadmins at New Relic created standardized environments for their development teams.

On this 15th annual celebration of SysAdmin Day, our small gift to the sysadmins out there is to highlight a couple Docker Hub repos that might help make their day-to-day a little easier:

A big THANK-YOU to all sysadmins!
- The Docker Team

SysAdmin Day Lunch Celebration

Join Sys Admins from both the Docker and Rackspace teams for a lunch and an informal Q&A session at Geekdom SF.

Learn More