Category: Latest Posts

Announcing DockerCon Europe 2014


Today we are very happy to announce DockerCon Europe 2014, the first official Docker conference organized in Europe, by both Docker, Inc. and members of the community. The conference will take place in Amsterdam, at the NEMO science center, December 4th and 5th.


We will also have a full day of training prior to the conference, led by James Turnbull, on December 3rd.

The official website is still under construction as we are finalizing the last details, but today we can announce that the Docker team will be present, along with incredible speakers from the Docker community.

The call for papers opens today; you can submit your talk here. If you are interested in our sponsorship options, please contact us at dockercon-sponsor-eu@docker.com.

We also want to give a special thanks to Pini Reznik, Harm Boertien, Mark Coleman, Maarten Dirkse, and the Docker Amsterdam community, who are working with us to bring the best of Docker to Europe.

Save the dates and stay tuned for more announcements!

Automagical Deploys from Docker Hub

I want the speed and other advantages of a static site generator, but with the flexibility of a database-backed CMS.

I want performance, flexibility, and ease of maintenance.

From cars to computers, getting both flexibility and performance all too often requires a carefully weighed set of trade-offs. Generating content for your readers and fans on the web is no exception. On the one hand, techies have recently embraced static site generators such as Jekyll, and for good reason, as these systems provide a lot of advantages (e.g., deploying straight to Github pages, high performance, and ease of keeping your content in version control). However, they are not without their own challenges such as steep learning curves and slow, cumbersome workflows.

On the other hand, a flexible, database-backed content management system such as WordPress can be a better choice in some situations. It’s very nice to have the flexibility to allow non-technical people to edit and update content, and for authors to edit online from anywhere without needing a special suite of software and skills. However, CMSs such as WordPress can also be slow, temperamental, and hard to optimize.

Lately, I’ve been trying to find a good balance for my website. Currently, it takes the techie-approved approach: serving static pages via Jekyll. There are lots of things to recommend this approach. I LOVE that people from the community can make pull requests to the site from Github, which has helped me clean it up tremendously. I also value the performance and general ease of maintenance of just serving up static files using Nginx. However, using Jekyll (especially on new computers) can be slow and cumbersome — my stack is based on Octopress and it gives me a lot of heartache due to my noob status in the Ruby ecosystem and because of some not-so-great design decisions I made early on. Additionally, if I merge in a minor change on Github, then I have to fetch the changes to a local computer where Octopress has been set up to perform correctly, re-generate the site using some rake commands and then deploy it again. Not immensely difficult, but not trivial either, and if I am catching small mistakes every day and I want to keep the blog in sync instead of letting it slip, the time to regenerate and re-deploy the site starts to add up quickly. Usually I just let things slip, including keeping the changes up to date on Github.
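
For reference, the manual loop I’m trying to escape looks something like this (a sketch using the standard Octopress rake tasks; the exact commands vary by setup):

# pull in changes that were merged on Github
git pull origin master

# regenerate the static site, then push it to the server
bundle exec rake generate
bundle exec rake deploy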

Additionally, Github’s online markdown editor is nice (and fast), and I wouldn’t mind writing whole articles on there from time to time. If I could write using only Github and deploy on commit, a world of possibilities would open up. Yes, there is Github Pages, but if I decide to switch static site generators later on, I am hosed (plus, I want to eventually finish migrating to hugo).

Game on.

So what to do? Well, lately I’ve been thinking that I could reduce a lot of pain by chaining together some automation systems and deploying directly from an automated build on Docker Hub by using the great Web Hooks feature. This would allow me to trigger a re-build and re-deploy of the blog whenever there is a change in source control on master, and it would all run asynchronously without needing my attention. Better still, this technique could be applied generally to other stacks and other static site generators, letting anyone roll out a solution that fits their needs no matter what they’re building.

To accomplish this, I did the following:

  1. Built a Dockerfile to compile the latest static site from source using our chosen stack (Octopress in my case)
  2. Set up an automated build on Docker Hub which will re-build the image from scratch whenever a change is made on Github (including merges and the online editor)
  3. Used Docker Hub’s Web Hooks to make a POST request to a small “hook listener” server running on my Linode which re-deploys the new image (props to cpuguy83 for helping me with this)

Step 1: Build a Dockerfile for our static site generator

This is my Dockerfile for this Octopress build, it installs dependencies and then creates the site itself:

from debian:wheezy

run apt-get update && \
    apt-get install -y curl build-essential

run apt-get install -y ruby1.9.3
run apt-get install -y lsb-release && \
    curl -sL https://deb.nodesource.com/setup | bash
run apt-get install -y nodejs npm
run apt-get install -y nginx
run gem install bundler

add Gemfile /blog/Gemfile
workdir /blog
run bundle install -j8

add . /blog

run rake install['pageburner'] && rake generate
run rm /etc/nginx/sites-available/default
add nginx/nathanleclaire.com /etc/nginx/sites-available/nathanleclaire.com
run ln -s /etc/nginx/sites-available/nathanleclaire.com /etc/nginx/sites-enabled/nathanleclaire.com

run echo "daemon off;" >>/etc/nginx/nginx.conf

expose 80

cmd ["service", "nginx", "start"]

Apparently, Jekyll has a Node.js dependency these days. Who knew? (Side note: Writing my Dockerfiles in all lowercase like this makes me feel like e e cummings. A really geeky e e cummings.)

This Dockerfile is really cool because the bundle install gets cached as long as the Gemfile doesn’t change. The only part that takes a non-trivial amount of time during the docker build of the image is the rake generate command that spits out the final static site, so the whole process runs quite quickly (unfortunately, though, Highland, Docker’s automated build robot, doesn’t cache builds).
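
You can see the caching in action on a local rebuild (hypothetical, abbreviated output; the step numbers are illustrative):

$ docker build -t nathanleclaire/octoblog .
...
Step 9 : run bundle install -j8
 ---> Using cache
...
Step 11 : run rake install['pageburner'] && rake generate
 ---> Running in 7c3f2d1a9b2e

Only the layers after the changed content re-run; the gem installation stays cached.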

I would love to see some more of these for various static site generating stacks, and I intend to contribute just a vanilla Octopress / Jekyll one at some point soon.

Octopress is pretty finicky about only working with Ruby 1.9.3, so I was fortunate to be able to find a Debian package that fulfills those requirements. The static files get served up with nginx on port 80 of the container (which I just proxy to the host for now), which works well enough for my purposes. In fact, I just have all the gzip and other per-site (caching headers etc.) settings in the nginx config in the container, so I can deploy that stuff this way too (just change the source in the repo and push to Github!). I like this kind of high-level-ops knowledge PaaS fusion mutated weirdness. Yum.

This approach cuts my “native” sites-available file for the websites down to something like:

server {
  server_name nathanleclaire.com;

  location / {
       proxy_pass http://localhost:8000;
  }

  location /hubhook {
      proxy_pass https://localhost:3000;
  }
}

The /hubhook is some proxy-matic goodness, which farms out the task to re-deploy the site to a simple but effective “Docker Hub Listener” worker that my colleague Brian Goff originally wrote (and which I twisted to my own nefarious purposes, muahaha). Okay, on to the next steps.

Step 2: Set up Automated Build for this repo on Docker Hub

This step is crucial, and really illustrates the power and flexibility of Hub’s automated builds (which, if you haven’t tried them already, you totally should). When a change (commit, merge, or otherwise) hits the dockerize branch on Github (though it could be any branch, and eventually it will be master for me), it triggers a re-build of the image with the most up-to-date Dockerfile. This means that new articles I have written or content that I have added will be re-built asynchronously by Highland without needing any attention from me. So, even if I merge in a small revision from another user on Github or make a quick edit with the online editor, the site will be rebuilt from source (which is mostly Markdown files and a “theme” template). Note that automated builds work with Bitbucket too, if you prefer!

And, critically, this method takes advantage of a powerful Docker Hub feature called Web Hooks which allows you to make a POST request to the endpoint of your choice whenever a new build is complete. This is what I use to re-deploy the website.

Step 3: Post to the hook listener server and re-deploy!

I had been kicking around the idea of implementing something like this for a while, but I was missing a piece: I had no server to listen for the request from Docker Hub when the build was completed. Then, serendipitously, my colleague Brian Goff (also known as super-helpful community member cpuguy83) demoed a “webhook listener” that was the very thing I was thinking of writing myself (only his was better thought out, to be honest). It’s a tiny little Golang program which allows you to register handlers that run when the hook hits, and which has support for both self-signed SSL (so you can send the request with encryption / https from Docker Hub) and for API keys (so that even if black-hats know the endpoint to hit, they won’t know the API key to pass to actually get it to do anything).

Link to the repo here:

To get it to work, I generated an OpenSSL key and cert (which I linked to in a config.ini file passed to Brian’s server program).

I wrote this script to automate that key/cert generation:

#!/bin/bash

# Generate a passphrase-protected RSA key, create a signing request,
# strip the passphrase so the listener can start unattended, and
# self-sign a certificate valid for one year.
openssl genrsa -des3 -out server.key 1024 && \
  openssl req -new -key server.key -out server.csr && \
  cp server.key server.key.org && \
  openssl rsa -in server.key.org -out server.key && \
  openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

Then I generated a random API key and also added it to the config file. So, in the end, the config.ini file that I use lives in the same directory as the dockerhub-webhook-listener binary and it looks like this:

[apiKeys]
key = bigLongRandomApiKeyString

[tls]
key = ../server.key
cert = ../server.crt

Lastly, I wrote a simple shell script to run whenever the hub hook listener received a valid request, and wrote a Go handler to invoke it from Brian’s server program.

The shell script looks like this:

#!/bin/bash

# Pull the freshly built image, then replace the running blog container.
# (Assumes this user can talk to the Docker daemon; prefix with sudo if not.)
docker pull nathanleclaire/octoblog:latest
docker kill blog
docker rm blog
docker run --name blog -d -p 8000:80 nathanleclaire/octoblog

Just keeping it simple for now.

The Go code looks like this:

// reloadHandler is registered with the hook listener and shells out to
// the reload script whenever a valid request arrives. (HubMessage is a
// type provided by the dockerhub-webhook-listener itself.)
func reloadHandler(msg HubMessage) {
  log.Println("received message to reload ...")
  out, err := exec.Command("../reload.sh").Output()
  if err != nil {
    log.Println("ERROR EXECUTING COMMAND IN RELOAD HANDLER!!")
    log.Println(err)
    return
  }
  log.Println("output of reload.sh is", string(out))
}

As you can see, there’s nothing too fancy here. It’s just Plain Old Golang and Shell Script. In fact, it could be a lot more sophisticated, but this works just fine, which is part of what pleases me a lot about this setup.

Finally, we use the Docker Hub webhooks configuration to make the POST request to the endpoint exposed on the public Internet by this middleware server. In my case, I added an endpoint called /hubhook to my nginx configuration that proxies the outside request to the dockerhub-webhook-listener running on localhost:3000. The API key is passed as a query string parameter, i.e., the request is to https://nathanleclaire.com/hubhook?apikey=bigLongRandomApiKeyString.
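
You can sanity-check the whole chain without waiting on a real build by faking the Hub’s request yourself (a sketch; the real payload is a JSON document describing the build, but this particular handler ignores the body, so an empty one does the trick):

# -k since the certificate is self-signed
curl -k -X POST \
    'https://nathanleclaire.com/hubhook?apikey=bigLongRandomApiKeyString' \
    -H 'Content-Type: application/json' \
    -d '{}'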

So, pieced together, this is how this all works:

  1. Commit hits Github
  2. Docker Hub builds image
  3. Docker Hub hits middleware server with hook
  4. Server pulls image, and restarts the server

Automagical.

Now my deploys are launched seamlessly by a push to source control, and I really enjoy it. Everything is set up to work smoothly without manual intervention from me (though I still need additional logging and monitoring around the systems involved, in particular the hub hook listener, to ensure their uptime and successful operation – oh god, am I slowly turning into a sysadmin? NAH).

There is still a lot of room for improvement in this setup (specifically around how Docker images get moved around and the ability to extract build artifacts from them, both of which should improve in the future), but I hope I have stimulated your imagination with this setup. I really envision the future of application portability as being able to work and edit apps anywhere, without needing your hand-crafted pet environment, and being able to rapidly deploy them without having to painstakingly sit through every step of the process yourself.

So go forth and create cool (Dockerized) stuff!

Docker launches public training courses


Together with the Docker 1.0 release at DockerCon we also announced the launch of commercial services to support Docker. One of these services is education and we’re thrilled to announce the first dates for public Docker training. We’re running training in San Francisco and New York initially.

The course is called Introduction to Docker and is a two-day classroom-based training course. It will introduce you to the Docker platform and take you through installing, integrating, and running it in your development and operations environments.

We’ll explain why Docker exists and why you should care about it. We’ll then take you through a variety of hands-on exercises designed to help you quickly grow from a beginner into a seasoned user, including:

  • Installing the Docker Engine
  • Creating our first Docker container
  • Building Docker images
  • Storing and retrieving Docker images from Docker Hub
  • Building containers from images
  • Using Docker for sandboxing and testing
  • Deploying applications with Docker

By the end of the course you will be familiar with the “why” of Docker. You will also be able to perform the basic tasks needed to get started with Docker and integrate it into your working environment.

Our first training dates are September 17 (San Francisco), October 6 (New York City), and October 20 (San Francisco) at a cost of $1,599 per person.

Sign up fast to avoid missing out!

Getting Started with Docker Orchestration using Fig

Last month we announced Docker, Inc.’s acquisition of Orchard, builders of a Docker hosting service as well as an orchestration tool, Fig.  If you’ve started using Docker and have been wondering how to define and control a multi-container service – for example, a web app in one container and a database in another – then we think you’ll find Fig really helpful.


Happy SysAdmin Day!

The last Friday in July is the day the world celebrates sysadmins and all that they do. As apps dominate more and more of our personal and professional waking hours, and as the number of servers – physical, virtual, and cloud – grows non-stop, the role is only gaining in importance.

Sysadmins have a special role in the history of Docker: back in the day, Solomon was a sysadmin, and it was his frustration with trying to manage and maintain golden tarballs that triggered his thinking, “There’s gotta be a better way….”

Several years and an iteration or two later, Docker was born. The clean separation of concerns that Docker provides between the developer – responsible for the app inside of the container – and the sysadmin – responsible for the “outside,” or deploying, scaling, and managing the container – has made it popular with both professions. As one of the many great examples out there, check out how the sysadmins at New Relic created standardized environments for their development teams.

On this 15th annual celebration of SysAdmin Day, our small gift to the sysadmins out there is to highlight a couple of Docker Hub repos that might help make their day-to-day a little easier:

A big THANK-YOU to all sysadmins!
- The Docker Team

SysAdmin Day Lunch Celebration

Join Sys Admins from both the Docker and Rackspace teams for a lunch and an informal Q&A session at Geekdom SF.


libswarm demo – logging

At DockerCon, we announced a new project called “libswarm”. I want to clarify what exactly libswarm is, what it does, and what it doesn’t do.

First, libswarm is not itself an orchestration tool. It does not and will not replace any orchestration tools.

Libswarm is first and foremost a library, not an end-user tool. It helps make it relatively trivial to compose other disparate tools together, including but not limited to orchestration tools.

Below is a quick demo showing off what libswarm can do with logging. I will be using code from this gist: https://gist.github.com/cpuguy83/b7c0f42e903bc13c46d6

Demo time!

# start a container that prints to stdout
docker -H tcp://10.0.0.2:2375 run -d --entrypoint /bin/sh debian:jessie -c \
    'while true; do echo this is a log message; sleep 1; done'

# fire up swarmd
./swarmd 'logforwarder tcp://10.0.0.2:2375' stdoutlogger
Getting logs tcp://10.0.0.2:2375 [agitated_yonath]
2014-07-17 19:04:22.42915222 +0000 UTC	tcp://10.0.0.2:2375	agitated_yonath	INFO	this is a log message

2014-07-17 19:04:23.43114032 +0000 UTC	tcp://10.0.0.2:2375	agitated_yonath	INFO	this is a log message

So we told swarmd to fire up the logforwarder backend and connect to the docker daemon on tcp://10.0.0.2:2375. The logforwarder attaches to each of the containers in the daemon, converts the stdout/stderr streams to log messages, and forwards them into the stdoutlogger (a backend made simply for demo purposes), which prints to the terminal’s stdout.

# Now lets connect to multiple daemons with multiple containers
docker -H tcp://10.0.0.2:2375 run -d --entrypoint /bin/sh debian:jessie -c \
    'while true; do echo this is a log message; sleep 1; done'
docker -H tcp://10.0.0.2:2375 run -d --entrypoint /bin/sh debian:jessie -c \
    'while true; do echo this too is a log message; sleep 1; done'

docker -H tcp://10.0.0.3:2375 run -d --entrypoint /bin/sh debian:jessie -c \
    'while true; do echo this is also a log message; sleep 1; done'


./swarmd 'logforwarder tcp://10.0.0.2:2375 tcp://10.0.0.3:2375' stdoutlogger
Getting logs tcp://10.0.0.2:2375 [agitated_yonath romantic_wozniak]
Getting logs tcp://10.0.0.3:2375 [hopeful_babbage]
2014-07-17 19:40:22.93898444 +0000 UTC	tcp://10.0.0.2:2375	agitated_yonath	INFO	this is a log message

2014-07-17 19:40:23.26841138 +0000 UTC	tcp://10.0.0.3:2375	hopeful_babbage	INFO	this is also a log message

2014-07-17 19:40:23.63765218 +0000 UTC	tcp://10.0.0.2:2375	romantic_wozniak	INFO	this too is a log message

2014-07-17 19:40:23.94244022 +0000 UTC	tcp://10.0.0.2:2375	agitated_yonath	INFO	this is a log message

2014-07-17 19:40:24.27086067 +0000 UTC	tcp://10.0.0.3:2375	hopeful_babbage	INFO	this is also a log message

2014-07-17 19:40:24.64303259 +0000 UTC	tcp://10.0.0.2:2375	romantic_wozniak	INFO	this too is a log message

Here we have the logforwarder connecting to 2 docker backends, attaching to each of the containers and forwarding the stdout/stderr streams to the stdoutlogger.

Instead of stdoutlogger, this could be swapped out for syslog, logstash, whatever… it just needs to implement the libswarm “Log” verb.

Libswarm in Docker

I see various pieces of Docker core being broken into smaller libswarm services that come together to make Docker.
I see tools that hook into this libswarm API to extend native Docker functionality. No more bind-mounting Docker sockets into containers (which, btw, is super dangerous).

Libswarm is the API you will use in order to interact with Docker, rather than the traditional REST API (though that will probably remain available in one form or another).

Want to talk more about libswarm? Join us on IRC @ #libswarm on freenode or at our github repo: github.com/docker/libswarm

Welcoming the Orchard and Fig team

Today I am extremely proud to announce that the creators of Orchard and Fig – two of the most polished and exciting projects to come out of the Docker ecosystem – are joining the Docker team. Fig is by far the easiest way to orchestrate the deployment of multi-container applications, and has been called “the perfect Docker companion for developers”. As it turns out, these are currently the two most important questions for the tens of thousands of people building applications on the Docker platform:

  1. How to orchestrate Docker containers in a standard way?
  2. How to make Docker awesome for developers?

With Fig, Ben and Aanand got closer to an answer than anybody else in the ecosystem. They have a natural instinct for building awesome developer tools, with just the right blend of simplicity and flexibility. They understand the value of a clean, minimal design, but they know from hard-earned experience that every real project needs its share of duct tape and temporary hacks – and you don’t want to be standing between an engineer and their duct tape. By incorporating that experience upstream, we have an opportunity to deliver an awesome solution to these problems, in a standardized and interoperable way, for every Docker user.

First, in parallel to maintaining Fig, Ben and Aanand will help incorporate into Docker the orchestration interfaces that they wished were available when building their own tools on top of it. “There are a thousand ways to implement orchestration. But those should be a thousand plugins, exposed to the developer through a unified, elegant interface”. We agree, and can’t wait to build this interface together.

Second, they will lead a new Developer Experience group – or DX for short. The goal of DX is to make Docker awesome to use for developers. This means anything from fixing UI details to improving Mac and Windows support, providing more tutorials, integrating with other popular developer tools, or simply using Docker a lot and reporting problems.

As usual, all development on Docker happens in the open, and we’re always looking for volunteer contributors and maintainers! If you want to join the Orchestration or DX groups, come say hi on IRC – #docker-dev / Freenode is where all the design discussions happen.

If you’re an Orchard user, there is a detailed post on what this means for you, and what to do next.

Lastly, since Orchard is proudly based in the UK, we are happy to announce that Docker is opening its first European office in London. If you’ve been considering joining Docker but don’t want to move to California – get in touch! We offer both on-site and remote positions.

Welcome Ben and Aanand – let’s go build it!

Additional Resources

Read more about this news in the press


Docker Recognized by CRN as a 2014 Emerging Vendor


To add to the excitement of this summer, we are pleased to announce that Docker has been recognized as a 2014 Emerging Vendor by CRN, a top news source for high-margin tech solutions. This recognition is an honor and truly demonstrates the overwhelming response to the “Dockerize” movement. We would like to thank our supporters in the open source community and our dedicated team members.

The annual Emerging Vendors list recognizes technology solution providers who have influenced the tech space by providing innovative products and ideas which not only increase tech business and sales, but also create further opportunities for channel partners.

With the support of our outstanding ecosystem, we are now becoming known as the standard for containerization by providing an open source platform that enables rapid composition, collaborative iteration, and efficient distribution throughout the application lifecycle on any host – laptop, data center and the cloud.

After less than a year-and-a-half, Docker has reached five million downloads and over 500 project contributors. The ever-growing Docker ecosystem, including contributors, partners and companies built on the Docker platform, deserves a great deal of credit for perpetuating this growth.

We are privileged to be recognized among the IT channel and technology industries and look forward to continuing the momentum with upcoming company, partner and product announcements.

This award goes to our incredible community of contributors worldwide who are continuously making Docker and its ecosystem stronger. Once again, we’d like to thank you all for your contribution and hope that together we’ll have the opportunity to celebrate more awards in the future.

Thank you,

The Docker Team

Your Docker agenda at OSCON 2014 in Portland

This week is the giant OSCON conference in Portland, Oregon. Representing the Docker team are author and VP of Services James Turnbull, Solutions Engineer Jérôme Petazzoni, and Solutions Architect Aaron Huslage. They’ll be discussing everything related to Docker and containers, from security to containerizing desktop apps. In addition to these awesome talks, we also have some more informal events scheduled this week. If you’re attending the conference, here is where and when to see Docker-related discussions and demos, and where you can meet up and chat with fellow Dockerites:

Tuesday, July 22nd:

10:40am – EXPO HALL (TABLE C)

Office Hour with Docker’s VP of Services, James Turnbull
James will be on hand to answer questions and help you get familiar with Docker use cases and integrations.

11:30am – PORTLAND 256

Is it Safe to Run Applications in Linux Containers? (Jérôme Petazzoni, Docker, Inc.)

1:40pm – PORTLAND 251

Shipping Applications to Production in Containers with Docker (Jérôme Petazzoni, Docker, Inc.)

Wednesday July 23rd:

10:30am – EXPO HALL (TABLE A)

Office Hour with Docker Solution Engineer Jérôme Petazzoni
Jérôme will be on hand to answer questions and help you get familiar with Docker security, orchestration and containerizing desktop apps.

6pm to 9pm – New Relic

Docker and CoreOS OSCON meet-up brought to you by New Relic and Rackspace 

Please join us in the spectacular New Relic offices for Portland craft beers, tasty snacks, and lots of talk about Docker.

We think you’ll find our talks stimulating and interesting, and hopefully they’ll answer some of your questions about the Docker platform and containerization. Let us know by stopping by Office Hours or at the meet-up and saying hello. We hope to see you at the Conference!

Dockerize early and often,

- The Docker Team

Ten Docker Tips and Tricks That Will Make You Sing A Whale Song of Joy

whales

As a Solutions Engineer at Docker Inc., I’ve been able to accumulate all sorts of good Docker tips and tricks.  The sheer quantity of information available in the community is pretty overwhelming, and there are a lot of good tips and tricks that can make your workflow easier (or provide a little fun) which you could easily miss.

Once you’ve mastered the basics, the creative possibilities are pretty endless.  The “Cambrian Explosion” of creativity that Docker is provoking is extremely exciting.

So I’m going to share ten of my favorite tips and tricks with you guys. Ready?

  1. Run Docker on a VPS for extra speed
  2. Bind mount the docker socket on docker run
  3. Use containers as highly disposable dev environments
  4. bash is your friend
  5. Insta-nyan
  6. Edit /etc/hosts with the boot2docker IP address on OSX
  7. docker inspect -f voodoo
  8. Super easy terminals in-browser with wetty
  9. nsenter
  10. #docker

Alright, let’s do this!

Run Docker on a VPS for extra speed

This one’s pretty straightforward. If, like me, your home internet’s bandwidth is pretty lacking, you can run Docker on Digital Ocean or Linode and get much better bandwidth on pulls and pushes. I get around 50mbps download with Comcast; on my Linode, my speed tests run an order of magnitude faster than that.

So if you have the need for speed, consider investing in a VPS for your own personal Docker playground.  This is a lifesaver if you’re on, say, coffee shop WiFi or anywhere else that the connection is less than ideal.

Bind mount the docker socket on docker run

What if you want to do Docker-ey things inside a container but you don’t want to go full Docker in Docker (dind) and run in --privileged mode? Well, you can use a base image that has the Docker client installed and bind-mount your Docker socket with -v.

docker run -it -v /var/run/docker.sock:/var/run/docker.sock nathanleclaire/devbox

Now you can send docker commands to the same instance of the docker daemon you are using on the host – inside your container!
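
For example, from inside that container (assuming, as with my devbox image, that the Docker client is installed and on the PATH):

# both of these talk to the host's daemon through the mounted socket
docker ps                 # lists the host's containers, including this one
docker run -d redis       # starts a sibling container on the host, not a nested one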

This is really fun because it gives you all the advantages of being able to mess around with Docker containers on the host, with the flexibility and light weight of containers. Which leads into my next tip….

Use containers as highly disposable dev environments

How many times have you needed to quickly isolate an issue, to see if it was related to one specific factor and nothing else? Or just wanted to pop onto a new branch, make some changes, and experiment a little bit with what you have running/installed in your environment, without accidentally screwing something up big time?

Docker lets you do this in a portable way.

Simply create a Dockerfile that defines your ideal development environment on the CLI (including ack, autojump, Go, etc. if you like those – whatever you need) and kick up a new instance of that image whenever you want to pop into a totally new box and try some stuff out. For instance, here’s Docker founder Solomon Hykes’s dev box.

FROM ubuntu:14.04

RUN apt-get update -y
RUN apt-get install -y mercurial
RUN apt-get install -y git
RUN apt-get install -y python
RUN apt-get install -y curl
RUN apt-get install -y vim
RUN apt-get install -y strace
RUN apt-get install -y diffstat
RUN apt-get install -y pkg-config
RUN apt-get install -y cmake
RUN apt-get install -y build-essential
RUN apt-get install -y tcpdump
RUN apt-get install -y screen
# Install go
RUN curl https://go.googlecode.com/files/go1.2.1.linux-amd64.tar.gz | tar -C /usr/local -zx

ENV GOROOT /usr/local/go
ENV PATH /usr/local/go/bin:$PATH
# Setup home environment
RUN useradd dev
RUN mkdir /home/dev && chown -R dev: /home/dev
RUN mkdir -p /home/dev/go /home/dev/bin /home/dev/lib /home/dev/include
ENV PATH /home/dev/bin:$PATH
ENV PKG_CONFIG_PATH /home/dev/lib/pkgconfig
ENV LD_LIBRARY_PATH /home/dev/lib
ENV GOPATH /home/dev/go:$GOPATH

RUN go get github.com/dotcloud/gordon/pulls
# Create a shared data volume
# We need to create an empty file, otherwise the volume will
# belong to root.
# This is probably a Docker bug.
RUN mkdir /var/shared/
RUN touch /var/shared/placeholder
RUN chown -R dev:dev /var/shared
VOLUME /var/shared
WORKDIR /home/dev
ENV HOME /home/dev
ADD vimrc /home/dev/.vimrc
ADD vim /home/dev/.vim
ADD bash_profile /home/dev/.bash_profile
ADD gitconfig /home/dev/.gitconfig

# Link in shared parts of the home directory
RUN ln -s /var/shared/.ssh
RUN ln -s /var/shared/.bash_history
RUN ln -s /var/shared/.maintainercfg
RUN chown -R dev: /home/dev
USER dev

This set-up is especially deadly if you use vim/emacs as your editor ;) You can use /bin/bash as your CMD and docker run -it my/devbox right into a shell.

When you run the container, you can also bind-mount the Docker client binary and socket (as mentioned above) inside the container to get access to the host’s Docker daemon, which allows for all sorts of container antics!

Similarly, you can easily bootstrap a development environment on a new computer this way. Just install Docker and download your dev box image.

Bash is your friend

Or, more broadly, “the shell is your friend”.

Just as many of you probably have aliases in git to save keystrokes, you’ll likely want to create little shortcuts for yourself if you start to use Docker heavily. Just add these to your ~/.bashrc or equivalent and off you go.

There are some easy ones:

alias drm="docker rm"
alias dps="docker ps"

I will add one of these whenever I find myself typing the same command over and over.  Automation for the win!

You can also mix and match in all kinds of fun ways. For instance, you can do

$ drm -f $(dps -aq)

to remove all containers (including those which are running). Or you can do:

function da () {  
    docker start $1 && docker attach $1
}

to start a stopped container and attach to it.

I created a fun one to enable my rapid-bash-container-prompt habit mentioned in the previous tip:

function newbox () {    
    docker run -it --name $1 \
    --volumes-from=volume_container \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -e BOX_NAME=$1 nathanleclaire/devbox
}

Insta-nyan

Pretty simple. You want a nyan-cat in your terminal and you have Docker. You need only one command to activate the goodness.

docker run -it supertest2014/nyan

Edit /etc/hosts with the boot2docker IP on OSX

The newest (read: BEST) version of boot2docker includes a host-only network where you can access ports exposed by containers using the boot2docker virtual machine’s IP address. The boot2docker ip command makes it easy to get this value. However, usually it is simply 192.168.59.103. I find this specific address a little hard to remember and cumbersome to type, so I add an entry to my /etc/hosts file so I can use boot2docker:port when I’m running applications that expose ports with Docker. It’s handy, give it a shot!
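
For example (the alias name is up to you; double-check the address with boot2docker ip first):

# append an alias for the boot2docker VM to /etc/hosts
echo '192.168.59.103 boot2docker' | sudo tee -a /etc/hosts

After that, an app exposing port 8000 is reachable at http://boot2docker:8000.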

Note: Do remember that it is possible for the boot2docker VM’s IP address to change, so make sure to check for that if you are encountering network issues using this shortcut. If you are not doing something that would mess with your network configuration (setting up and tearing down multiple virtual machines including boot2docker’s, etc.), though, you will likely not encounter this issue.

While you’re at it you should probably tweet @SvenDowideit and thank him for his work on boot2docker, since he is an absolute champ for delivering, maintaining, and documenting it.

Docker inspect -f voodoo

You can do all sorts of awesome flexible things with the docker inspect command’s -f (or --format) flag, if you’re willing to learn a little bit about Go templates.

Normally docker inspect $ID outputs a big JSON dump, but you can access individual properties with a template like:

docker inspect -f '{{ .NetworkSettings.IPAddress }}' $ID

The argument to -f is a Go (the language that Docker is written in) template. If you try something like:

$ docker inspect -f '{{ .NetworkSettings }}' $ID
map[Bridge:docker0 Gateway:172.17.42.1 IPAddress:172.17.0.4 IPPrefixLen:16 PortMapping:<nil> Ports:map[5000/tcp:[map[HostIp:0.0.0.0 HostPort:5000]]]]

You will not get JSON, since Go will actually just dump the data type that Docker marshals into JSON for the output you see without -f. But you can do:

$ docker inspect -f '{{ json .NetworkSettings }}' $ID
{"Bridge":"docker0","Gateway":"172.17.42.1","IPAddress":"172.17.0.4","IPPrefixLen":16,"PortMapping":null,"Ports":{"5000/tcp":[{"HostIp":"0.0.0.0","HostPort":"5000"}]}}

To get JSON! And to prettify it, you can pipe it into a Python builtin:

$ docker inspect -f '{{ json .NetworkSettings }}' $ID | python -mjson.tool
{
    "Bridge": "docker0",
    "Gateway": "172.17.42.1",
    "IPAddress": "172.17.0.4",
    "IPPrefixLen": 16,
    "PortMapping": null,
    "Ports": {
        "5000/tcp": [
            {
                "HostIp": "0.0.0.0",
                "HostPort": "5000"
            }
        ]
    }
}

You can also do other fun tricks, like accessing object properties which have non-alphanumeric keys. Here, again, it helps to know some Golang.

docker inspect -f '{{ index .Volumes "/host/path" }}' $ID

This is a very powerful tool for quickly extracting information about your running containers, and is extremely helpful for troubleshooting because it provides a ton of detail.

Super easy terminals in-browser with wetty

I really foresee people making extremely FUN web applications with this kind of functionality. You can spin up a container which is running an instance of wetty (a JavaScript-powered, in-browser terminal emulator).

Try it for yourself with:

docker run -p 3000:3000 -dt nathanleclaire/wetty

Wetty only works in Chrome, unfortunately, but there are other JavaScript terminal emulators begging to be Dockerized and, if you are using it for a presentation or something (imagine embedding interactive CLI snapshots in your Reveal.js slideshow – nice), you control the browser anyway. Now you can embed isolated terminal applications in web applications wherever you want, and you control the environment in which they execute with an excruciating amount of detail. No pollution from host to container, and vice versa.

The creative possibilities of this are just mind-boggling to me. I REALLY want to see someone make a version of TypeRacer where you compete with other contestants in real time to type code into vim or emacs as quickly as possible. That would be pure awesome. Or a real-time coding challenge where your code competes with other code in an arena for dominance a la Core Wars.

Nsenter

Docker engineer Jérôme Petazzoni wrote an opinionated article a few weeks ago that shook things up a bit. There, he argued that you should not need to run sshd (the daemon for getting a remote terminal prompt) in your containers and, in fact, if you are doing so you are violating a Docker principle (one concern per container). It’s a good read, and he mentions nsenter as a fun trick to get a prompt inside of containers which have already been initialized with a process.

See here or here to learn how to do it.
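
If you just want the flavor of it, the core moves look something like this (a sketch; Jérôme’s article and the nsenter docs cover the details and caveats):

# find the host PID of the container's main process
PID=$(docker inspect -f '{{ .State.Pid }}' my_container)

# enter its namespaces and spawn a shell (requires a recent util-linux)
sudo nsenter --target "$PID" --mount --uts --ipc --net --pid /bin/bash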

#docker

I’m not talking about the hashtag!! I’m talking about the channel on Freenode on IRC. It’s hands-down the best place to meet fellow Docker users online, ask questions (all levels welcome!), and seek truly excellent expertise. At any given time there are about 1,000 people or more sitting in, and it’s a great community as well as a resource. Seriously, if you’ve never tried it before, go check it out. I know IRC can be scary if you’re not accustomed to using it, but the effort of setting it up and learning to use it a bit will pay huge dividends for you in terms of knowledge gleaned. I guarantee it. So if you haven’t come to hang out with us on IRC yet, do it!

To join:

  1. Download an IRC Client such as LimeChat
  2. Connect to the irc.freenode.net network
  3. Join the #docker channel

Welcome!

Conclusion

That’s all for now, folks.  Tweet at us @docker and tell us your favorite Docker tips and tricks!