Web Development with Docker, Docker-Machine, Docker-Compose, Tmux, Tmuxinator, and Watchdog

TL;DR

I’m a developer on the Hub team at Docker, Inc. My realm of responsibility spans three different projects: Docker Hub, Registry Hub, and www.docker.com. Each of these is a Django application with its own PostgreSQL, Redis, and RabbitMQ instances. I want to be able to “start projects” with one command and not only have everything running, but also have logs, Python shells, file system monitoring, a shell at the root of each project, and git fetch --all run for me, without having to type it all myself over and over and over again. This post describes the development environment I built to accomplish that.

I work on a Mac, while some of my co-workers use Linux boxes, but this development environment works (or should work) cross-platform. Using Tmux, Tmuxinator, Docker, Compose, Machine, and Watchdog, I can have an easy-to-use development environment.


Our development work at Docker is moving quickly toward a microservices architecture. This means we have a bunch of small services dedicated to specific tasks: billing, user accounts, repositories, licenses, registry, authentication, etc.

Each of these services has its own database, its own caching system, its own message queues, and its own queue consumers. Our current development environment has 23 different containers, with more being added every day! Not surprisingly, all of this can get very hard to maintain by hand. Here’s how we deal with this complexity.

 

Step 1. Docker

First you have to dockerize your application(s). There are multiple tutorials out there on how to set up a Dockerfile, so we won’t cover that in depth here. Why Docker? As a reader of our blog, you shouldn’t be surprised to learn that when you’re working with a microservices architecture and you want to be able to develop against the other services you own, using Docker makes things much easier. This page explains how to install Docker.
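If you haven’t dockerized a Django service before, a minimal Dockerfile looks roughly like the sketch below. The base image, file names, and port here are placeholders for illustration, not our actual setup:

FROM python:2.7

# Install dependencies first so Docker can cache this layer between code changes
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Copy in the application code and run the Django dev server
COPY . /app
WORKDIR /app
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]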

 

Step 2. Docker Machine

With Docker Machine you can create a new virtual machine (VM) running the Docker Engine. You can use Machine to create a new VM with VirtualBox (for local development), or you can use it to create new VMs on AWS, DigitalOcean, etc. You can think of it as similar to how Boot2Docker works on a Mac or Windows; Machine just extends the process and makes it generic. Take a look at this guide to see how to install Machine.
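As a quick example, creating a local VirtualBox VM and pointing your Docker client at it looks like this (the machine name dev is arbitrary):

# Create a VM named "dev" running the Docker Engine
docker-machine create --driver virtualbox dev

# Point the docker client in this shell at the new VM
eval "$(docker-machine env dev)"

# From here on, docker commands run against the Engine inside the VM
docker ps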

 

Step 3. Docker-Compose

Docker-Compose (formerly known as Fig) is a utility for orchestrating Docker containers. You can specify which container is linked to another container, how many of a certain container you want to run, and so on.

Please follow this guide to install Compose. To set up your docker-compose.yml file, take a look at the quick start guide.
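To give a feel for the format, a stripped-down docker-compose.yml for one of these Django services might look something like the following; the service names, images, and ports are illustrative rather than our real file:

hub:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  ports:
    - "8000:8000"
  links:
    - db
    - redis

db:
  image: postgres:9.4

redis:
  image: redis:2.8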

 

Step 4. Tmux

Tmux is a terminal multiplexer. You might be familiar with GNU Screen; tmux is very similar to that. Essentially, tmux will allow you to access multiple terminal sessions from one window.

To install tmux on a Mac, run brew install tmux. There are plenty of instructions on how to install tmux for other operating systems.

After you have tmux installed you might want to spend some time configuring your .tmux.conf file. This is time well spent. For you Vim users, it’s like customizing your .vimrc file. It’s important to have an environment you like and feel comfortable with.
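For the curious, the kind of tweaks I mean are things like these common .tmux.conf lines (examples, not a prescription):

# Use Ctrl-a as the prefix, like GNU Screen
set -g prefix C-a
unbind C-b

# More memorable split-pane bindings
bind | split-window -h
bind - split-window -v

# Keep plenty of scrollback for log panes
set -g history-limit 10000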

 

Step 5. Tmuxinator

This is where we start getting into the magic sauce. Tmuxinator is a way to create and save tmux sessions, so you can start a new dev session and have it handle setting up the various environment variables as it recreates each window and pane. Tmuxinator can also run custom commands, and it allows you to specify the layout of panes in each window. This gives you fine-grained control to set up your exact personal preferences and needs.

To learn more, check out https://github.com/tmuxinator/tmuxinator.
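Tmuxinator is distributed as a Ruby gem, so getting up and running is roughly:

gem install tmuxinator

# Create a project file (this opens ~/.tmuxinator/dockerhub.yml in your editor)
tmuxinator new dockerhub

# Start the saved session; mux is the shorthand alias that ships with tmuxinator
mux start dockerhub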

 

Step 6. Watchdog

Watchdog is a Python utility that watches for and processes file system events. I use Watchdog to automatically restart my containers whenever code has changed (e.g., when hitting cmd-s, checking out a different branch, and so on).
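Watchdog installs from PyPI:

# Provides the Observer and event handler classes used by the script later in this post
pip install watchdog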

 

Putting it all together

Before tmuxinator, I’d have to run docker-compose up -d to get all the containers running. Then, to manage development, I’d have to open different tabs for different projects, and more tabs for logs, and more tabs for git, and more tabs for poking at the containers, and still more tabs for managing restarting Gunicorn or Celery. Eventually, I wrote a build utility for SublimeText that would allow me to easily restart services in containers, but it was still a pain to use. I had to save, then restart. I really just wanted the process to be fully automatic (and not dependent upon SublimeText).

With Tmuxinator I’ve been able to set up a window for each project and a window for logs.

You can see my tmuxinator project file here:

# ~/.tmuxinator/dockerhub.yml

name: dockerhub
root: ~/

# Runs before everything. Use it to start daemons etc.
pre: "boot2docker up && compose up -d"

# Runs in each window and pane before window/pane specific commands. Useful for setting up interpreter versions.
pre_window: alias compose="hub-compose"

windows:
  - hubdev:
    layout: main-horizontal
    root: ~/develop/docker/saas-config/compose/docker-io
    panes:
      - git fetch --all
      - python hubwatcher.py docker_io
      - docker exec -it compose_hub_1 python manage.py shell

  - reghubdev:
    layout: main-horizontal
    root: ~/develop/docker/saas-config/compose/docker-index
    panes:
      - git fetch --all
      - python hubwatcher.py docker_index
      - docker exec -it compose_reghub_1 python manage.py shell

  - wwwdev:
    layout: main-horizontal
    root: ~/develop/docker/saas-config/compose/www.docker.com
    panes:
      - git fetch --all
      - python hubwatcher.py docker_com
      - docker exec -it compose_www_1 python manage.py shell

  - logs:
    layout: tiled
    panes:
      - no_scroll_line top "Hub Web Logs" docker logs -f compose_hub_1
      - no_scroll_line top "Hub Worker Logs" docker logs -f compose_hubworker_1
      - no_scroll_line top "RegHub Web Logs" docker logs -f compose_reghub_1
      - no_scroll_line top "RegHub Worker Logs" docker logs -f compose_reghubworker_1
      - no_scroll_line top "WWW Logs" docker logs -f compose_www_1

You’ll notice there are a few custom commands in here, so let’s go over those now.

alias compose="hub-compose"

I don’t like having to cd to the Compose home directory every time I want to run a Compose command, so I’ve written this bash wrapper, hub-compose, which specifies the Docker Compose file:

#!/bin/bash
# Wrapper around docker-compose that always points at the project's compose file,
# so it can be run from any directory.
FILE="$HOME/develop/docker/saas-config/compose/docker-compose.yml"
docker-compose --file="$FILE" "$@"

Now, whenever I’m working in my tmux project I can just use the command compose from any directory.
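For example (the service name here is illustrative):

compose ps          # list the project's containers, from any directory
compose up -d       # bring up anything that isn't running
compose logs hub    # tail one service's logs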

hubwatcher.py

I eventually set up a build script in SublimeText that I could use to restart processes in containers with key commands, but it was still annoying to have to remember to do that. Watchdog will watch the file system for changes and automatically run the commands to restart Gunicorn and Celery for me. This Watchdog script was originally written by Josh Hawn at Docker, but I’ve modified it slightly.

#! /usr/bin/python
import subprocess
import signal
import sys
from threading import Lock, Timer
from watchdog.observers import Observer
from watchdog.events import RegexMatchingEventHandler

class EventBurstHandler(RegexMatchingEventHandler):
    """
    Groups filesystem event bursts into one event to respond to.
    All filesystem events which occur within a configurable burst
    window, which should be set to a reasonably small length of
    time to allow for an editor to save a swap file, rename files,
    and do various other things.
    """

    def __init__(self, burst_window=0.1, *args, **kwargs):
        super(EventBurstHandler, self).__init__(*args, **kwargs)
        self.burst_window = burst_window
        self.burst_events = []
        self.burst_timer = None
        self.lock = Lock()

    def handle_event_burst(self, events):
        raise NotImplementedError('Please define in subclass')

    def on_any_event(self, event):
        with self.lock:
            self.burst_events.append(event)
            if self.burst_timer is not None:
                self.burst_timer.cancel()
            self.burst_timer = Timer(
                self.burst_window,
                self.on_burst
            )
            self.burst_timer.start()

    def on_burst(self):
        with self.lock:
            events = self.burst_events
            self.burst_events = []
            if len(events) == 0:
                return
        # Handle the burst outside the lock so new events can keep queuing.
        self.handle_event_burst(events)

class BuildDirectoryWatcher(EventBurstHandler):
    """
    A watchdog Filesystem Event Handler which watches a build
    directory for changes and, when an event occurs,
    restarts celery and gunicorn.
    """

    def __init__(self, build_path):
        handler_kwargs = {
            'ignore_directories': True,
            'ignore_regexes': [r'.*/.git/.*', r'.*/.idea/.*'],
        }
        super(BuildDirectoryWatcher, self).__init__(
            **handler_kwargs
        )
        self.build_path = build_path
        self.observer = None

    def watch(self):
        print(
            'Watching build dir for {0}.'.format(
                self.build_path
            )
        )
        self.observer = Observer()
        self.observer.schedule(
            event_handler=self,
            path=self.build_path,
            recursive=True,
        )
        self.observer.start()

    def handle_event_burst(self, events):
        for event in events:
            what = 'directory' if event.is_directory else 'file'
            print('{0} {1}: {2}'.format(
                event.event_type.capitalize(),
                what,
                event.src_path,
            ))
        print('Handling filesystem event burst.')
        proc = subprocess.Popen(
            [
                'docker', 'exec', 'compose_hub_1',
                'supervisorctl', 'restart', 'gunicorn'
            ],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE
        )
        with proc.stdout:
            for line in iter(proc.stdout.readline, b''):
                sys.stdout.write(line)
        if proc.wait() != 0:
            raise Exception(
                "ERROR command exited with %r" % proc.returncode
            )

        proc = subprocess.Popen(
            [
                'docker', 'exec', 'compose_hubworker_1',
                'supervisorctl', 'restart', 'celery'
            ],
            stdout=subprocess.PIPE, stderr=subprocess.PIPE
        )
        with proc.stdout:
            for line in iter(proc.stdout.readline, b''):
                sys.stdout.write(line)
        if proc.wait() != 0:
            raise Exception(
                "ERROR command exited with %r" % proc.returncode
            )

    def stop(self):
        if self.observer is not None:
            print('Stopping watch for {0}'.format(
                self.build_path,
            ))
            self.observer.stop()

    def join(self):
        if self.observer is not None:
            self.observer.join()

def watch_until_interrupt(watchers):
    # Start watching.
    for watcher in watchers:
        watcher.watch()

    # Define interrupt signal handler.
    def interrupt_handler(signum, frame):
        print('Got Interrupt! Stopping watchers...')
        for watcher in watchers:
            watcher.stop()
            watcher.join()

        sys.exit(0)

    # Register interrupt signal handler.
    signal.signal(signal.SIGINT, interrupt_handler)
    while True:
        signal.pause()

def watch(paths):
    """
    Uses watchdog to monitor the build directories and
    automatically restart services.
    """
    watchers = [
        BuildDirectoryWatcher(path) for path in paths
    ]
    watch_until_interrupt(watchers)

if __name__ == "__main__":
    paths = sys.argv[1:] if len(sys.argv) > 1 else ['.']
    watch(paths)

no_scroll_line

I found this little shell script on Stack Overflow; it creates a “title bar” for a tmux pane. Tmux panes do not support title bars natively, and when I’m looking at a window that has six panes of docker logs, I want to know which container each pane’s logs are coming from.

#!/bin/sh
# usage: no_scroll_line top|bottom 'non-scrolling content' cmd with args
#
#     Set up a non-scrolling line at the top (or the bottom)
#     of the terminal, write the given text into it, then
#     (in the scrolling region) run the given command with
#     its arguments. When the command has finished, pause with
#     a prompt and reset the scrolling region.

get_size() {
    set -- $(stty size)
    LINES=$1
    COLUMNS=$2
}
set_nonscrolling_line() {
    get_size
    case "$1" in
        t|to|top)
            non_scroll_line=0
            first_scrolling_line=1
            scroll_region="1 $(($LINES - 1))"
            ;;
        b|bo|bot|bott|botto|bottom)
            first_scrolling_line=0
            scroll_region="0 $(($LINES - 2))"
            non_scroll_line="$(($LINES - 1))"
            ;;
        *)
            echo 'error: first argument must be "top" or "bottom"'
            exit 1
            ;;
    esac

    clear
    tput csr $scroll_region
    tput cup "$non_scroll_line" 0
    printf %s "$2"
    tput cup "$first_scrolling_line" 0
}
reset_scrolling() {
    get_size
    clear
    tput csr 0 $(($LINES - 1))
}
# Set up the scrolling region and write into the non-scrolling line
set_nonscrolling_line "$1" "$2"
shift 2
# Run something that writes into the scrolling region
"$@"
ec=$?

# Reset the scrolling region
printf %s 'Press ENTER to reset scrolling (will clear screen)'
read a_line
reset_scrolling
exit "$ec"

And that’s it. Now, when I wake up in the morning or after I reboot my computer, I can run mux start dockerhub and then go make a cup of coffee. When I come back, all my containers are running and all my tmux windows and panes are set up the way I like them. I can get to work and do the important things, like making Docker Hub better, while spending less time fiddling.

We all love screenshots, right? Here’s what my Logs window looks like:

[Screenshot: the Logs window]

 

And here’s what one of my Dev windows looks like:

 

[Screenshot: a Dev window]

 


