Category: Latest Posts

Happy SysAdmin Day!

The last Friday in July is the day the world celebrates sysadmins and all that they do. As more and more apps dominate our personal and professional waking hours, and with the non-stop growth in the number of servers – physical, virtual, and cloud – the role is only growing in importance.

Sysadmins have a special role in the history of Docker: back in the day, Solomon was a sysadmin, and it was his frustration with trying to manage and maintain golden tarballs that triggered his thinking, “There’s gotta be a better way….”

Several years and an iteration or two later, Docker was born. The clean separation of concerns that Docker provides between the developer – responsible for the app inside of the container – and the sysadmin – responsible for the “outside,” or deploying, scaling, and managing the container – has made it popular with both professions. As one of the many great examples out there, check out how the sysadmins at New Relic created standardized environments for their development teams.

On this 15th annual celebration of SysAdmin Day, our small gift to the sysadmins out there is to highlight a couple of Docker Hub repos that might help make their day-to-day a little easier:

A big THANK-YOU to all sysadmins!
- The Docker Team

SysAdmin Day Lunch Celebration

Join sysadmins from both the Docker and Rackspace teams for a lunch and an informal Q&A session at Geekdom SF.

Learn More

libswarm demo – logging

At DockerCon, we announced a new project called “libswarm”. I wanted to clarify what exactly libswarm is, what it does, and what it doesn’t do.

First, libswarm is not itself an orchestration tool. It does not and will not replace any orchestration tools.

Libswarm is first and foremost a library, not an end-user tool. It makes it relatively trivial to compose disparate tools together, including but not limited to orchestration tools.

Here’s a quick demo showing off what libswarm can do with logging. I will be using code from this gist: https://gist.github.com/cpuguy83/b7c0f42e903bc13c46d6

Demo time!

# start a container that prints to stdout
docker -H tcp://10.0.0.2:2375 run -d --entrypoint /bin/sh debian:jessie -c \
    'while true; do echo this is a log message; sleep 1; done'

# fire up swarmd
./swarmd 'logforwarder tcp://10.0.0.2:2375' stdoutlogger
Getting logs tcp://10.0.0.2:2375 [agitated_yonath]
2014-07-17 19:04:22.42915222 +0000 UTC	tcp://10.0.0.2:2375	agitated_yonath	INFO	this is a log message

2014-07-17 19:04:23.43114032 +0000 UTC	tcp://10.0.0.2:2375	agitated_yonath	INFO	this is a log message

So we told swarmd to fire up the logforwarder backend and connect to the Docker daemon on tcp://10.0.0.2:2375. The logforwarder attaches to each of the containers in the daemon, converts the stdout/stderr streams to log messages, and forwards them to the stdoutlogger (a backend made simply for demo purposes), which prints to the terminal’s stdout.

# Now let's connect to multiple daemons with multiple containers
docker -H tcp://10.0.0.2:2375 run -d --entrypoint /bin/sh debian:jessie -c \
    'while true; do echo this is a log message; sleep 1; done'
docker -H tcp://10.0.0.2:2375 run -d --entrypoint /bin/sh debian:jessie -c \
    'while true; do echo this too is a log message; sleep 1; done'

docker -H tcp://10.0.0.3:2375 run -d --entrypoint /bin/sh debian:jessie -c \
    'while true; do echo this is also a log message; sleep 1; done'


./swarmd 'logforwarder tcp://10.0.0.2:2375 tcp://10.0.0.3:2375' stdoutlogger
Getting logs tcp://10.0.0.2:2375 [agitated_yonath romantic_wozniak]
Getting logs tcp://10.0.0.3:2375 [hopeful_babbage]
2014-07-17 19:40:22.93898444 +0000 UTC	tcp://10.0.0.2:2375	agitated_yonath	INFO	this is a log message

2014-07-17 19:40:23.26841138 +0000 UTC	tcp://10.0.0.3:2375	hopeful_babbage	INFO	this is also a log message

2014-07-17 19:40:23.63765218 +0000 UTC	tcp://10.0.0.2:2375	romantic_wozniak	INFO	this too is a log message

2014-07-17 19:40:23.94244022 +0000 UTC	tcp://10.0.0.2:2375	agitated_yonath	INFO	this is a log message

2014-07-17 19:40:24.27086067 +0000 UTC	tcp://10.0.0.3:2375	hopeful_babbage	INFO	this is also a log message

2014-07-17 19:40:24.64303259 +0000 UTC	tcp://10.0.0.2:2375	romantic_wozniak	INFO	this too is a log message

Here we have the logforwarder connecting to two Docker backends, attaching to each of the containers, and forwarding the stdout/stderr streams to the stdoutlogger.

Instead of stdoutlogger, this could be swapped out for syslog, logstash, whatever… it just needs to implement the libswarm “Log” verb.

Libswarm in Docker

I see various pieces of Docker core being broken into smaller libswarm services that come together to make Docker.
I see tools that hook into this libswarm API to extend native Docker functionality. No more bind-mounting Docker sockets into containers (which, by the way, is super dangerous).

Libswarm is the API you will use to interact with Docker, rather than the traditional REST API (though that will probably remain available in one form or another).

Want to talk more about libswarm? Join us on IRC in #libswarm on Freenode or at our GitHub repo: github.com/docker/libswarm

Welcoming the Orchard and Fig team

Today I am extremely proud to announce that the creators of Orchard and Fig – two of the most polished and exciting projects to come out of the Docker ecosystem – are joining the Docker team. Fig is by far the easiest way to orchestrate the deployment of multi-container applications, and has been called “the perfect Docker companion for developers”. As it turns out, these are currently the two most important questions for the tens of thousands of people building applications on the Docker platform:

  1. How to orchestrate Docker containers in a standard way?
  2. How to make Docker awesome for developers?

With Fig, Ben and Aanand got closer to an answer than anybody else in the ecosystem. They have a natural instinct for building awesome developer tools, with just the right blend of simplicity and flexibility. They understand the value of a clean, minimal design, but they know from hard-earned experience that every real project needs its share of duct tape and temporary hacks – and you don’t want to be standing between an engineer and their duct tape. By incorporating that experience upstream, we have an opportunity to deliver an awesome solution to these problems, in a standardized and interoperable way, for every Docker user.

First, in parallel to maintaining Fig, Ben and Aanand will help incorporate into Docker the orchestration interfaces that they wished were available when building their own tools on top of it. “There are a thousand ways to implement orchestration. But those should be a thousand plugins, exposed to the developer through a unified, elegant interface”. We agree, and can’t wait to build this interface together.

Second, they will lead a new Developer Experience group – or DX for short. The goal of DX is to make Docker awesome to use for developers. This means anything from fixing UI details and improving Mac and Windows support to providing more tutorials, integrating with other popular developer tools, or simply using Docker a lot and reporting problems.

As usual, all development on Docker happens in the open, and we’re always looking for volunteer contributors and maintainers! If you want to join the Orchestration or DX groups, come say hi on IRC – #docker-dev / Freenode is where all the design discussions happen.

If you’re an Orchard user, there is a detailed post on what this means for you, and what to do next.

Lastly, since Orchard is proudly based in the UK, we are happy to announce that Docker is opening its first European office in London. If you’ve been considering joining Docker but don’t want to move to California – get in touch! We offer both on-site and remote positions.

Welcome Ben and Aanand – let’s go build it!

Additional Resources

Read more about this news in the press


Docker Recognized by CRN as a 2014 Emerging Vendor


 

To add to the excitement of this summer, we are pleased to announce that Docker has been recognized as a 2014 Emerging Vendor by CRN, a top news source for high-margin tech solutions. This recognition is an honor and truly demonstrates the overwhelming response to the “Dockerize” movement. We would like to thank our supporters in the open source community and our dedicated team members.

The annual Emerging Vendors list recognizes technology solution providers who have influenced the tech space by providing innovative products and ideas which not only increase tech business and sales, but also create further opportunities for channel partners.

With the support of our outstanding ecosystem, we are now becoming known as the standard for containerization by providing an open source platform that enables rapid composition, collaborative iteration, and efficient distribution throughout the application lifecycle on any host – laptop, data center, or cloud.

After less than a year-and-a-half, Docker has reached five million downloads and over 500 project contributors. The ever-growing Docker ecosystem, including contributors, partners and companies built on the Docker platform, deserves a great deal of credit for perpetuating this growth.

We are privileged to be recognized among the IT channel and technology industries and look forward to continuing the momentum with upcoming company, partner and product announcements.

This award goes to our incredible community of contributors worldwide who are continuously making Docker and its ecosystem stronger. Once again, we’d like to thank you all for your contribution and hope that together we’ll have the opportunity to celebrate more awards in the future.

Thank you,

The Docker Team

Your Docker agenda at OSCON 2014 in Portland

This week is the giant OSCON conference in Portland, Oregon. Representing the Docker team are author and VP of Services James Turnbull, Solutions Engineer Jérôme Petazzoni, and Solutions Architect Aaron Huslage. They’ll be discussing everything related to Docker and containers, from security to containerizing desktop apps. In addition to these awesome talks, we also have some more informal events scheduled this week. If you’re attending the conference, here is where and when to see Docker-related discussions and demos, and where you can meet up and chat with fellow Dockerites:

Tuesday, July 22nd:

10:40am – EXPO HALL (TABLE C)

Office Hour with Docker’s VP of Services, James Turnbull
James will be on hand to answer questions and help you get familiar with Docker use cases and integrations.

11:30am – PORTLAND 256

Is it Safe to Run Applications in Linux Containers? (Jérôme Petazzoni, Docker, Inc.)

1:40pm – PORTLAND 251

Shipping Applications to Production in Containers with Docker (Jérôme Petazzoni, Docker, Inc.)

Wednesday July 23rd:

10:30am – EXPO HALL (TABLE A)

Office Hour with Docker Solution Engineer Jérôme Petazzoni
Jérôme will be on hand to answer questions and help you get familiar with Docker security, orchestration and containerizing desktop apps.

6pm to 9pm – New Relic

Docker and CoreOS OSCON meet-up brought to you by New Relic and Rackspace 

Please join us in the spectacular New Relic offices for Portland craft beers, tasty snacks, and lots of talk about Docker.

We think you’ll find our talks stimulating and interesting, and hopefully they’ll answer some of your questions about the Docker platform and containerization. Let us know by stopping by Office Hours or at the meet-up and saying hello. We hope to see you at the Conference!

Dockerize early and often,

- The Docker Team

Ten Docker Tips and Tricks That Will Make You Sing A Whale Song of Joy


As a Solutions Engineer at Docker Inc., I’ve been able to accumulate all sorts of good Docker tips and tricks.  The sheer quantity of information available in the community is pretty overwhelming, and there are a lot of gems that can make your workflow easier (or provide a little fun) which you could easily miss.

Once you’ve mastered the basics, the creative possibilities are pretty endless.  The “Cambrian Explosion” of creativity that Docker is provoking is extremely exciting.

So I’m going to share ten of my favorite tips and tricks with you guys. Ready?

  1. Run Docker on a VPS for extra speed
  2. Bind mount the docker socket on docker run
  3. Use containers as highly disposable dev environments
  4. bash is your friend
  5. Insta-nyan
  6. Edit /etc/hosts with the boot2docker IP address on OSX
  7. docker inspect -f voodoo
  8. Super easy terminals in-browser with wetty
  9. nsenter
  10. #docker

Alright, let’s do this!

Run Docker on a VPS for extra speed

This one’s pretty straightforward. If, like me, your home internet’s bandwidth is pretty lacking, you can run Docker on Digital Ocean or Linode and get much better bandwidth on pulls and pushes. I get around 50mbps download with Comcast; on my Linode, my speed tests run an order of magnitude faster than that.

So if you have the need for speed, consider investing in a VPS for your own personal Docker playground.  This is a lifesaver if you’re on, say, coffee shop WiFi or anywhere else that the connection is less than ideal.

Bind mount the docker socket on docker run

What if you want to do Docker-ey things inside a container but you don’t want to go full Docker-in-Docker (dind) and run in --privileged mode? Well, you can use a base image that has the Docker client installed and bind-mount your Docker socket with -v.

docker run -it -v /var/run/docker.sock:/var/run/docker.sock nathanleclaire/devbox

Now you can send docker commands to the same instance of the docker daemon you are using on the host – inside your container!

This is really fun because it gives you all the advantages of being able to mess around with Docker containers on the host, with the flexibility and light weight of containers. Which leads into my next tip….

Use containers as highly disposable dev environments

How many times have you needed to quickly isolate an issue to see if it was related to one specific factor in particular, and nothing else? Or just wanted to pop onto a new branch, make some changes, and experiment a little bit with what you have running/installed in your environment, without accidentally screwing something up big time?

Docker lets you do this in a portable way.

Simply create a Dockerfile that defines your ideal development environment on the CLI (including ack, autojump, Go, etc. if you like those – whatever you need) and kick up a new instance of that image whenever you want to pop into a totally new box and try some stuff out. For instance, here’s Docker founder Solomon Hykes’s dev box.

FROM ubuntu:14.04

RUN apt-get update -y
RUN apt-get install -y mercurial
RUN apt-get install -y git
RUN apt-get install -y python
RUN apt-get install -y curl
RUN apt-get install -y vim
RUN apt-get install -y strace
RUN apt-get install -y diffstat
RUN apt-get install -y pkg-config
RUN apt-get install -y cmake
RUN apt-get install -y build-essential
RUN apt-get install -y tcpdump
RUN apt-get install -y screen
# Install go
RUN curl https://go.googlecode.com/files/go1.2.1.linux-amd64.tar.gz | tar -C /usr/local -zx

ENV GOROOT /usr/local/go
ENV PATH /usr/local/go/bin:$PATH
# Setup home environment
RUN useradd dev
RUN mkdir /home/dev && chown -R dev: /home/dev
RUN mkdir -p /home/dev/go /home/dev/bin /home/dev/lib /home/dev/include
ENV PATH /home/dev/bin:$PATH
ENV PKG_CONFIG_PATH /home/dev/lib/pkgconfig
ENV LD_LIBRARY_PATH /home/dev/lib
ENV GOPATH /home/dev/go:$GOPATH

RUN go get github.com/dotcloud/gordon/pulls
# Create a shared data volume
# We need to create an empty file, otherwise the volume will
# belong to root.
# This is probably a Docker bug.
RUN mkdir /var/shared/
RUN touch /var/shared/placeholder
RUN chown -R dev:dev /var/shared
VOLUME /var/shared
WORKDIR /home/dev
ENV HOME /home/dev
ADD vimrc /home/dev/.vimrc
ADD vim /home/dev/.vim
ADD bash_profile /home/dev/.bash_profile
ADD gitconfig /home/dev/.gitconfig

# Link in shared parts of the home directory
RUN ln -s /var/shared/.ssh
RUN ln -s /var/shared/.bash_history
RUN ln -s /var/shared/.maintainercfg
RUN chown -R dev: /home/dev
USER dev

This set-up is especially deadly if you use vim/emacs as your editor ;) You can use /bin/bash as your CMD and then docker run -it my/devbox straight into a shell.

When you run the container, you can also bind-mount the Docker client binary and socket (as mentioned above) inside the container to get access to the host’s Docker daemon, which allows for all sorts of container antics!

Similarly, you can easily bootstrap a development environment on a new computer this way. Just install Docker and download your dev box image.

Bash is your friend

Or, more broadly, “the shell is your friend”.

Just as many of you probably have aliases in git to save keystrokes, you’ll likely want to create little shortcuts for yourself if you start to use Docker heavily. Just add these to your ~/.bashrc or equivalent and off you go.

There are some easy ones:

alias drm="docker rm"
alias dps="docker ps"

I will add one of these whenever I find myself typing the same command over and over.  Automation for the win!

You can also mix and match in all kinds of fun ways. For instance, you can do

$ drm -f $(dps -aq)

to remove all containers (including those which are running). Or you can do:

function da () {  
    docker start $1 && docker attach $1
}

to start a stopped container and attach to it.

I created a fun one to enable my rapid-bash-container-prompt habit mentioned in the previous tip:

function newbox () {    
    docker run -it --name $1 \
    --volumes-from=volume_container \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -e BOX_NAME=$1 nathanleclaire/devbox
}

Insta-nyan

Pretty simple. You want a nyan-cat in your terminal and you have Docker. You need only one command to activate the goodness.

docker run -it supertest2014/nyan

Edit /etc/hosts with the boot2docker IP on OSX

The newest (read: BEST) version of boot2docker includes a host-only network where you can access ports exposed by containers using the boot2docker virtual machine’s IP address. The boot2docker ip command makes it easy to get this value; usually it is simply 192.168.59.103. I find that specific address a little hard to remember and cumbersome to type, so I add an entry to my /etc/hosts file, which lets me use boot2docker:port when I’m running applications that expose ports with Docker. It’s handy, give it a shot!
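If you want to script the lookup, here’s a minimal sketch. The hostname boot2docker matches the boot2docker:port habit above; the fallback address is just the usual default, so double-check it against your own VM:

```shell
# Print an /etc/hosts line for the boot2docker VM.
# Fall back to the usual default if the boot2docker command isn't around.
B2D_IP=$(boot2docker ip 2>/dev/null) || B2D_IP=192.168.59.103
echo "$B2D_IP boot2docker"
# Append it yourself, e.g.:
#   echo "$B2D_IP boot2docker" | sudo tee -a /etc/hosts
```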

Note: Do remember that it is possible for the boot2docker VM’s IP address to change, so make sure to check for that if you are encountering network issues using this shortcut. If you are not doing something that would mess with your network configuration (setting up and tearing down multiple virtual machines including boot2docker’s, etc.), though, you will likely not encounter this issue.

While you’re at it you should probably tweet @SvenDowideit and thank him for his work on boot2docker, since he is an absolute champ for delivering, maintaining, and documenting it.

docker inspect -f voodoo

You can do all sorts of awesome flexible things with the docker inspect command’s -f (or --format) flag, if you’re willing to learn a little bit about Go templates.

Normally docker inspect $ID outputs a big JSON dump, but you can access individual properties with templating like:

docker inspect -f '{{ .NetworkSettings.IPAddress }}' $ID

The argument to -f is a Go (the language that Docker is written in) template. If you try something like:

$ docker inspect -f '{{ .NetworkSettings }}' $ID
map[Bridge:docker0 Gateway:172.17.42.1 IPAddress:172.17.0.4 IPPrefixLen:16 PortMapping:<nil> Ports:map[5000/tcp:[map[HostIp:0.0.0.0 HostPort:5000]]]]

You will not get JSON, since Go just dumps the data structure that Docker marshals into JSON for the output you see without -f. But you can do:

$ docker inspect -f '{{ json .NetworkSettings }}' $ID
{"Bridge":"docker0","Gateway":"172.17.42.1","IPAddress":"172.17.0.4","IPPrefixLen":16,"PortMapping":null,"Ports":{"5000/tcp":[{"HostIp":"0.0.0.0","HostPort":"5000"}]}}

To get JSON! And to prettify it, you can pipe it into a Python builtin:

$ docker inspect -f '{{ json .NetworkSettings }}' $ID | python -mjson.tool
{
    "Bridge": "docker0",
    "Gateway": "172.17.42.1",
    "IPAddress": "172.17.0.4",
    "IPPrefixLen": 16,
    "PortMapping": null,
    "Ports": {
        "5000/tcp": [
            {
                "HostIp": "0.0.0.0",
                "HostPort": "5000"
            }
        ]
    }
}

You can also do other fun tricks like accessing object properties which have non-alphanumeric keys. Here, again, it helps to know some Go:

docker inspect -f '{{ index .Volumes "/host/path" }}' $ID

This is a very powerful tool for quickly extracting information about your running containers, and is extremely helpful for troubleshooting because it provides a ton of detail.
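To tie the section together, here is a little helper I keep in my shell profile (a sketch built on the same templates shown above; note that .Name comes back with a leading slash):

```shell
# Print each running container's name and IP address using inspect templates.
name_ips() {
    for id in $(docker ps -q); do
        docker inspect -f '{{ .Name }} {{ .NetworkSettings.IPAddress }}' "$id"
    done
}
```

Drop it next to your drm/dps aliases and you get a one-line overview of where everything is listening.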

Super easy terminals in-browser with wetty

I really foresee people making extremely FUN web applications with this kind of functionality. You can spin up a container which is running an instance of wetty (a JavaScript-powered, in-browser terminal emulator).

Try it for yourself with:

docker run -p 3000:3000 -dt nathanleclaire/wetty

Wetty only works in Chrome, unfortunately, but there are other JavaScript terminal emulators begging to be Dockerized and, if you are using it for a presentation or something (imagine embedding interactive CLI snapshots in your Reveal.js slideshow – nice), you control the browser anyway. Now you can embed isolated terminal applications in web applications wherever you want, and you control the environment in which they execute with an excruciating amount of detail. No pollution from host to container, and vice versa.

The creative possibilities of this are just mind-boggling to me. I REALLY want to see someone make a version of TypeRacer where you compete with other contestants in real time to type code into vim or emacs as quickly as possible. That would be pure awesome. Or a real-time coding challenge where your code competes with other code in an arena for dominance a la Core Wars.

nsenter

Docker engineer Jérôme Petazzoni wrote an opinionated article a few weeks ago that shook things up a bit. There, he argued that you should not need to run sshd (the daemon for getting a remote terminal prompt) in your containers and, in fact, if you are doing so you are violating a Docker principle (one concern per container). It’s a good read, and he mentions nsenter as a fun trick to get a prompt inside of containers which have already been initialized with a process.

See here or here to learn how to do it.
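The short version of the trick looks something like this (a sketch of the pattern from those posts, assuming a running container and root on the host; nsenter ships with util-linux):

```shell
# Get a shell inside a running container without sshd:
# docker inspect exposes the container's init PID, and nsenter joins
# that process's mount, uts, ipc, net, and pid namespaces.
enter() {
    pid=$(docker inspect -f '{{ .State.Pid }}' "$1")
    sudo nsenter --target "$pid" --mount --uts --ipc --net --pid -- /bin/sh
}
# usage: enter agitated_yonath
```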

#docker

I’m not talking about the hashtag!! I’m talking about the channel on Freenode on IRC. It’s hands-down the best place to meet with fellow Docker users online, ask questions (all levels welcome!), and seek truly excellent expertise. At any given time there are about 1,000 people or more sitting in, and it’s a great community as well as a resource. Seriously, if you’ve never tried it before, go check it out. I know IRC can be scary if you’re not accustomed to using it, but the effort of setting it up and learning to use it a bit will pay huge dividends for you in terms of knowledge gleaned. I guarantee it. So if you haven’t come to hang out with us on IRC yet, do it!

To join:

  1. Download an IRC Client such as LimeChat
  2. Connect to the irc.freenode.net network
  3. Join the #docker channel

Welcome!

Conclusion

That’s all for now, folks.  Tweet at us @docker and tell us your favorite Docker tips and tricks!

Dockercon video: DockerCon Hackathon winners

On the weekend before DockerCon, the team at Docker, Inc. decided to host a 24-hour Docker-centric hackathon. Big thanks to Chef for sponsoring this event. While participants were not allowed to start their projects in advance, they were encouraged to brainstorm prior to the hackathon. Follow these links to view all the photos from day 1 and day 2.

 

 

Each team (1 to 3 people) had exactly 24 hours to complete the project, including the time required to create all materials needed for their presentation in front of the hackathon jury. Here are the selection criteria used by the jury:

  1. Only applications that actually run were judged.

  2. Each project was given 0-4 points in each of the following areas:

    • Novelty. Has anyone ever done this using Docker before?

    • Fit. Does Docker improve the project or fundamentally enable it?

    • Efficiency. Is this implementation small in size, easy to transport, quick to start up and run? Higher scores for more functionality in smaller images and faster start times.

    • Integration. Does the project fit well into other systems, or is it sufficiently complex itself to be its own system? More (useful) interconnection gets more points.

    • Transparency. Can other people easily recreate your project now that you’ve shown how?

    • Presentation. How well did you present your project? Did you speak clearly, cover all the important points, and generally impress people?

    • Utility. Popular vote on how many would use each of the tied projects. So keep your audience in mind!

The three winning teams (below) were offered free tickets to DockerCon and a chance to present their projects during the conference.


Below are their video presentations during the conference and links to their slide decks and GitHub repos when available.

Many thanks to all the participants who made this event a success! We all had a really good time thanks to you and hope to see you again at the next DockerCon hackathon.

Team Dockerana

Team members: Charlie Lewis and George Lewis

Abstract: Instrumentation and logging of Docker hosts and their containers.

Check out their GitHub repo

 


Team Gist-reveal.it

Team members: Ryan Jarvinen and Frederick F. Kautz

Abstract: A Docker image that helps facilitate open source slideshow authoring by templating gist.github.com content using reveal.js.

Check out their GitHub repo and slide deck built with gist-reveal.it


Team Electric Cloud

Team members: Nikhil Vaze, Tanay Nagjee, and Siddhartha Gupta

Abstract: Orchestration and workflow with containers, taking a sample web application from commit to production.

Check out their GitHub repo and their DockerCon Hackathon blog post

 

 

Docker Events and Meetup

Try Docker and stay up-to-date

 

Dockercon video: Docker on Google App Engine

In this session, Ekaterina Volkova from Google talks about how Docker, as an open container standard, creates powerful new tooling experiences for building and deploying applications that run on traditional PaaS platforms like Google App Engine.

 


Learn More


 

Continued Community Momentum Around Orchestration

One of the great aspects of the Docker community is its ever-growing ecosystem of tools, technologies, and services built on the Docker platform. Today, we’re excited to join with Google to highlight the momentum of their Docker orchestration and workload scheduling tool, Kubernetes.  Based on tools Google uses internally to run large workloads like Gmail and Search, Kubernetes was first announced at last month’s DockerCon, the Docker community’s inaugural conference.


Orchestration is an important category of tooling for distributed applications built on the Docker platform, and Kubernetes joins Mesos, Consul, Fleet, Geard, ZooKeeper, and others, each addressing a particular use case or niche. In Kubernetes’ case, it coordinates Docker workloads so as to take advantage of Google Compute’s underlying operations and infrastructure. Given Google’s expertise in large scale operations, Kubernetes is a welcome addition to this tool category.


This proliferation of orchestration tools puts the user in the position of having to evaluate the field and select one.  And yet as each user’s requirements are different and each tool has its strengths, the decision is a complex one: Should they prioritize for service discovery? Clustering? Composition? Workload scheduling? And what new requirements will the user face as their app needs evolve?  In a world where the technology is iterating and improving rapidly, wouldn’t it be awesome if the user didn’t get locked-in to one vendor’s solution?


Thanks to libswarm, a new community project announced at DockerCon in Solomon’s keynote address, now they won’t.  libswarm is a standard interface to combine and organize services in a distributed system.  It provides the building blocks or primitives for orchestration services like composition, clustering, service registration and discovery, and more.  Much like the “write once, run anywhere” promise for apps in Docker containers, libswarm’s “define once, run anywhere” promise for distributed systems has inspired the community, and we’re already seeing community-contributed libswarm adapters for Mesos, Geard, Fleet, AWS EC2, Google Compute, Rackspace, Microsoft Azure, Tutum, Orchard, and others.  While libswarm and Kubernetes don’t work together yet, we are excited to work with Google to make this a reality.  This groundswell of support of libswarm’s building blocks provides users with interoperability across orchestration tools and service providers, freeing users from lock-in and enabling multi-cloud, multi-environment deployments and workload migration.


There’s a brave new world ahead for distributed systems enabled by the libswarm community, and we’re excited to partner with Google and others on this journey.  Watch this space!

Dockerize early and often,

- The Docker Team

Learn More