Category: Latest Posts

DockerCon EU: The future of the Docker project by Solomon Hykes

In this post, Solomon Hykes talks about the people and the numbers behind the Docker project. From the role of the Docker Governance Advisory Board (DGAB) to a review of the metrics associated with pull requests (PRs), Solomon reaffirms his commitment to a federated, open-design project. By breaking down the project into sub-systems like Networking, Storage, Image Distribution, Trust, etc., Solomon invites the “Human Stack” of contributors, maintainers, architects and other stakeholders to collaborate and define a new model for open source at large scale.


DockerCon Amsterdam - December 2014 - Photography © Joni Israel




Learn More


DockerCon video: Migrating a large code-base to Docker containers

In this session, Sage UK employees Doug Johnson (Head of Architecture) and Jonathan Lozinski (Technical Architect) discuss the process of moving a set of large, monolithic Rails applications into a collection of Docker containers. Their talk highlights some of the challenges they faced and the techniques they used to overcome them.




Learn More


Introducing user invites for Docker Hub

Docker Hub is becoming a central part of your dev workflow and the home of your private Dockerized apps.  Docker Hub organizations allow you to securely share private repos with everyone on your team. The organizations feature lets you structure your team members into groups that align with different security permission levels for your repositories.

Simplified Organization Setup

Docker Hub email invite

To make setting up and managing your team’s new users and access control permissions easier, we’ve introduced email invitations to Docker Hub. Add anyone to your organization by email address, whether they have a Docker Hub account or not. Those new to Docker Hub will receive an invite to sign up in seconds and get automatically added to your team.

Get Started Now

Join the growing number of individuals and teams who are discovering how Docker and Docker Hub accelerate how quickly you can ship any app, anywhere.

DockerCon EU: Keynote on Docker Hub and Docker Hub Enterprise

This post covers the second half of the DockerCon EU evening keynote, given on December 4th, 2014. While the first part of the keynote focused on the Docker platform’s new orchestration services, this second part focuses on Docker Hub and its new features. It also covers Docker Hub Enterprise, which was announced during the conference.

For your convenience, we’ve divided this part of the keynote into four discrete sections:

1. Introduction to Docker Hub by Scott Johnston

2. The future of Docker Hub by Justen Stepka

3. Bleacher Report case study by Tung Nguyen

4. Introducing Docker Hub Enterprise by Jon Chu and Brian Bland





Learn More

DockerCon Europe Keynote: Continuous Delivery in the Enterprise by Henk Kolk (ING)

In this keynote, Henk Kolk, Chief Architect at ING, talks about Continuous Delivery in a large financial services company. Kolk details the business benefits of ING’s DevOps transformation and highlights the role Docker played in speeding up innovation cycles.







Learn More

DockerCon EU: Keynote on Orchestration

In this post, we’re pleased to present you with video of the first half of the DockerCon EU evening keynote, given on December 4th.  This part of the keynote focused on the Docker platform’s new orchestration services.  The keynote debuted three new platform services designed to cover all aspects of the dynamic lifecycle of distributed applications. They help you cope with a development environment where new code or a new Dockerized service that changes application functionality can be put into production in minutes instead of months.

While Docker’s orchestration services are the most comprehensive on the market, their unique modular structure makes them accessible not only to end-users – like developers and sysadmins – but also to our ecosystem partners through robust APIs.

For your convenience, we’ve divided the keynote into four discrete sections:

  1. Introduction: The future of the Docker project by Solomon Hykes
  2. Docker Machine by Ben Firshman
  3. Docker Swarm by Victor Vieux and Andrea Luzzardi
  4. Docker Compose by Aanand Prasad


1. The Future of Docker


distributed apps



Docker Machine: source code and Alpha build





Docker Swarm: source code and Alpha build




Docker Compose: proposal, source code and Alpha build



Learn More

DockerCon Europe keynote: State of the Art in Microservices by Adrian Cockcroft (Battery Ventures)

In this keynote, Adrian Cockcroft (Technology Fellow at Battery Ventures and former Cloud Architect at Netflix) shares his thoughts about the future of microservices, summarizes the differences and commonalities across various microservices architectures and shows how they are evolving.




Learn More

DockerCon Europe Revisited

Last week was the first European edition of the official Docker users conference, DockerCon Europe, at the Nemo Science Museum in Amsterdam (photos below).

DockerCon Amsterdam - December 2014 - Photography © Joni Israel

This event was organized through incredible collaboration with volunteers from the Docker Amsterdam community. Once again, we would like to voice our gratitude to Steven Geerts, Pini Reznik, Harm Boertien, Mark Coleman, Marteen Dirkse, Jaroslav Holub, Catalin Jora and Container Solutions for their contribution and support.



In addition to the volunteers we are also grateful for our amazing sponsors who helped make this conference a success.




We’re pleased to announce that all the keynotes and breakout sessions were recorded and will be published over the next few days, starting today. For now, we are excited to share the first two videos from the event:


OPENING WORDS BY SOLOMON HYKES (CTO and Founder of Docker, Inc. and Chief Maintainer of the Docker Project) AND BEN GOLUB (CEO of Docker, Inc.)

DockerCon Amsterdam - December 2014 - Photography © Joni Israel






DockerCon Amsterdam - December 2014 - Photography © Joni Israel



For those of you who attended the conference, we hope you had as much fun as we did playing with the museum exhibitions, building lego whales and meeting other members of the Docker community.


DockerCon Amsterdam - December 2014 - Photography © Joni Israel

DockerCon Amsterdam - December 2014 - Photography © Joni Israel


Thank you all for coming and we hope to see you at DockerCon 2015 in San Francisco:

  • Venue: San Francisco Marriott Marquis, 780 Mission St, San Francisco, CA 94103
  • Hackathon dates: June 20 – 21, 2015
  • Conference dates: June 22 – 23, 2015
  • Training date: June 24th, 2015
  • CFP: You can submit your talk here.
  • Sponsorship: If you are interested in our sponsorship options, please contact us at


We invite you to follow the official Twitter account @DockerCon and the hashtag #dockercon for the latest updates about the official Docker conferences.

Stay tuned for the next videos!


– The Docker Team


Learn More

Advancing Docker Security: Docker 1.4.0 and 1.3.3 Releases

We’re pleased to announce that today we released Docker Engine 1.4.0. What’s in it? As Solomon Hykes described last week at DockerCon Europe, its major “feature” is an emphasis on bug fixes and platform stability: over 180 fix commits were merged. Docker 1.4.0 also adds the overlay filesystem as a new, experimental storage driver (see release notes and bump branch).

Today, we also released Docker Engine 1.3.3, which fixes three vulnerabilities (see advisory). These fixes are included in version 1.4.0 as well. Here is a bit more info on the fixes:

  • On November 24, 2014, we released Docker 1.3.2 to remedy two critical issues that could be exploited by a malicious image to break out of a Docker container. Please see our security advisory for more details.
  • After releasing 1.3.2, we discovered additional vulnerabilities that could be exploited by a malicious Dockerfile, image, or registry to compromise a Docker host, or spoof official images.
  • The remediations have been released today in both Docker Engine 1.3.3 and Docker Engine 1.4.  All users should plan to upgrade to Docker Engine 1.3.3 or higher.  Please see the docs for upgrade instructions.
  • Please note that these vulnerabilities only affect users downloading or running malicious images, or building from malicious Dockerfiles. Users may protect themselves from malicious content by only downloading, building, or running images from trusted sources.  In addition, we recommend that you:

    ◦ Run Docker Engine with AppArmor or SELinux to provide additional containment.

    ◦ Map groups of mutually-trusted containers to separate machines and VMs.

Advancing Docker Security

Following on our security advisory and subsequent Docker Engine 1.3.3 release, I would like to share some of our thoughts and plans regarding security. Security is of paramount importance to Docker, which is reflected in our:

  1. Efforts to rapidly address vulnerabilities when they are identified
  2. Enhancements to the security of the platform, our users, and their applications through our roadmap
  3. Collaboration with our contributors and ecosystem partners to define a set of best practices for Docker security.

Specifically, our efforts have focused on the following:

  1. Product & Ecosystem – Docker Engine takes advantage of the security mechanisms and isolation provided by the OS. This is pluggable, with support on Linux for namespaces, capabilities, and cgroups implemented through either libcontainer or lxc. In the future, we expect new execution engine plugins to offer more choice and greater granularity for our security-focused users. These mechanisms are part of what defines a container, and running in a container is safer than running without. On systems where supported, Docker has incorporated SELinux and AppArmor integration. Red Hat, Canonical, and other companies have been active members of the Docker community, helping us drive security forward.
    Furthermore, we have added signed Docker images in our Docker Hub Official Repos starting in release 1.3. This is the first step towards a more robust chain of trust that allows users to have confidence in the origin of their images. However, do note that untrustworthy sources may still create signed images and it will be up to users to trust, or not trust, the developers of those images.  You can read more about our Trust System proposal here and you are encouraged to add feedback.  We continue to receive great input from all around the community on ideas for security features, and as these come together we’ll be sure to share the proposals and roadmaps here – stay tuned for more!
  2. Security auditing, reporting, and response – we perform our own security testing as well as engaging a private security firm to audit and perform penetration testing. Issues are also received by our active user and developer community. All issues found or reported are promptly triaged, with critical issues initiating an immediate response. Our goal is to have security fixes for the current stable release in the hands of our users absolutely as quickly as possible. Fixes, once prepared, are initially sent to an early disclosure notification list for review and for vendor preparedness in advance of public disclosure. This list includes Linux distributions and cloud providers. We continue to develop and update our practices as we learn.
  3. Disclosure & Transparency – we practice responsible disclosure. Without compromising users, we disclose and provide updates on security issues in a timely manner by issuing security releases and associated security advisories. We further plan to enhance our security page where we will be providing a historical accounting of published advisories and will provide a hall of fame for researchers. We value and welcome input from the broader community and will acknowledge the security contribution in our hall of fame! If you have any feedback or would like to report a possible security issue simply email

As we grow, we will continue our investment in our security team, contributions, tooling and processes. This investment will make Docker safer, helping it become a secure and trusted partner for our users.

You can help! Please report issues to

For more, see


Marianna Tessel
Docker SVP of Engineering

Announcing Docker Machine, Swarm, and Compose for Orchestrating Distributed Apps

As users start exploring Docker and Docker Hub, they typically start by Dockerizing some apps, incorporating Docker into their build-test pipeline, creating a Docker-based development environment, or trying out one of the other half-dozen common use cases.  In these use cases and others, we’ve heard from many community members about how Docker has sped up development-to-deploy cycle times and eliminated the friction of moving apps from laptops to data centers, or from one cloud to another.


Da capo

After getting their feet wet with simple, single-container apps, we see many users start to employ Docker as a platform for more complex, distributed applications (a.k.a., cloud native, microservices architecture, twelve-factor apps), or for apps composed of multiple containers, typically running across multiple hosts.  Users’ motivations for exploring distributed apps vary, but some common themes are emerging. Users find them appealing because they are:

  • Portable.  Application owners want an application stack that is independent of the underlying infrastructure so they’ll have the freedom to deploy on the infrastructure of their choosing – and then move or scale without friction as conditions change;
  • Composable.  Application owners want apps composed of multiple smaller services so they can keep teams small and fast-moving as well as assemble apps from trusted, secure components;
  • Dynamic.  Development-to-deployment cycles measured in months are no longer acceptable – application owners want cycle times measured in hours or even minutes.  Breaking down monolithic applications into multiple smaller services that can be updated independently reduces change management risk and compresses cycle time.
  • Scalable.  To simultaneously meet changes in demand while efficiently using infrastructure resources, application owners want to frictionlessly scale-up and -down across hosts, data centers, and even public clouds.

However, as users in the Docker community take the early steps on their path to distributed apps, we often hear questions like these:

  • “My singleton Docker containers are 100% portable to any infrastructure; but, how do I ensure my multi-container distributed app is also 100% portable, whether moving from staging to production, or across data centers, or between public clouds?”
  • “My organization has standardized on Docker; how do I retain this standard while still taking advantage of the breadth and depth of the tools in the Docker ecosystem?”
  • “How should I refactor my app delivery pipeline to support rapid iterations on the containers providing the services?”
  • “Breaking up my monolithic apps into microservices means there are many more components to keep track of and manage; what’s available in the Docker ecosystem to help with this?”

To help users along their distributed apps journey, today at DockerCon Europe we’re excited to announce three new Docker orchestration services: Docker Machine, Docker Swarm, and Docker Compose.  Each one covers a different aspect of the lifecycle for distributed apps. Each one is implemented with a “batteries included, but removable” approach which, thanks to our orchestration APIs, means they may be swapped out for alternative implementations from ecosystem partners designed for particular use cases.

dockercon blog - orchestra conductor

Docker Machine

Docker Machine takes you from zero-to-Docker in seconds with a single command.

Before Docker Machine, a developer would need to log in to the host and follow installation and configuration instructions specifically for that host and its OS.  With Docker Machine, whether provisioning the Docker daemon on a new laptop, on virtual machines in the data center, or on a public cloud instance, the same, single command …

% machine create -d [infrastructure provider] [provider options] [machine name]

… gets the target host ready to run Docker containers.  Then, from the same interface, you can manage multiple Docker hosts regardless of their location and run any Docker command on them.

Furthermore, the pluggable backend of Docker Machine allows users to take full advantage of ecosystem partners providing Docker-ready infrastructure, while still accessing everything through the same interface.  This driver API works for provisioning Docker on a local machine, on a virtual machine in the data center, or on a public cloud instance.  In this Alpha release, Docker Machine ships with drivers for provisioning Docker locally with VirtualBox as well as remotely on DigitalOcean instances; more drivers are in the works for AWS, Azure, VMware, and other infrastructure.
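To make the idea of a pluggable driver backend concrete, here is a minimal Python sketch of what a provisioning-driver interface could look like. All names here (Driver, VirtualBoxDriver, create_machine) and the returned addresses are hypothetical illustrations, not Docker Machine’s actual API.

```python
# Hypothetical sketch of a pluggable provisioning-driver interface, in the
# spirit of Docker Machine's driver API. Names and addresses are illustrative.
from abc import ABC, abstractmethod

class Driver(ABC):
    """One driver per infrastructure provider."""

    @abstractmethod
    def provision(self, name: str) -> str:
        """Create a host and return its Docker daemon address."""

class VirtualBoxDriver(Driver):
    def provision(self, name: str) -> str:
        return f"tcp://{name}.local:2376"          # placeholder address

class DigitalOceanDriver(Driver):
    def provision(self, name: str) -> str:
        return f"tcp://{name}.do.example:2376"     # placeholder address

DRIVERS = {"virtualbox": VirtualBoxDriver, "digitalocean": DigitalOceanDriver}

def create_machine(driver: str, name: str) -> str:
    # "machine create -d <driver> <name>" dispatches to the chosen backend;
    # afterwards, every host is managed through the same interface.
    return DRIVERS[driver]().provision(name)

print(create_machine("virtualbox", "dev"))   # tcp://dev.local:2376
```

The point of the design is that adding a new provider means writing one new driver class, while the user-facing command stays identical.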

Note that Docker Machine is a separate project from the Docker Engine.  To try out the Alpha build and contribute to Docker Machine and its drivers, go to its repository.


Docker Swarm

Docker Swarm is native clustering for Dockerized distributed apps. It picks up where Docker Machine leaves off by optimizing host resource utilization and providing failover services.  Specifically, Docker Swarm allows users to create resource pools of hosts running Docker daemons and then schedule Docker containers to run on top, automatically managing workload placement and maintaining cluster state.

For input, the default scheduler uses the resource requirements of the Docker container workloads and the resource availability of the hosts in the cluster, then uses bin packing to automatically optimize workload placement.  For example, here is how a user would schedule a redis container requiring 1 gig of memory:

% docker run -d -P -m 1g redis
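As a rough illustration of the bin-pack strategy described above, here is a Python sketch (not Swarm’s actual implementation): each container is placed on the eligible host with the least free memory, so workloads stay packed onto as few hosts as possible.

```python
# Illustrative bin-pack placement sketch, not Swarm's real scheduler.
def bin_pack(hosts, mem_request):
    """hosts: {name: (used_mem, total_mem)} in GB; returns the chosen host."""
    candidates = [
        (name, used, total) for name, (used, total) in hosts.items()
        if total - used >= mem_request                  # enough free memory?
    ]
    if not candidates:
        raise RuntimeError("no host has enough free memory")
    # Bin pack: prefer the host with the LEAST remaining free memory.
    name, used, total = min(candidates, key=lambda c: c[2] - c[1])
    hosts[name] = (used + mem_request, total)           # record the placement
    return name

cluster = {"host1": (3, 4), "host2": (0, 8)}            # (used, total) in GB
print(bin_pack(cluster, 1))   # host1: only 1 GB free, so it is filled first
```

Note how the nearly-full host is chosen over the empty one; a spread strategy would make the opposite choice.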

To support specific requirements and policy-based scheduling, Docker Swarm provides standard and custom constraints.  For example, say that to ensure good I/O performance you want to run your MySQL container on a host with SSD storage.  You could express this as a constraint when scheduling the MySQL workload as follows:

% docker run -d -P -e constraint:storage=ssd mysql
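Conceptually, a constraint like `storage=ssd` narrows the set of hosts the scheduler may choose from. This small Python sketch (an illustration, not Swarm’s code) shows that filtering step, assuming hosts carry simple key/value labels:

```python
# Illustrative constraint filtering: keep only hosts whose labels satisfy
# every requested constraint, before any placement strategy runs.
def filter_hosts(hosts, constraints):
    """hosts: {name: labels-dict}; constraints: {key: required value}."""
    return [
        name for name, labels in hosts.items()
        if all(labels.get(key) == value for key, value in constraints.items())
    ]

hosts = {
    "fast-1": {"storage": "ssd"},
    "bulk-1": {"storage": "disk"},
}
print(filter_hosts(hosts, {"storage": "ssd"}))   # ['fast-1']
```

The surviving hosts are then handed to the placement strategy (e.g. bin packing), so constraints and strategies compose cleanly.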

In addition to resource optimization, Docker Swarm provides high-availability and failover.  Docker Swarm continuously health-checks the Docker daemon’s hosts and, should one suffer an outage, automatically rebalances by moving and re-starting the Docker containers from the failed host to a new one.
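The failover behavior can be pictured with a simple Python sketch. This is only an illustration of the idea; a real scheduler would re-run its placement strategy with resource checks rather than the naive round-robin used here:

```python
# Illustrative failover sketch: move containers off a failed host onto
# the surviving hosts (round-robin, for simplicity of illustration).
def rebalance(placement, failed_host, healthy_hosts):
    """placement: {container: host}; returns the updated placement."""
    moved = [c for c, h in placement.items() if h == failed_host]
    for i, container in enumerate(moved):
        placement[container] = healthy_hosts[i % len(healthy_hosts)]
    return placement

p = {"web": "hostA", "db": "hostA", "cache": "hostB"}
print(rebalance(p, "hostA", ["hostB", "hostC"]))
# {'web': 'hostB', 'db': 'hostC', 'cache': 'hostB'}
```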

One of the unique aspects of Docker Swarm is that it can scale with the lifecycle of the app.  This means that the developer can start with a “cluster” consisting of a single host and maintain a consistent interface as the app scales from one host to two, 20, or 200 hosts.

Finally, Docker Swarm has a pluggable architecture and ships “batteries included” with a default scheduler.  To this end, we’re excited to announce a partnership with Mesosphere to make Mesos a “first-class citizen” in Docker Swarm for landing Docker container workloads.  Stay tuned for the public API in the first half of 2015, which will allow swapping in a scheduler implemented by an ecosystem partner or even your own custom implementation.  Nevertheless, regardless of the underlying scheduler implementation, the interface to the app remains consistent, meaning that the app remains 100% portable.

The above just scratches the surface.  Like Docker Machine, Docker Swarm is a separate project from the Docker Engine.  If you want to learn more about Docker Swarm, including getting your hands on an Alpha build, head on over to its repo.

Docker Compose

Docker Compose is the last piece of the orchestration puzzle.  After provisioning Docker daemons on any host in any location with Docker Machine and clustering them with Docker Swarm, users can employ Docker Compose to assemble multi-container distributed apps that run on top of these clusters.

The first step to employing Docker Compose is to use a simple YAML file to declaratively define the desired state of the multi-container app:

     web:
       build: .
       command: python app.py
       ports:
        - "5000:5000"
       volumes:
        - .:/code
       links:
        - redis
     redis:
       image: redis:latest
       command: redis-server --appendonly yes

This example shows how Docker Compose takes advantage of existing containers.  Specifically, in this simple two-container app declaration, the first container is a Python app built each time from the Dockerfile in the current directory.  The second container is built from the redis Official Repo on the Docker Hub Registry.  The links directive declares that the Python app container is dependent on the redis container.

Now that it’s defined, starting your app is as easy as …

% docker up

With this single command, the Python container is automatically built from its Dockerfile and the redis container is pulled from the Docker Hub Registry.  Then, thanks to the links directive expressing the dependency between the Python and redis containers, the redis container is started *first*, followed by the Python container.
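The start ordering that links imply can be sketched as a small topological sort. This is an illustration of the idea, not Docker Compose’s actual code:

```python
# Illustrative sketch: "links" imply a start order, because a linked-to
# container must be running before the containers that depend on it.
def start_order(links):
    """links: {service: [services it links to]}; returns a start sequence."""
    order, seen = [], set()

    def visit(svc):
        if svc in seen:
            return
        seen.add(svc)
        for dep in links.get(svc, []):
            visit(dep)          # start dependencies first
        order.append(svc)

    for svc in links:
        visit(svc)
    return order

# The two-service app above: web links to redis, so redis starts first.
print(start_order({"web": ["redis"], "redis": []}))   # ['redis', 'web']
```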

Docker Compose is still a work-in-progress and we want your help to design it. In particular, we want to know whether or not you think this should be a part of the Docker binary or a separate tool. Head over to the proposal on GitHub to try out an alpha build and have your say.


All this is just the briefest introduction to Docker Machine, Docker Swarm, and Docker Compose.  We hope you’ll take a moment to try them out and give us feedback – these projects are moving quickly and we welcome your input!

We also wish to thank the many community members who have contributed their experience, feedback, and pull requests during the pre-Alpha iterations of these projects.  It’s thanks to you that we were able to make so much progress so quickly, and in the right direction.

Distributed apps offer many benefits to users – portability, scalability, dynamic development-to-deployment acceleration – and we’re excited by the role the Docker platform, community, and ecosystem are playing in making these apps easier to build, ship, and run.  We’ve got a ways to go, but we’re psyched by this start – join us and help us get there faster!

Happy Hacking,

– The Docker Team

Learn More

