
Fig 1.0: boot2docker compatibility and more


Today’s a big day for Fig, our Docker-based development environment tool: we’re releasing version 1.0. It’s the first and last major version increment, as we’re already hard at work building the functionality Fig provides into Docker itself.

There’s an absolute ton of improvements in this one, but the most wonderful is that Fig now works out-of-the-box with boot2docker on OS X, because volumes on the host now work the way you expect them to (for more on how that works, see the Docker 1.3 announcement). This means Mac users need suffer unofficial solutions no more: run the standard Docker installer, then download Fig and you’re off.

Beyond that, we’ve got new commands, .dockerignore support and many more improvements too numerous to list here. Take a look at the release notes and bask in the goodness.

If you’re already a user, you’ll find a lot to love in this upgrade – if you’re not, now’s the perfect time to try Fig. It’s delicious.
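For readers who haven't tried it, here's a minimal sketch of what Fig looks like in practice. The service names, port, and paths below are illustrative, not from this post:

```shell
# Write a minimal fig.yml describing two services (names are illustrative):
cat > fig.yml <<'EOF'
web:
  build: .            # build the web image from the Dockerfile in this directory
  ports:
    - "8000:8000"     # expose the app on the host
  volumes:
    - .:/code         # host-mounted source -- now works under boot2docker too
  links:
    - redis
redis:
  image: redis        # use the official redis image from the Docker Hub Registry
EOF

# One command builds the images and starts both containers:
fig up
```

Stopping everything is just as simple: `fig stop`.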

Learn More

Docker 1.3: signed images, process injection, security options, Mac shared directories


Today we’re pleased to announce the availability of Docker Engine 1.3.  With over 750 commits from 45 contributors, this release includes new capabilities as well as lots of quality enhancements.  You can get more details in the release notes, but we’ll highlight four of the new features here.

Tech Preview: Digital Signature Verification

First up, in this release, the Docker Engine will now automatically verify the provenance and integrity of all Official Repos using digital signatures. Official Repos are Docker images curated and optimized by the Docker community to be the best building blocks for assembling distributed applications.  A valid signature provides an added level of trust by indicating that the Official Repo image has not been tampered with.

With Official Repos representing one out of every five downloads from the Docker Hub Registry, this cryptographic verification will provide users with an additional assurance of security. Furthermore, it represents the first of several features we’ll be shipping in the coming months for both publishers and consumers of repos, features that will support publisher authentication, image integrity and authorization, PKI management, and more.  Watch this space.

Note that this feature is still a work in progress: for now, if an official image is corrupted or tampered with, Docker will issue a warning but will not prevent it from running. And non-official images are not verified either. This will change in future versions as we harden the code and iron out the inevitable usability quirks. Until then, please don’t rely on this feature for serious security just yet.

Inject new processes with docker exec

Next, when developing an application, you sometimes need to look at it while it’s running.  A number of tools, like nsinit and nsenter, have sprung up to help developers debug their Dockerized apps, but these are additional tools to find, learn, and manage.  Similarly, some users have taken to running an init process to spawn sshd along with their app to allow them access, which creates risk and overhead.

To make debugging easier, we’re introducing docker exec, which allows a user to spawn a process inside their Docker container via the Docker API and CLI.  For example…

$ docker exec -it ubuntu_bash bash

…will create a new Bash session inside the container ubuntu_bash.

To be clear, by providing this we’re not changing our recommended approach of “one app per container.”  Instead, we’re responding to users who’ve told us they sometimes need helper processes around the app. That’s what `docker exec` is about.
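`docker exec` isn’t limited to interactive shells; it can also run one-off helper commands without attaching a terminal. A sketch, reusing the container name from the example above:

```shell
# Run a single non-interactive command inside the running container:
docker exec ubuntu_bash touch /tmp/execWorks

# Then confirm from an interactive session that the file was created:
docker exec -it ubuntu_bash ls /tmp
```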

Tune container lifecycles with docker create

The docker run <image name> command creates a container and spawns a process to run it.  Many users have asked to break this apart for finer-grained management of their container lifecycles.  The docker create command makes this possible.  So for example…

$ docker create -t -i fedora bash

…creates a writable container layer (and prints the container’s ID to STDOUT), but doesn’t run it.  You could then do the following… 

$ docker start -a -i 6d8af538ec5

…to run the container.  That is, docker create gives the user and/or process supervisors the flexibility to use the docker start and docker stop CLI commands to manage the container’s lifecycle.
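Putting the pieces together, a supervisor script could drive the whole lifecycle explicitly. A sketch, capturing the ID that `docker create` prints (the image choice is illustrative):

```shell
# Create the container without running it; docker create prints the new ID:
ID=$(docker create -t -i fedora bash)

docker start -a -i "$ID"   # attach and run the container
docker stop "$ID"          # stop it; the container and its layer remain
docker start "$ID"         # it can be started again later
docker rm "$ID"            # finally, remove the container layer for good
```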

Security Options

With this release we’ve added a new flag to the CLI, --security-opt,  that allows users to set custom SELinux and AppArmor labels and profiles.  For example, suppose you had a policy that allowed a container process to listen only on Apache ports.  Assuming you had defined this policy in svirt_apache, you could apply it to the container as follows:

$ docker run --security-opt label:type:svirt_apache -i -t centos bash

One of the benefits of this feature is that users will be able to run docker-in-docker without having to use docker run --privileged on kernels supporting SELinux or AppArmor.  Because the container is not granted all the host access and rights that --privileged confers, the surface area of potential threats is significantly reduced.
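On hosts using AppArmor rather than SELinux, the same flag takes a profile name instead of an SELinux label. A sketch, assuming a profile has already been loaded on the host (the profile name here is hypothetical):

```shell
# Apply a pre-loaded AppArmor profile to the container's process
# (my_apparmor_profile is a placeholder for a profile you have defined):
docker run --security-opt apparmor:my_apparmor_profile -i -t centos bash
```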

boot2docker: Shared directories on Mac OS X

Using Docker on Mac OS X has become much easier since we incorporated boot2docker, but the experience has had some usability quirks. With this release we are addressing the most common issue: sharing directories between your Mac and your containers. Using Docker 1.3 with the corresponding version of boot2docker, host-mounted volumes now work the way you expect them to.

For example, this command:

$ docker run -v /Users/bob/myapp/src:/src [...]

will mount the directory /Users/bob/myapp/src from your Mac into the container. This makes it much easier to use Docker for a continuous development flow, where you benefit from a predictable containerized development environment but don’t want to rebuild the container every time you change a line in your source code. If you are using Fig for your development workflow, for example, the benefits are immediately obvious.
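Here’s a sketch of that continuous-development flow (the paths, container name, and image are illustrative): edit a file on the Mac side, and the running container sees the change immediately, with no rebuild.

```shell
# Start a container with the source directory mounted from the Mac:
docker run -d --name devbox -v /Users/bob/myapp/src:/src python:2.7 \
    python /src/app.py

# Edit /Users/bob/myapp/src/app.py in your editor on the Mac, then
# confirm the container sees the change without any rebuild:
docker exec devbox cat /src/app.py
```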

Note that there are still some limitations: for example, this feature is limited to boot2docker’s VirtualBox configuration, cannot be managed dynamically, and only works for directories under /Users. But we are receiving exciting contributions to improve volume management, so expect this area to improve drastically in the next few releases.

Many thanks to all 45 contributors who participated in this release.  In particular, we’d like to give shout-outs to @burke, @duglin, @hugoduncan, @rhatdan, @tianon, @vbatts, and to release captain, @crosbymichael.  Thanks everyone!

We hope the above gives a glimpse into Docker Engine 1.3.  For more details, please check out the GitHub 1.3 milestone issues and pull requests. We look forward to your feedback!

Happy hacking,

– The Docker Team


“Identity Penguin” cartoon by Laurel.

Docker and Microsoft Partner to Drive Adoption of Distributed Applications to Every Enterprise Operating System and Cloud


Today, we announced an exciting set of joint initiatives with Microsoft, including:

  • Extending Docker to Windows with Docker Engine for Windows Server
  • Microsoft’s support of Docker’s open orchestration APIs
  • Integration of Docker Hub with Microsoft Azure, and
  • Collaboration on the multi-Docker container model, including support for applications consisting of both Linux and Windows Docker containers

I’d like to provide some context for this announcement, and why we are so excited.

When Docker was launched as an open source project 18 months ago, we had a simple goal:

“To build the ‘button’ that enables any application to be built and deployed on any server, anywhere.”

Today, we feel we’ve largely succeeded…for a) Linux applications consisting of b) a limited number of Docker containers.

We need to make progress in two big areas over the next few months in order to achieve Docker’s original goal:

  1. Extend Docker to major architectures beyond 64-bit Linux, and
  2. Extend to support applications consisting of a large number of Docker containers, distributed across servers, clusters, and data centers.

Today’s announcement represents a big step toward addressing both of these challenges.

Docker Engine for Windows Server

While Docker started with Linux and is a driving force behind the Linux containerization movement, over half of all enterprise workloads are Windows-based.

With today’s announcement, we are essentially doubling the universe of developers and organizations that can participate in the Docker ecosystem, creating a uniform standard across the two largest enterprise application ecosystems.

Docker Engine for Windows Server will enable developers to build containerized Windows and Linux applications using the same Docker tooling and leveraging the same, huge Docker ecosystem. The effort will include:

  1. A Microsoft-led initiative to add container capabilities (e.g., the equivalent of namespaces and cgroups) to Windows
  2. A new Docker Windows Daemon, which will be built in open source under the aegis and governance of the Docker project, with input from Microsoft, Docker, Inc., and the broader Docker community
  3. The overall Docker platform, which will also be extended (in the open) to support both the Docker Windows Daemon and the Docker Linux Daemon.

Microsoft Support for Docker’s Orchestration APIs

We’ve been making significant progress towards enabling multi-container, distributed applications in the past few months. Composing multi-Docker container applications will become significantly easier as we integrate Fig into Docker.  We’ve also been making progress on other critical capabilities for orchestration including provisioning and managing Docker hosts, creating clusters of Docker hosts, and inter-Docker container networking, all of which will be previewed this quarter.

Microsoft’s endorsement and early work with our orchestration APIs is hugely exciting.  Microsoft and Docker share a common vision that multi-container applications should be assembled using both Dockerized Windows and Dockerized Linux components.  The two companies will work with emergent infrastructure tools for multi-container applications like Kubernetes, Mesos, Helios etc. to provide a uniform Docker interface that provides developers with multi-platform orchestration capabilities leveraging Dockerized content from these two ecosystems.

The Partnership

At the heart of the Microsoft and Docker partnership is a shared view that there is great leverage in providing developers a common approach to building their applications.  Microsoft viewed Docker for what it is at its core: an open platform for distributed applications that provides a uniform user interface to a modular set of tools for containerizing and then orchestrating those applications.

Unifying Windows Server and Linux through the Docker platform aligns with Microsoft CEO Satya Nadella’s strategy to be the “productivity and platform company for the mobile-first and cloud-first world.”  There is no greater productivity gain that I can think of than integrating two great development ecosystems and providing the means to collaborate by leveraging the best application “content” from each.

This new era of development that we are embarking on will be a far cry from today’s status quo.  Today the vast majority of enterprise applications are slow-evolving monoliths that are bound to a specific infrastructure, often even to a specific server.

Dockerized distributed applications, in contrast, are composed of modular components that allow for constant, real-time innovation and can be ported across any infrastructure, whether on premise or in the cloud, without any modification. Modularity means that Dockerized distributed applications are evolutionary; they can blend the old with the new.

Integrating application content is also central to this partnership, as we announce federation plans between Docker Hub and Azure Gallery, which includes a broad array of content from the Microsoft ISV ecosystem.  Docker Hub, which in just 4 months has seen its catalog grow to 45,000+ Dockerized applications, will become the place for developers seeking to find and share the best distributed applications and components, whether they are Windows or Linux based.

Great Beginnings; Welcome to Microsoft

Today, the Docker ecosystem extends a warm “Welcome!” to Microsoft, and its ecosystem. We cannot wait for the collaboration to begin, and will be sharing more details in the weeks to come.

In the meantime, check out one of our first collaborations at Docker Global Hack Day #2.



Your Docker agenda for LinuxCon / CloudOpen Europe in Düsseldorf




This week, the LinuxCon Europe / CloudOpen Europe conference is taking place in Düsseldorf, Germany. Unfortunately, there won’t be any members of the Docker Team attending but we’re confident you’ll meet awesome Docker contributors and active members of the Docker Community. In addition to informal interactions, here’s a list of where and when to see Docker-related discussions and demos:


Tuesday, October 14th:

12:15pm – Room 19

Multi-OS Continuous Packaging with Docker by Bruno Cornec, HP

Bruno will explain how to build a new container and set it up for this usage, then prepare delivery of the project’s content in order to build packages in it for the hosted distribution and publish them for immediate consumption via the package management system.


2:30pm – Room 19

The future of PaaS with Docker by Marek Jelen, Red Hat

In his talk, Marek Jelen will offer a peek into the future of OpenShift and how Red Hat is integrating Docker as the base of the PaaS. In the presentation he will talk about Geard, systemd, and other Linux technologies, and how they are all being integrated into a simple-to-use yet powerful system.


3:30pm – Room 19

Building a DevOps PaaS with Docker, CoreOS, and Apache Stratos by Lakmal Warusawithana, WSO2 Inc

In this session Lakmal will dig deep into Apache Stratos, including installing it and deploying sample applications using Docker and CoreOS, and showing how it can be extended to support new application containers. The session will include a demonstration of app deployment, provisioning, auto-scaling, and more.


4:30pm – Room 19

Continuous Integration using Docker & Jenkins by Mattias Gieve, B1 Systems GmbH

This talk describes two scenarios where automatic integration testing with Docker increases the productivity of admins and developers. The first describes how an admin may perform integration testing of Puppet modules; the second implements integration testing of a web app consisting of a web server and a database server.


Wednesday, October 15th:

11:15am – Room 19

Clocker – Migrating Complex Applications to Docker with Apache Brooklyn by Andrew Kennedy, Cloudsoft

Andrew will show how Clocker uses Apache Brooklyn’s cloud abstractions to simplify the deployment and management of a complex application to a virtual Docker infrastructure. Brooklyn will create and maintain the required Docker containers in the right locations for your application, and control and manage the software and services using policies to scale both the application and infrastructure based on their state.


11:15am – Room 19

Using Docker Containers as Your Admin Toolbox by Karanbir Singh, CentOS

Most admins have a set of go-to tools to help do their job, but the toolbox metaphor falls down when it comes to carting the tools to a new system. In this talk, Karanbir will show how to use Docker to carry the tools you need to do your job and then leave the target system in a pristine state.


We think you’ll find these talks interesting, and hopefully they’ll answer some of your questions about the Docker platform and containerization. If you have any questions, please join the #docker IRC channel.

Dockerize early and often,

– The Docker Team


Extending DockerCon Europe CFP for a week


The CFP for DockerCon Europe 2014 closed a couple of days ago. Since Wednesday, we have received a massive number of emails from speakers requesting that we make an exception so that they can submit their talks. In response, we are happy to announce that we have reopened the CFP for one week. You now have until Oct 12th, 12:12am PST to submit your talk. This will be a hard deadline, since we need adequate time to review the papers and select the best.

Tip of the day: the community has asked to see more use cases from actual users. If you are using Docker and want to talk about it, to explain how it helped your team and company and what results came from using Docker, please submit your talk here.

Note: DockerCon Europe is sold out, but we have reserved enough places for the speakers who are selected.

Please ping us on Twitter or email us if you have any questions.

Announcing Docker Global Hack Day #2

DockerCon Europe is sold out! But wait…

Here at Docker HQ, we have been sprinting to keep up with the overwhelming response since the announcement of DockerCon Europe 2014, and today we must inform you that the conference is sold out. Tickets went faster than expected, so we want to give you one last opportunity to attend.

Today, we are super excited to announce Docker Global Hack Day #2 on October 30th! The prize will be full conference passes including roundtrip airfare for all members of the winning team. Last year, the event was a big success, and we expect this year to be even more awesome with more cities and more hackers around the world involved!

The San Francisco edition will kick off with talks by Ben Golub, CEO of Docker, and Solomon Hykes, Founder and CTO of Docker, who will demonstrate the power and new features of Docker 1.3 and how they facilitate the creation of distributed applications.  The agenda will include a number of Docker customers who are building their next generation of applications on our open platform. In addition, the event will feature a surprise announcement for the community. The talks and demos will be live-streamed and recorded, so that every Docker meetup group and Docker hacker participating will be able to learn about the new features and announcements.

This year, 20 cities have already committed to participate in this global event.


Most of those cities will be announced very shortly, but you can already register.

Stay tuned for more cities to join the Global Hack Day, but in any case, everybody is welcome to join this special Docker day, from home or from a Docker Meetup near you.

If you do not find a Docker Meetup Group near you, you can participate by registering for the online edition.


The exact assignment for this Global Hack Day will be revealed on the day itself, and every team (from 1 to 3 hackers) will be able to submit a short presentation (video) and a repo (GitHub or Bitbucket link) of their hack until Monday, November 3rd, 9am PST.


You will be able to submit through this form until Monday November 3rd – 9am PST, with:

  • Title
  • Short abstract
  • Names of the team members (up to 3 members per team)
  • Twitter handles of the team members (optional)
  • Emails of the team members
  • YouTube URL (2-minute video)
  • GitHub or Bitbucket URL of the project

The Docker community will then vote on the best hack in two different ways.

Local Docker Meetup Winner

Every city will vote on the local winner based on these judging criteria:

  1. Only applications that actually run will be judged.
  2. Each project will be given 0-4 points in each of the following areas:
    1. Novelty. Has anyone ever done this using Docker before?
    2. Fit. Does Docker improve the project or fundamentally enable it?
    3. Efficiency. Is this implementation small in size, easy to transport, quick to start up and run? Higher scores for more functionality in smaller images and faster start times.
    4. Integration. Does the project fit well into other systems, or is it sufficiently complex itself to be its own system? More (useful) interconnection gets more points.
    5. Transparency. Can other people easily recreate your project now that you’ve shown how?
    6. Presentation. How well did you present your project? Did you speak clearly, cover all the important points, and generally impress people?
    7. Possible tie-breaker: Utility. Popular vote on how many would use each of the tied projects. So keep your audience in mind!

All projects will be featured on a dedicated page where the entire community will vote for the Global winner.

The prize for local winners is a limited-edition Docker Merit Badge!


Global winner

We will build a page with all projects and videos, for the community to vote for the global winner. Voting will be done through social networks. Note that Local Docker Meetup winners will be featured on this page.

The Global winning team will be invited to attend DockerCon Europe (DockerCon EU tickets + plane tickets for the whole team).



Stay connected with the Docker hackers from all around the World

During the day the Docker team and the Docker community will be on IRC helping you hack your project or answering questions about Docker. The official back channel for this event on IRC is #docker.

The official hashtag for the event on Twitter is #dockerhackday, and everybody will tweet using it. You can also follow us on Twitter to receive news in real time during the day.


This is going to be a lot of fun! See you at the end of October!

If you have any questions about the Docker Global Hack Day #2, please email us or ping us on Twitter.

InfoWorld Bossies 2014


Today we are proud to announce that Docker was named a winner of the InfoWorld Bossies 2014 in two categories:

  • The best open source application development tools
  • The best open source data center and cloud software

We would like to thank our community and our partners for this award, as Docker would not exist without you!

Thank you!

Docker Hub Official Repos: Announcing Language Stacks


With Docker containers fast becoming the standard building blocks for distributed apps, we’re working with the Docker community to make it easier for users to quickly code and assemble their projects.  Official Repos, publicly downloadable for free from the Docker Hub Registry, are curated images informed by user feedback and best practices.  They represent a focused community effort to provide great base images for applications, so developers and sysadmins can focus on building new features and functionality while minimizing repetitive work on commodity scaffolding and plumbing.

At DockerCon last June, we announced the first batch of Official Repos, which covered many standard tools like OS distributions, web servers, and databases.  At the time, we had several organizations join us to curate Official Repos for their particular project, including Fedora, CentOS, and Canonical.  And the community responded enthusiastically as well: in the three months since they launched, Official Repos have grown so much in popularity that they now account for almost 20% of all image downloads.


Based on the search queries on the Docker Hub Registry and discussions with many of you, we determined that the community wants pre-built stacks of their favorite programming languages.  Specifically, developers want to get working as quickly as possible writing code without wasting time wrestling with environments, scaffolding, and dependencies.  So we’ve spent the last several months building and curating Official Repos for the eleven most-searched-for programming language stacks. We’ve iterated, tested, and polished, and today we’re sharing them with a wider audience.

Without further ado, we’re pleased to announce the availability of these Official Repos:  c/c++ (gcc), clojure, go (golang), hy (hylang), java, node, perl, php, python, rails, and ruby.  We’re also happy to announce that Amazon Web Services and the Perl and Hy projects have joined the Official Repos program as contributors.

Under The Hood

For details of these language stacks, please check out the descriptions and Dockerfiles on the individual repos, but below are some highlights:

First off, you’ll see that most of the language stacks are based on the buildpack-deps image, a collection of common build dependencies including development header packages.  This frees users from having to worry about these dependencies – you can just pull the relevant language stack and start coding.
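For instance, you can run a script against a language stack without writing a Dockerfile at all. A sketch, assuming a `hello.py` in the current directory (the file name and mount path are illustrative):

```shell
# Mount the current directory into the container and run the script in place:
docker run --rm -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3 \
    python hello.py
```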

Versions & Tags

Another thing you’ll notice is that each language stack’s Official Repo has multiple versions of the language with a tag for each version.  This lets you quickly pull the specific version you need for your project, for example:

$ docker pull java:8u40


Another cool feature of these language stacks is that, where applicable, they’ve been built with the ONBUILD Dockerfile instruction.  So when you use a language stack as your base image, your build will automatically add your application code to your new image.  This provides simple automation for your build pipeline while allowing for a clean separation between the language stack and your app’s code and its changes.  See the individual language stack descriptions on the Docker Hub Registry for more details about how to take advantage of this feature.
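As a sketch of what this looks like in practice (the tag and trigger instructions below are illustrative; see each repo’s description for the exact onbuild variant it provides): the language stack’s Dockerfile declares deferred ONBUILD triggers, so your application’s Dockerfile can shrink to a single line.

```shell
# The stack's image declares triggers roughly like these (illustrative):
#
#   ONBUILD COPY . /usr/src/app
#   ONBUILD RUN npm install
#
# ...so a downstream application Dockerfile can be one line:
cat > Dockerfile <<'EOF'
FROM node:onbuild
EOF

# Building it automatically copies your code and installs its dependencies:
docker build -t my-node-app .
```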

Got feedback?  Want to contribute?

While we’ve been working on these for the last couple months, we hope today’s release is just the start, not the end.  Like the Docker Engine itself, we think these Official Repos are a good opportunity for the Docker community to collaborate on best practices for these core software building blocks.  To this end, each language stack Official Repo has a comments section and its own GitHub repo for submitting bug fixes and feature ideas.  There’s also an IRC channel, #docker-library on the Freenode IRC network, where the Official Repo maintainers hang out.

Want your own Official Repo?

For those looking to create and maintain an Official Repo of their own project, today we’re happy to share guidelines for getting started, along with Dockerfile best practices.  After prepping your project according to the guidelines, please contact us to coordinate the process of adding your repo to the collection.

In closing, we want to warmly thank all those in the community who have been providing input and contributions to Official Repos – without you this program wouldn’t and couldn’t exist.  We hope everyone finds this latest batch useful and we look forward to your feedback and comments.

Dockerize early and often,

– The Docker Team


Report: Burrito Quest I

At Docker, we are lucky to be able to spend time exploring San Francisco, one of the world’s great cities in terms of culture, architecture and, of course, burritos. Forget about crabs or sourdough: what San Francisco does best is the burrito, that noble combination of beans, meat, cheese, salsa, and love, all in a convenient wrapper that lets you eat it one-handed. And, like the City itself, the burrito is incredibly diverse. Do you prefer black beans or pintos? Are you a carnivore who craves the al pastor and the carne asada, or do you seek out the elusive perfect chile relleno burrito (the turducken of Mexico)?

So many options, so many questions. As an engineer-driven company, we needed to know the optimal solution. We had to know where to find the City’s finest burrito.

And so it came to be that Burrito Quest was born. We decided that once a month we would walk to another potential purveyor of the perfect burrito. In order to build a comprehensive test harness, we decided that each user would pursue their own story, be it a simple pollo, a bold lengua, or even a chile relleno. I myself went with a baseline carne asada. Based on a wide range of criteria (texture, flavor, distribution of ingredients, structural integrity, value), we would assign each dining experience a value from 1 to 10.

In order to make it a proper Quest, it was determined that we had to trek to our burritos, even if this meant hiking an hour or more through potentially dangerous terrain (i.e., hilly and bar-strewn). We wanted to earn the calories and arrive motivated.

With a methodology and test suite developed, we set out for our first objective, Taqueria San Francisco, appropriately enough. We got lucky on our hike from Docker World HQ in that SF’s ubiquitous summer fog burned off enough to glimpse the sun and let us enjoy some of the City’s architecture.

Traversing the Metreon

After three miles or so, and a few stops to bicker about the best route to take, we arrived in the Mission. The taqueria has a lovely mural on the outside wall, and the inside was clean and bright and not at all new. There were a few construction workers and other neighborhood folk waiting in a short line when we got there, but it moved quickly. In our zeal to spread our test coverage widely, we chose a number of different burritos, and some guacamole as an outlier.

Surprisingly, for a group of opinionated tech nerds, we came to a ready consensus: good, but not great, scoring a solid 7.07 out of 10 awesomeness points. Highlights included the chile relleno burrito (“My god, this is epic” sighed the delighted tester) and the rice, which had a lovely smoky flavor. Downsides were a greasy al pastor with off-flavors and a lengua with way too much meat. I mean, tongue is delicious, but there is such a thing as too much tongue. Service was fast and not overly friendly (perfect for nerdy introverts). The burritos were large and filling and an excellent bargain for this pricey town.

Taqueria San Francisco

Overall, it was a solid start to our Quest: a good hike and some good food. The team is looking forward to the next challenge. But where should we go? Please help us out and leave us a comment below with your favorite SF burrito joint. We’ll report back. In the meantime, look for a GitHub repo with our test suite soon.


Note: after the composition of this post, I became aware of Nate Silver and 538’s recent work on burrito testing. We’ll be addressing that in a future post.

Docker Closes $40M Series C Led by Sequoia

Today is a great day for the Docker team and the whole Docker ecosystem.

We are pleased to announce that Docker has closed a $40M Series C funding round led by Sequoia Capital.  In addition to giving us significant financial resources, Docker now has the insights and support of a board that includes Benchmark, Greylock, Sequoia, Trinity, and Jerry Yang.

This puts us in a great position to invest aggressively in the future of distributed applications. We’ll be able to significantly expand and build the Docker platform and our ecosystem of developers, contributors, and partners, while developing a broader set of solutions for enterprise users. We are also very fortunate that we’ll be gaining the counsel of Bill Coughran, who was the SVP of Engineering at Google for eight years prior to joining Sequoia, and who helped spearhead the extensive adoption of container-based technologies in Google’s infrastructure.

While the size, composition, and valuation of the round are great, they are really a lagging indicator of the amazing work done by the Docker team and community. They demonstrate the amazing impact our open source project is having. Our user community has grown exponentially into the millions and we have a constantly expanding network of contributors, partners, and adopters. Search on GitHub, and you’ll now find over 13,000 projects with “Docker” in the title.

Docker’s 600 open source contributors can be proud that the Docker platform’s imprint has been so profound, so quickly.  Before Docker, containers were viewed as an infrastructure-centric technology that was difficult to implement and remained largely in the purview of web-scale companies.  Today, the Docker community has built that low-level technology into the basis of a whole new way to build, ship, and run applications.

Looking forward over the next 18 months, we’ll see another Docker-led transformation, this one aimed at the heart of application architecture.  This transformation will be a shift from slow-to-evolve, monolithic applications to dynamic, distributed ones.



As we see it, apps will increasingly be composed of multiple Dockerized components, capable of being deployed as a logical Docker unit across any combination of servers, clusters, or data centers.



We’ve already seen large-scale web companies (such as Gilt, eBay, New Relic, Spotify, Yandex, and Baidu) weaving this new flexibility into the fabric of their application teams. At Gilt, for example, Docker functions as a tool of organizational empowerment, allowing small teams to own discrete services and ship innovations to production over 100 times a day. Similar initiatives are also underway in more traditional enterprise environments, including many of the largest financial institutions and government agencies.

This movement towards distributed applications is evident when we look at the activity within Docker Hub Registry, where developers can actively share and collaborate on Dockerized components.  In the three months since its launch, the registry has grown beyond 35,000 Dockerized applications, forming the basis for rapid and flexible composition of distributed applications leveraging a large library of stable, pre-built base images.

Future of Distributed Apps: 6 Easy Steps

(Diagram: the six steps toward fully distributed applications)


The past 18 months have been largely about creating an interoperable, consistent format around containers, and building an ecosystem of users, tools, platforms, and applications to support that format (steps 2-4 in the diagram above). Over the next year, you’ll see that effort continue, as we put the proceeds of this round to use driving advances in multiple areas to fully support multi-Docker container applications (steps 5 and 6 in the diagram above). Look for significant advances in orchestration, clustering, scheduling, storage, and networking.  You’ll also see continued advances in the overall Docker platform, both Docker Hub and Docker Engine.


The work and feedback we’ve gotten from our customers as they evolve through these Docker-led transformations has profoundly influenced how Docker itself has evolved. We are deeply grateful for those contributions.

The journey we’ve undertaken with our community over the past 18 months has been humbling and thrilling. We are excited and energized for what’s coming next.

