Category: Latest Posts

Report: Burrito Quest I

At Docker, we are lucky to be able to spend time exploring San Francisco, one of the world’s great cities in terms of culture, architecture and, of course, burritos. Forget about crabs or sourdough, what San Francisco does best is the burrito, that noble combination of beans, meat, cheese, salsa, and love, all in a convenient wrapper that lets you eat it one-handed. And, like the City itself, the burrito is incredibly diverse. Do you prefer black beans or pintos? Are you a carnivore who craves the al pastor and the carne asada, or do you seek out the elusive perfect chile relleno burrito (the turducken of Mexico)?

So many options, so many questions. As an engineer-driven company, we needed to know the optimal solution. We had to know where to find the City’s finest burrito.

And so it came to be that Burrito Quest was born. We decided that once a month we would walk to another potential purveyor of the perfect burrito. In order to build a comprehensive test harness, we decided that each user would pursue their own story, be it a simple pollo, a bold lengua, or even a chile relleno. I myself went with a baseline carne asada. Based on a wide range of criteria—texture, flavor, distribution of ingredients, structural integrity, value—we would assign a 1-10 value to our dining experience.

In order to make it a proper Quest, it was determined that we had to trek to our burritos, even if this meant hiking an hour or more through potentially dangerous terrain (i.e., hilly and bar-strewn). We wanted to earn the calories and arrive motivated.

With a methodology and test suite developed, we set out for our first objective, Taqueria San Francisco, appropriately enough. We got lucky on our hike from Docker World HQ in that SF’s ubiquitous summer fog burned off enough to glimpse the sun and let us enjoy some of the City’s architecture.

Traversing the Metreon

After three miles or so, and a few stops to bicker about the best route to take, we arrived in the Mission. The taqueria has a lovely mural on the outside wall, and the inside was clean and bright and not at all new. There were a few construction workers and other neighborhood folk waiting in a short line when we got there, but it moved quickly. In our zeal to spread our test coverage widely, we chose a number of different burritos, and some guacamole, as an outlier.

Surprisingly, for a group of opinionated tech nerds, we came to a ready consensus: good, but not great, scoring a solid 7.07 out of 10 awesomeness points. Highlights included the chile relleno burrito (“My god, this is epic” sighed the delighted tester) and the rice, which had a lovely smoky flavor. Downsides were a greasy al pastor with off-flavors and a lengua with way too much meat. I mean, tongue is delicious, but there is such a thing as too much tongue. Service was fast and not overly friendly (perfect for nerdy introverts). The burritos were large and filling and an excellent bargain for this pricy town.

Taqueria San Francisco

Overall, it was a solid start to our Quest: a good hike and some good food. The team is looking forward to the next challenge. But where should we go? Please help us out and leave us a comment below with your favorite SF burrito joint. We’ll report back. In the meantime, look for a GitHub repo with our test suite soon.

 

Note: after the composition of this post, I became aware of Nate Silver and 538’s recent work on burrito testing. We’ll be addressing that in a future post.

Docker Closes $40M Series C Led by Sequoia

Today is a great day for the Docker team and the whole Docker ecosystem.

We are pleased to announce that Docker has closed a $40M Series C funding round led by Sequoia Capital.  In addition to giving us significant financial resources, Docker now has the insights and support of a board that includes Benchmark, Greylock, Sequoia, Trinity, and Jerry Yang.

This puts us in a great position to invest aggressively in the future of distributed applications. We’ll be able to significantly expand and build the Docker platform and our ecosystem of developers, contributors, and partners, while developing a broader set of solutions for enterprise users. We are also very fortunate that we’ll be gaining the counsel of Bill Coughran, who was the SVP of Engineering at Google for eight years prior to joining Sequoia, and who helped spearhead the extensive adoption of container-based technologies in Google’s infrastructure.

While the size, composition, and valuation of the round are great, they are really a lagging indicator of the amazing work done by the Docker team and community. They demonstrate the amazing impact our open source project is having. Our user community has grown exponentially into the millions and we have a constantly expanding network of contributors, partners, and adopters. Search on GitHub, and you’ll now find over 13,000 projects with “Docker” in the title.

Docker’s 600 open source contributors can be proud that the Docker platform’s imprint has been so profound, so quickly.  Before Docker, containers were viewed as an infrastructure-centric technology that was difficult to implement and remained largely in the purview of web-scale companies.  Today, the Docker community has built that low-level technology into the basis of a whole new way to build, ship, and run applications.

Looking forward over the next 18 months, we’ll see another Docker-led transformation, this one aimed at the heart of application architecture.  This transformation will be a shift from slow-to-evolve, monolithic applications to dynamic, distributed ones.


SHIFT IN APPLICATIONS

As we see it, apps will increasingly be composed of multiple Dockerized components, capable of being deployed as a logical Docker unit across any combination of servers, clusters, or data centers.


DISTRIBUTED, DOCKERIZED APPS

We’ve already seen large-scale web companies (such as Gilt, eBay, New Relic, Spotify, Yandex, and Baidu) weaving this new flexibility into the fabric of their application teams. At Gilt, for example, Docker functions as a tool of organizational empowerment, allowing small teams to own discrete services and ship innovations to production over 100 times a day. Similar initiatives are also underway in more traditional enterprise environments, including many of the largest financial institutions and government agencies.

This movement towards distributed applications is evident when we look at the activity within Docker Hub Registry, where developers can actively share and collaborate on Dockerized components.  In the three months since its launch, the registry has grown beyond 35,000 Dockerized applications, forming the basis for rapid and flexible composition of distributed applications leveraging a large library of stable, pre-built base images.

Future of Distributed Apps: 6 Easy Steps


 

The past 18 months have been largely about creating an interoperable, consistent format around containers, and building an ecosystem of users, tools, platforms, and applications to support that format (steps 2-4 in the diagram above). Over the next year, you’ll see that effort continue, as we put the proceeds of this round to use in driving advances in multiple areas to fully support multi-Docker container applications (steps 5 and 6 in the diagram above). Look for significant advances in orchestration, clustering, scheduling, storage, and networking.  You’ll also see continued advances in the overall Docker platform, both Docker Hub and Docker Engine.


The work and feedback we’ve gotten from our customers as they evolve through these Docker-led transformations has profoundly influenced how Docker itself has evolved. We are deeply grateful for those contributions.

The journey we’ve undertaken with our community over the past 18 months has been humbling and thrilling. We are excited and energized for what’s coming next.

 

Read more on the news


DockerCon video: Docker deployments at New Relic

In this session, Paul Showalter & Karl Matthias from New Relic discuss how they successfully leveraged Docker to build consistent, isolated, custom distributed environments under centralized control, making their continuous deployment processes easy and scalable.

 

Learn More

Docker Events and Meetup

Try Docker and stay up-to-date

Only Two Weeks Left to Sign Up For The First Public Docker Training


Just a quick reminder:  as of today you have only two weeks to sign up for the first public Docker training class in San Francisco!  Here’s your chance to rapidly get up to speed on Docker’s container technology with plenty of first-hand attention.  The class, held in a small, intimate setting in downtown San Francisco, will be led by me and the legendary Jérôme Petazzoni.  We are both Solutions Engineers at Docker, Inc. with strong backgrounds in development and operations.  The training will be held September 17th and 18th and will cover a wide range of topics, from fundamentals to best practices to orchestration and beyond.

Click here to reserve your spot today!

Want to learn more and stay up-to-date?

Disclosure of Authorization-Bypass on the Docker Hub

Following the postmortem of a previous vulnerability announced on June 30th, the Docker team conducted a thorough audit of the platform code base and hired an outside consultancy to investigate the security of the Docker Registry and the Docker Hub. On the morning of 8/22 (all times PST), the security firm contacted our Security Team:

8/22 – Morning: Our Security Team was contacted regarding vulnerabilities that could be exploited to allow an attacker to bypass authorization constraints and modify container image tags stored on the Docker Hub Registry. Even though the reporting firm was unable to immediately provide a working proof of concept, our Security Team began to investigate.

8/22 – Afternoon: Our team confirms the vulnerabilities and begins preparing a fix.

8/22 – Evening: We roll out a hotfix release to production. Additional penetration tests are performed to assure resolution of these new vulnerabilities. Later, it is discovered this release introduced a regression preventing some authorized users from pulling their own private images.

8/23 – Morning: A new hotfix is deployed to production, addressing the regression and all known security issues. Our Security Team runs another set of penetration tests against the platform and confirms all issues have been resolved.

Follow-up & Postmortem:

We have begun an internal postmortem process to improve our development and security processes. So far, we have established the following:

  • We have performed an audit of the repositories stored on the Docker Hub Registry to verify whether any known exploits have been used in the wild. We have not found any indication of exploitation, or of repositories being modified via authorization bypass.
  • We have established an agreement with the outside security firm to audit every major release of the platform.
  • We will implement an automated suite of functional security tests. These will be established in addition to existing unit and integration tests.

Finally:

Our contributors have been hard at work making Docker better and better with each release, including important security improvements such as the addition of granular Linux capabilities management with the release of Docker 1.2. Likewise, since establishing our security and responsible disclosure policy, we have seen substantial interest by researchers in contributing to the improvement of Docker.

If you discover any issues in Docker or the Hub, we encourage you to do the same by contacting security@docker.com.

Docker & VMware: 1 + 1 = 3

Today at VMworld we’re excited to announce a broad partnership with VMware.  The objective is to provide enterprise IT customers with joint solutions that combine the application lifecycle speed and environment interoperability of the Docker platform with the security, reliability, and management of VMware infrastructure.  To deliver this “better together” solution to customers, Docker and VMware are collaborating on a wide range of product, sales, and marketing initiatives.

Why join forces now?  In its first 12 months Docker usage rapidly spread among startups and early adopters who valued the platform’s ability to separate the concerns of application development and management from those of infrastructure provisioning, configuration, and operations.  Docker gave these early users a new, faster way to build distributed apps as well as a “write once, run anywhere” choice of deployment from laptops to bare metal to VMs to private and public clouds.  These benefits have been widely welcomed and embraced, as reflected in some of our adoption metrics:

  • 13 million downloads of the Docker Engine
  • 30,000 “Dockerized” applications on Docker Hub
  • 14,000 stars on GitHub
  • 570 contributors

In its second year, Docker usage continues to spread and is now experiencing mass adoption by enterprise IT organizations.  These organizations span a wide range of industry verticals including finance, life sciences, media, and government. By leveraging the Docker platform, ecosystem, and the more than 30,000 “Dockerized” apps on Docker Hub, these enterprise IT organizations are radically reducing the time from development to deployment, in most cases from weeks to minutes.  In addition to pipeline acceleration, they get the flexibility and choice to run these apps unchanged across developer laptops, data center VMs, bare metal servers, and private and public clouds.

Not surprisingly, Docker’s enterprise IT customers have been making significant investments in VMware infrastructure for years across their application lifecycle environments, from developer laptops to QA servers to production data centers.  They’ve come to trust and rely on its reliability, security, and quality.  Through this partnership, they can now realize the agility and choice benefits of Docker on top of the VMware infrastructure they know and trust.

Better Together

The partnership spans a wide range of product, sales, and marketing initiatives, and today we’re excited to share early details with the Docker community.

  • Docker-on-VMware.  The companies are working together to ensure that the Docker Engine runs as a first-class citizen on developer workstations using VMware Fusion, data center servers with VMware vSphere, and vCloud Air, VMware’s public cloud.
  • Contributing to Community’s Core Technologies.  To support the joint product initiatives, VMware and Docker will collaborate on the Docker community’s core technology standards, in particular libcontainer and libswarm, the community’s orchestration interoperability technology.
  • Interoperable Management Tooling.  So as to provide developers and sysadmins with consistent deployment and management experiences, the companies are collaborating on interoperability between Docker Hub and VMware’s management tools, including VMware vCloud Air, VMware vCenter Server, and VMware vCloud Automation Center.

In addition to the above product-related initiatives, you’ll start to see VMware introducing Docker to its users through its marketing and sales channels.  In parallel, Docker will begin introducing VMware to the Docker community.

There’s obviously a lot more to come from the Docker and VMware relationship, so today’s announcement is just the first step of what will be a fantastic journey.  Please join us in welcoming VMware to the Docker community and working together with them to spread the goodness of Docker to even more users and platforms.

Dockerize early and often,

– The Docker Team

Docker & VMware: VMworld Sessions

There are several VMworld sessions discussing Docker + VMware.  We look forward to seeing you!

Learn More

Your Docker agenda for VMworld 2014

Next week starts the gigantic VMworld conference at the Moscone Center in San Francisco, California. If you are attending the conference, come visit us at the Docker booth #230 and make sure to attend the following Docker-related talks, demos, discussions and meetups where you can meet and chat with fellow Dockerites:


Monday, August 25th:

3:30 PM – 4:30 PM, Moscone West, Room 2014

VMware NSX for Docker, Containers & Mesos by Aaron Rosen (Staff Engineer, VMware) and Somik Behera (NSX Product Manager, VMware)

This session will provide a recipe for architecting massively elastic applications, be it big data applications or developer environments such as Jenkins on top of VMware SDDC Infrastructure. We will describe the use of app isolation technologies such as LxC & Docker together with Resource Managers such as Apache Mesos & Yarn to deliver an Open Elastic Applications & PaaS for mainstream apps such as Jenkins as well as specialized big data applications. We will cover a customer case study that leverages VMware SDDC to create an Open Elastic PaaS leveraging VMware NSX for Data communication fabric.

 

5:30 PM – 6:30 PM, Moscone West, Room 2006

VMware and Docker – Better Together by Ben Golub (CEO, Docker, Inc) and Chris Wolf (VP & Americas CTO, VMware)

Attend this session to gain deep insights into the VMware and Docker collective strategy. As technology evolves, use cases will abound for VMs, containers, and combinations of each. Key elements of the Docker platform – Docker Engine and Docker Hub – are explored, along with specific vCloud Suite integrations. Attendees will leave this session with knowledge of highly differentiated VMware and Docker integration points that provide leading flexibility, performance, security, scalability, and management capabilities. Ample time for Q&A is provided to have your most pressing questions answered.

This breakout session will begin with an overview of the key elements of the Docker platform, Docker Engine and Docker Hub.  We review the similarities and differences between Docker and VMware and illustrate through use cases.  We will then discuss how to use Docker and VMware together to take advantage of both technologies’ strengths and demo the lifecycle of a simple application.  We conclude with an overview of the product roadmaps for both.

 

Tuesday, August 26th:

9:00 AM – 10:00 AM, Online

Docker Online Meetup #5: Docker and VMware – Better Together by Aaron Huslage (Solution Architect, Docker, Inc)

This webinar will cover the key elements of the Docker platform, Docker Engine and Docker Hub. We will then discuss the similarities and differences between Docker and VMware as well as the advantages of using them together. Presentation will be followed by a Q&A session. Register on our meetup page.

 

12:30 PM – 1:30 PM, Marriott, Yerba Buena Level, Salon 6

VMware and Docker – Better Together by Ben Golub (CEO, Docker, Inc) and Chris Wolf (VP & Americas CTO, VMware)

Attend this session to gain deep insights into the VMware and Docker collective strategy. As technology evolves, use cases will abound for VMs, containers, and combinations of each. Key elements of the Docker platform – Docker Engine and Docker Hub – are explored, along with specific vCloud Suite integrations. Attendees will leave this session with knowledge of highly differentiated VMware and Docker integration points that provide leading flexibility, performance, security, scalability, and management capabilities. Ample time for Q&A is provided to have your most pressing questions answered.

This breakout session will begin with an overview of the key elements of the Docker platform, Docker Engine and Docker Hub.  We review the similarities and differences between Docker and VMware and illustrate through use cases.  We will then discuss how to use Docker and VMware together to take advantage of both technologies’ strengths and demo the lifecycle of a simple application.  We conclude with an overview of the product roadmaps for both.

 

3:30 PM – 4:30 PM, Moscone West, Room 3003

DevOps Demystified! Proven Architectures to Support DevOps Initiatives by Ryan Shondell (Director, Solutions Architecture, VMware) and Aaron Sweemer (Principal Systems Engineer, VMware)

DevOps is the most demanded use-case architecture by VMware customers. Numerous VMware engineers conducted and reviewed a field validated DevOps architecture and best practice methodology in early 2014. This session highlights key findings from the VMware field exercise and provides highly detailed architecture diagrams and a step-by-step methodology for supporting the DevOps initiatives through the vCloud Suite and open standards such as OpenStack. Attendees will leave the session with detailed integrations for common DevOps tools and everything needed to fully support DevOps initiatives using VMware technologies.

 

5:00 PM – 6:00 PM, Marriott, Yerba Buena Level, Salon 6

VMware NSX for Docker, Containers & Mesos by Aaron Rosen (Staff Engineer, VMware) and Somik Behera (NSX Product Manager, VMware)

This session will provide a recipe for architecting massively elastic applications, be it big data applications or developer environments such as Jenkins on top of VMware SDDC Infrastructure. We will describe the use of app isolation technologies such as LxC & Docker together with Resource Managers such as Apache Mesos & Yarn to deliver an Open Elastic Applications & PaaS for mainstream apps such as Jenkins as well as specialized big data applications. We will cover a customer case study that leverages VMware SDDC to create an Open Elastic PaaS leveraging VMware NSX for Data communication fabric.

 

5:30 PM – 6:30 PM, Moscone West, Room 3007

A DevOps Story: Unlocking the Power of Docker with VMware platform and its ecosystem by George Hicken (Staff Engineer, VMware) and Aaron Sweemer (Principal Systems Engineer, VMware)

Docker is creating quite a bit of industry buzz right now.  Many of us in IT are starting to get questions from our developers about Docker.  We are starting to see discussions in social media around Docker, and potential integrations with management tools like vCAC.  And many of us are starting to ponder deeper architectural questions, and how deployment models and management paradigms might change with containerization in the mix.  In this session we plan to discuss how to best integrate Docker with the VMware platform, and we will demonstrate that it is in the combination of Docker and VMware that a true “better together” Enterprise grade DevOps solution actually emerges.

 

6:42 PM – 7:30 PM, Moscone Center South (747 Howard St) near the stairs (see map on our meetup page)

Docker PUSH & RUN (5K) special VMworld – with the Docker Team

Meet the Docker team during a fun ~5K RUN around San Francisco. After the run, you are welcome to chat and have a drink with us at the Docker HQ. More details on the meetup page.

 

Wednesday, August 27th:

3:30 PM – 4:30 PM, Moscone West, Room 2014

DevOps Demystified! Proven Architectures to Support DevOps Initiatives by Ryan Shondell (Director, Solutions Architecture, VMware) and Aaron Sweemer (Principal Systems Engineer, VMware)

DevOps is the most demanded use-case architecture by VMware customers. Numerous VMware engineers conducted and reviewed a field validated DevOps architecture and best practice methodology in early 2014. This session highlights key findings from the VMware field exercise and provides highly detailed architecture diagrams and a step-by-step methodology for supporting the DevOps initiatives through the vCloud Suite and open standards such as OpenStack. Attendees will leave the session with detailed integrations for common DevOps tools and everything needed to fully support DevOps initiatives using VMware technologies.

 

Every day during the Conference

Come visit us at the Docker booth #230

We think you’ll find our talks stimulating and interesting, and hopefully they’ll answer some of your questions about the Docker platform and containerization. Let us know by stopping by the Docker booth #230. We hope to see you at the Conference!

Dockerize early and often,

– The Docker Team

Orchestrating Docker containers in production using Fig

In the last blog post about Fig we showed how you could define and run a multi-container app locally.

We’re now going to show you how you can deploy this app to production. Here’s a screencast of the whole process:

Let’s continue from where we left off in the last blog post. First, we want to put the code we wrote up onto GitHub. You’ll need to initialize and commit your code into a new Git repository.

$ git init
$ git add .
$ git commit -m "Initial commit"

Then create a new repository on GitHub and follow the instructions for adding it as a remote to your local Git repository. For example, if your repository were called bfirsh/figdemo, you’d run these commands:

$ git remote add origin git@github.com:bfirsh/figdemo.git
$ git push -u origin master

Next, you’ll need to get yourself a server to host your app. Any cloud provider will work, so long as it is running Ubuntu and available on a public IP address.

Log on to your server using SSH and follow the instructions for installing Docker and Fig on Ubuntu.

$ ssh root@[your server’s IP address]
# curl -sSL https://get.docker.io/ubuntu/ | sudo sh
# curl -L https://github.com/docker/fig/releases/download/0.5.2/linux > /usr/local/bin/fig
# chmod +x /usr/local/bin/fig

Now you’ll want to clone your GitHub repository to your server. You can find the clone URL on the right hand side of your repository page. For example:

# git clone https://github.com/bfirsh/figdemo.git
# cd figdemo

With your code now on the server, run fig up in daemon mode to start your app:

# fig up -d

That will pull the redis image from Docker Hub, build the image for your web service that is defined in Dockerfile, then start up the redis and web containers and link them together. If you go to http://[your server’s IP address]:5000 in your browser, you will see that your app is now running on your server.
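As a reminder, the fig.yml from the previous post defined those two services. A minimal sketch of what it might look like (the service names and port mapping here are assumptions based on this post’s examples):

```yaml
web:
  build: .          # build the web image from the Dockerfile in this directory
  ports:
    - "5000:5000"   # expose the app on port 5000
  links:
    - redis         # link the web container to the redis container
redis:
  image: redis      # pull the stock redis image from Docker Hub
```

When you run fig up -d, Fig reads this file, builds or pulls each image as needed, and starts the containers in dependency order.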

Deploying new code

Let’s deploy new code to our server. Make a change to the message in app.py on your local machine, and check that the change is correct by running fig up and opening your local development environment from the previous blog post in your browser.

If the change looks good, commit it to Git:

$ git commit -m "Update message" app.py
$ git push

Then, on your server, pull the changes down:

# git pull

You then need to build a new Docker image with these changes in them and recreate the containers with fig up:

# fig build
# fig up -d

You should now see the changes reflected on http://[your server’s IP address]:5000! One thing to note is that it has remembered how many times you have viewed the page. This is because the data stored in Redis is persisted in a Docker volume.

Next steps

That’s the basics of deploying an app to production using Docker. If you want to do more complex setups, you can create a separate fig.yml for your production environment, e.g. fig-production.yml, and tell Fig to use this file when running fig up:

$ fig -f fig-production.yml up -d

If you’re using a separate file for production, this will let you do things like:

  • Expose your web app on port 80 by replacing 5000:5000 with 80:5000 in your ports definition.
  • Remove the volumes statement for injecting code into your container. This exists so code can update immediately in your development environment, but is unnecessary in production when you are building images.
  • Use the Docker Hub to ship code to your server as an image. If you can set up an automated build on Docker Hub to build an image from your code, you could replace the build statement in your web service with an image that points to that repository.
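Putting those three changes together, a fig-production.yml might look something like this (a sketch under the assumptions above; the image name is a hypothetical automated build on Docker Hub):

```yaml
web:
  image: bfirsh/figdemo   # hypothetical automated build, replacing build: .
  ports:
    - "80:5000"           # serve the app on the standard HTTP port
  links:
    - redis
redis:
  image: redis
```

Note there is no volumes statement here: in production the code is baked into the image rather than injected from the host.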

Those are just some ideas – we’d love to hear of other things you have come up with in the comments.

Learn More

Announcing Docker 1.2.0

The hardworking folk at Docker, Inc. are proud to announce the release of version 1.2.0 of Docker. We’ve made improvements throughout the Docker platform, including updates to Docker Engine, Docker Hub, and our documentation.

1.2.0

Highlights include these new features:

restart policies

We added a --restart flag to docker run to specify a restart policy for your container. Currently, there are three policies available:

  • no – Do not restart the container if it dies. (default)
  • on-failure – Restart the container if it exits with a non-zero exit code.
    • Can also accept an optional maximum restart count (e.g. on-failure:5).
  • always – Always restart the container no matter what exit code is returned.

This deprecates the --restart flag on the Docker daemon.

A few examples:

  • Redis will endlessly try to restart if the container exits:
    docker run --restart=always redis
  • If Redis exits with a non-zero exit code, it will try to restart 5 times before giving up:
    docker run --restart=on-failure:5 redis

--cap-add / --cap-drop

Currently, Docker containers can either be given complete privileges, or they can follow a whitelist of allowed capabilities while dropping all others. Previously, using --privileged would grant all capabilities inside a container rather than applying a whitelist. This was not recommended for production use because it is unsafe: it is as if you were running directly on the host.

This release introduces two new flags for docker run, --cap-add and --cap-drop, that give you fine-grained control over the capabilities you want to grant to a particular container.

A few examples:

  • To change the status of the container’s interfaces:
    docker run --cap-add=NET_ADMIN ubuntu sh -c "ip link set eth0 down"
  • To prevent any `chown` in the container:
    docker run --cap-drop=CHOWN ...
  • To allow all capabilities except `mknod`:
    docker run --cap-add=ALL --cap-drop=MKNOD ...

--device

Previously, you could use devices inside your containers by bind mounting them (with `-v`) in a --privileged container. In this release, we introduce the --device flag to `docker run`, which lets you use a device without requiring a --privileged container.

Example:

  • To use the sound card inside your container:
    docker run --device=/dev/snd:/dev/snd ...

Writable `/etc/hosts`, `/etc/hostname` and `/etc/resolv.conf`

You can now edit /etc/hosts, /etc/hostname and /etc/resolv.conf in a running container. This is useful if you need to install BIND or other services that might override one of those files.

Note, however, that changes to these files are not saved during a docker build and so will not be preserved in the resulting image. The changes will only “stick” in a running container.

Docker proxy in a separate process

The Docker userland proxy that routes outbound traffic to your containers now has its own separate process (one process per connection). This greatly reduces the load on the daemon, which considerably increases stability and efficiency.

Other Improvements & Changes

  • When using docker rm -f, Docker now kills the container (instead of stopping it) before removing it. If you intend to stop the container cleanly, use docker stop.
  • Add support for IPv6 addresses in --dns
  • Search on private registries

We hope you enjoy this release and find it useful. As always, please don’t hesitate to contact us with questions, comments or kudos.

Learn More

Announcing DockerCon Europe 2014


Today we are very happy to announce DockerCon Europe 2014, the first official Docker conference organized in Europe, by both Docker, Inc. and members of the community. The conference will take place in Amsterdam, at the NEMO science center, December 4th and 5th.


We will also have a full day of training prior to the conference, led by Jérôme Petazzoni, on December 3rd.

The official website is still under construction as we are finalizing the last details, but today we can announce that the Docker team will be present, along with incredible speakers from the Docker community.

Call for papers opens today; you can submit your talk here. If you are interested in our sponsorship options, please contact us at dockercon-sponsor-eu@docker.com.

We also want to give a special thanks to Pini Reznik, Harm Boertien, Mark Coleman, Maarten Dirkse, and the Docker Amsterdam community, who are working with us to bring the best of Docker to Europe.

Save the dates and stay tuned for more announcements!