Victor Coisne

Announcing the 2nd batch of DockerCon 15 speakers!


In case you missed it, last week we announced the first speakers for DockerCon. Today, we’re excited to share with you the 2nd batch of speakers selected by the DockerCon Community Review Committee. Once again, we’d like to thank everyone who took the time to both submit and review the proposals. Before we dive into the speaker lineup and abstracts, here are a few numbers to put things into perspective:

  • 2 days of conference + 1 day of training
  • 1 afterparty
  • 24-hour hackathon
  • 2000 attendees
  • 6 breakout tracks
  • 3h40 of general sessions
  • 9h of breakout sessions
  • 45 sponsors (and counting)

Vincent Batts – Software Engineer

Red Hat



Contribute and Collaborate 101

Gain inspiration and confidence to contribute in a mutually beneficial way. Become more than just a consumer of the ecosystem: develop the project yourself and benefit from your own initiative. Whether you are looking for enterprise-ready solutions, want to make development life easier, or would like to see certain new features, contributing to the greater community in a public spirit ensures the continued growth and health of the Docker project. Through personal stories of acceptance and concession, I will share practical tips and lessons learned as a regular open source contributor and particularly involved Docker collaborator.


Brendan Burns – Software Engineer




The distributed system toolkit: Container patterns for modular distributed system design

People often adopt containers for the dramatic improvements in application packaging and deployment that they provide. Possibly more important, however, is the abstraction layer that containers provide. By encouraging users to build their distributed applications from containerized modules rather than monolithic systems, containers help developers build composable, reusable distributed applications. In this talk, we will explore the development of abstract application patterns for distributed systems and introduce a set of reusable, composable containers that radically simplify distributed application design and construction. The talk also covers technologies and patterns for engineering a service discovery and routing system that can withstand the hard test of production.
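To give a flavor of what a composable container pattern looks like, here is a hypothetical sketch of the well-known sidecar arrangement in the Docker Compose format of the time (image names and paths are illustrative, not taken from the talk): an application container writes logs into a volume, and a separate, reusable log-shipping container mounts the same volume, keeping each image single-purpose.

```yaml
# Hypothetical sidecar sketch (illustrative images and paths).
# The app writes logs into a volume; the sidecar mounts the same
# volume via volumes_from and ships the logs elsewhere.
app:
  image: example/web-app          # illustrative image name
  volumes:
    - /var/log/app
log-shipper:
  image: example/log-shipper      # illustrative image name
  volumes_from:
    - app
```

Because the log shipper knows nothing about the app beyond the shared directory, the same sidecar image can be reused next to any service.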


Sarah Novotny – Evangelist




Interconnecting containers at scale with NGINX

Or: how NGINX can act as your stevedore, properly routing and accelerating HTTP and TCP traffic to pods of containers across a globally distributed environment. NGINX can manage and route traffic across your distributed microservices architecture, offering a seamless interface to your customers and giving you granular control over backend service scaling and versioning. Add in some caching and load balancing, and the efficiencies of an application delivery platform become apparent.
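As a minimal, hypothetical sketch of the idea (upstream name, addresses, and ports are illustrative, not from the talk), an NGINX `upstream` block can load-balance HTTP traffic across a pod of containers:

```nginx
# Hypothetical config: route and load-balance requests across
# three containers backing a single microservice.
upstream app_pod {
    server 10.0.0.11:8080;   # container 1 (illustrative addresses)
    server 10.0.0.12:8080;   # container 2
    server 10.0.0.13:8080;   # container 3
}

server {
    listen 80;
    location / {
        proxy_pass http://app_pod;   # proxy traffic to the pod
    }
}
```

Scaling the backend then becomes a matter of adding or removing `server` entries, while clients keep talking to one stable front-end address.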


Jeff Valeo – Site Reliability Engineer

GrubHub, Inc




Docker – Enabling Continuous (Food) Delivery

The merger of the two biggest restaurant delivery companies, Seamless and GrubHub, set the stage for a rethink of how we write, deliver, and maintain our services. Early on (in 2014), we made the decision to use Docker to help enable continuous delivery. We’ve incorporated Docker into our CI platform not only for packaging our Java services but also for packaging our Gatling-based tests into consistent, easily deployable units. We’ve built our entire pipeline around Docker, which allows our teams to automatically deploy to our environments over 100 times a day. Our talk will focus on how Docker makes this not only possible but easy. We’ll go over the pipeline we’ve built, some lessons learned, and our plans to expand this system.
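A pipeline like this typically starts with each service describing its own runtime in a Dockerfile, so CI can build one consistent artifact for every environment. As a hypothetical sketch for a Java service (base image, paths, and port are illustrative, not GrubHub’s actual setup):

```dockerfile
# Hypothetical Dockerfile for packaging a Java service as a
# consistent, deployable unit (names and paths illustrative).
FROM java:8-jre
COPY target/service.jar /opt/service/service.jar
EXPOSE 8080
CMD ["java", "-jar", "/opt/service/service.jar"]
```

The same approach applies to test harnesses: package the tests as their own image, and the CI system can run them against any environment the same way it deploys services.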


Eric Feliksik – Director of Cloud Orchestration




Using Docker to Keep Houses Warm

Would you believe me if I told you Docker containers were keeping Dutch houses warm this spring? Nerdalize is a Dutch start-up that has developed an innovative approach to heating homes. We have built a home heater that runs four computing servers and leverages the heat generated by the processors to warm homes. The houses are connected by fiber-optic internet connections and together make up a massive distributed computing system, which Nerdalize uses as a data processing platform for our customers around the world.

Each of our heaters runs Docker to isolate workloads and allow customers to package their code. In this talk, I will provide a detailed overview of Nerdalize and how we are using Docker, Rancher and other tools to change the environmental impact of the computing industry.


Santosh Bardwaj – Senior Director, Technology

Capital One



Analytic garage on Docker

Capital One’s data analysts have traditionally used leading analytic tools to prototype new insights and build statistical models. To improve analyst productivity and innovation, Capital One has embarked on a reinvention of its data technology stack by deploying a Big Data Hub consisting of a central Hadoop data lake and a large suite of open source tools and software packages.

The platform & engineering team had to come up with a solution to enable fast prototyping of tools, isolate the workload in a contained environment and integrate it into a self-service portal. After evaluating different options, we chose Docker to build an ‘Analytic garage’ for the enterprise.

We’ll walk through some of the challenges we faced and the techniques we used to integrate a wide variety of technologies into a single Docker container, along with access management, security, and auditing. As we expand the user base within the organization, we’ll also share our plans to take innovations from the garage to a production-ready Docker analytic platform.


Diptanu Gon Choudhury – Distributed Systems and Infrastructure Engineer




Reliably shipping containers in a resource-rich world using Titan

Netflix has a complex microservices architecture that is operated in an active-active manner from multiple geographies on top of AWS. Amazon gives us the flexibility to tap into massive amounts of resources, but how we use and manage those is a constantly evolving and ever-growing task. We have developed Titan to make cluster management, application deployments using Docker and process supervision much more robust and efficient in terms of CPU/memory utilization across all of our servers in different geographies.

Titan, a combination of Docker and Apache Mesos, is an application infrastructure that gives us a highly resilient and dynamic PaaS, native to public clouds and running across multiple geographies. It makes it easy for us to manage applications in our complex infrastructure and gives us the ability to make changes in the IaaS layer without impacting developer productivity or sacrificing insight into our production infrastructure.

Want to get involved and help at DockerCon? We have exciting opportunities for Docker community volunteers! Send an email to to find out more.

We invite you to follow the official Twitter account @DockerCon and use the hashtag #dockercon to get the latest updates.

Stay tuned for more announcements!


– The DockerCon Team

Learn More about DockerCon

Learn More about Docker
