Victor Coisne

Open Forum Track at DockerCon 2016 Includes Curated and Open BoF Sessions and Panels!

Ready for another new addition to DockerCon this year?

The Open Forum track is brand new to this year’s conference agenda! This room is our unique hybrid of Birds-of-a-Feather sessions and interactive panel discussions. The goal is a highly interactive, conversational room built around a set of guided topics. Be sure to stop in at some point during the conference and let us know what you think!



Day 1: Curated and Open BoFs

These sessions are intended to be interactive and collaborative: more whiteboarding, fewer slide-based talks.

From discussions about Docker in Open Science Data Analysis to demos of running a Docker Swarm cluster on ARM, we’ve selected a few lightning talks to get the conversations going on a broad range of topics. These curated BoFs give us the opportunity to accept a few more proposals from the CFP submissions.

We also have a number of open slots available for speakers to sign up for sessions! Folks who want to suggest a topic will list their discussion idea on a whiteboard, and DockerCon attendees can then check the whiteboard to see which BoF they’d like to join.

This is the space to connect with the community and learn more about specific topics through group discussions! Below is the list of curated sessions selected by the DockerCon Review Committee.


Using Docker for GPU-accelerated applications by Felix Abecassis


In addition to being used for visualization, the highly parallel architecture of GPUs also makes them a natural fit for accelerating data-parallel, throughput-oriented computations such as machine learning or numerical simulations. When GPU applications are deployed inside data centers, they suffer from the same packaging issues as CPU applications, aggravated by a strong need for reproducible performance results. The Docker ecosystem is mostly CPU-centric and aims to be hardware-agnostic. This is not the case for GPU applications, since specialized hardware and a specific kernel device driver are required. We will show how we reconciled those seemingly opposed requirements to enable containerization and execution of GPU applications with Docker.
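To give a rough sense of the problem the talk addresses (this is not the speaker’s actual tooling), a GPU can be exposed to a container by hand by mounting the NVIDIA device nodes and driver files; the device paths and image name below are assumptions for illustration:

```shell
# Hypothetical sketch: exposing an NVIDIA GPU to a container manually.
# Tooling like nvidia-docker automates discovering these device nodes
# and matching the mounted driver files to the host's kernel driver.
docker run --rm \
  --device=/dev/nvidiactl \
  --device=/dev/nvidia-uvm \
  --device=/dev/nvidia0 \
  --volume=/usr/local/nvidia:/usr/local/nvidia:ro \
  cuda-ml-app   # hypothetical CUDA-based image
```

The pain point is that the driver files mounted into the container must match the host’s kernel driver version exactly, which is what makes a hardware-agnostic image difficult.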


Using Containers and HPC to solve the mysteries of the Universe by Deborah Bard


Container technology is being used to answer some of the biggest questions in science today – what is the Universe made of? How has it evolved over time? Scientists use vast quantities of data to study these questions, and analyzing this data requires Big Data solutions on high performance computing resources. In this talk we discuss why containers are being deployed on the Cori supercomputer at NERSC (the National Energy Research Scientific Computing Center) to answer fundamental scientific questions. We will give examples of the use of Docker in simulating complex physical processes and analyzing experimental data in fields as diverse as particle physics, cosmology, astronomy, genomics and materials science. We will demonstrate how container technology is being used to facilitate access to scientific computing resources by scientists from around the globe. Finally, we will discuss how container technology has the potential to revolutionize scientific publishing, and could solve the problem of scientific reproducibility.


Understanding Containers through Gaming by Brendan Fosberry


Programming games and competitions can be a great way to introduce software development and motivate people to hone their skills. During the third Global Docker Hackday, we prototyped a simple game platform called “Docker Than Light”. Our goal was to create a fun and competitive exercise to help introduce people to the concept of stateful microservices in containers and build familiarity with the Docker toolset, all packaged as a competitive “Faster Than Light” style free-for-all.

In this talk we’ll discuss a more unusual use of containers: allowing game participants to utilize any programming language to build artificially intelligent actors in a distributed simulation. Docker Swarm and network plugins help distribute resources, maintain isolation and provide a realistic analogy for the different components in the system. We’ll talk about the design and orchestration of the simulation, as well as how the various actor languages and platforms were allowed to interact in a homogeneous way.


Tyrion Cannister neural styles by Dora Korpar and Siphan Bou


Understanding deep learning is a real challenge, and even getting started installing software on your machine is difficult. In creating our Docker “hack”, our goal was to try to make the deep learning algorithm Neural Style accessible to everyone by creating a user-friendly GUI that can be launched with one command and that optimizes the entire experience.


Building a Docker Swarm cluster on ARM by Dieter Reuter and Stefan Scherer


In this training you’ll learn how to build a physical Docker Swarm cluster with Raspberry Pis. We’ll guide you through the setup process, and you’ll learn how to use Docker and Docker Swarm to connect the cluster nodes. You’ll then learn how to build a distributed application and ship it as Docker containers to your cluster.

By the end, you will have built a portable datacenter that can be used for testing and live demos alike. Here is the outline for this talk:

  • Build a hardware cluster with Raspberry Pis
  • Install and set up HypriotOS, a Debian-based Linux system
  • Install Docker and Docker Swarm to connect all cluster nodes
  • Build a distributed application, the Docker voting app
  • Ship the app with Docker Compose to your cluster
  • Run and test your app
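As a taste of the shipping step, a Compose file for the voting app on ARM might look roughly like the sketch below; the image names are placeholders, not the actual images used in the session, since ARM requires images built specifically for that architecture:

```yaml
# Hypothetical docker-compose.yml sketch for a voting app on ARM.
# All image names below are placeholders: on Raspberry Pi, each
# service needs an ARM-built image rather than the usual x86 ones.
version: "2"
services:
  vote:
    image: example/arm-voting-app   # hypothetical ARM front-end image
    ports:
      - "80:80"
  redis:
    image: example/arm-redis        # hypothetical ARM Redis image
  worker:
    image: example/arm-worker       # hypothetical ARM worker image
  db:
    image: example/arm-postgres     # hypothetical ARM Postgres image
```

With a file like this, `docker-compose up -d` against a Swarm endpoint would schedule the services across the cluster nodes.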


Meet the Docker Captains! with Alex Ellis, Laura Frank and Nirmal Mehta


The Docker Captains are here to help you cross the oceans of application packaging!

During this interactive session, members of the Docker Captain program will discuss their Docker journey along with how they became industry-recognized Docker experts. Alex, Laura and Nirmal will share their experiences as Docker Captains and provide recommendations on how to become more involved with your local community.


How to Successfully Build a Local Docker Community by Matthias Renner


A community is one of the key components of an open source software project. The success of an open source project like Docker is highly dependent on a large and active community. The speakers will share their experience of how to successfully build a local community, using the example of how they built a Docker community at their university (Univ. of Bamberg, Germany). They will summarize the best practices that emerged from the mistakes they made, illustrated through storytelling. This talk is given from a student’s perspective – a group that is underrepresented at DockerCon.

Docker in Open Science Data Analysis Challenges by Bruce Hoff


Typically in predictive data analysis challenges, participants are provided a dataset and asked to make predictions. Along with their predictions, participants include the scripts/code used to produce them. Challenge administrators validate the winning model by reconstructing and running the source code.

Often data cannot be provided to participants directly, e.g. due to data sensitivity (data may be from living human subjects) or data size (tens of terabytes). Further, predictions must be reproducible from the code provided by participants. Containerization is an excellent solution to these problems: rather than providing the data to the participants, we ask the participants to provide a Dockerized “trainable” model. We run both the training and validation phases of machine learning ourselves and guarantee reproducibility ‘for free’.

We use the Docker tool suite to spin up and run servers in the cloud to process the queue of submitted containers, each essentially a batch job. This fleet can be scaled to match the level of activity in the challenge. We have used Docker successfully in our 2015 ALS Stratification Challenge and our 2015 Somatic Mutation Calling Tumour Heterogeneity (SMC-HET) Challenge, and are starting an implementation for our 2016 Digital Mammography Challenge.
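To make the “Dockerized trainable model” idea concrete, a participant’s submission might be packaged roughly like the sketch below; the base image, script names, and mount paths are assumptions for illustration, not the challenge’s actual interface:

```dockerfile
# Hypothetical submission image for a model-to-data challenge.
# The challenge harness runs this container against data the
# participant never sees; paths and script names are assumptions.
FROM python:2.7
RUN pip install scikit-learn pandas

COPY train.py predict.py /app/
WORKDIR /app

# Training phase: the harness mounts training data at /data
# and expects model state to be written to /model.
ENTRYPOINT ["python"]
CMD ["train.py"]
```

The key property is that the same image can be re-run by the organizers at validation time, so the winning result is reproducible by construction.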


Day 2: Panels

Industry Q&A: Media and Analyst Perspective on Docker


Moderator: Rachel Chalmers, Ignition Partners


Bring your questions and join a panel of top media and analysts covering containers for what is expected to be a truly informative and interesting perspective on Docker, the container ecosystem, and best (and worst) practices when talking containers. You will hear about the present and future of Docker from the perspective of those who have seen many a technology wave, and have heard from hundreds of companies building on, with, or for Docker. The session will also talk about the evolving community, the role of open source, container standards, and what we should prepare for in 2016.


The Open Container Initiative at 12 Months


Moderator: Rob Dolin, Microsoft, OCI Cert WG Chair



In the past few years, there has been rapid growth in both interest in and usage of container-based solutions. To support this growth, the Open Container Initiative was established to promote a set of common, minimal, open standards and specifications around container formats and runtimes.

In this panel discussion, technical leaders of the OCI will discuss a certification program focused on the OCI Runtime Spec. The session will also cover:

  • What has the OCI done in the past 12 months?
  • What is the latest state of the runtime and image format specifications?
  • What open source code is available as far as reference implementation and tooling?
  • How is the OCI currently organized?
  • What does the certification working group do and what value can a certification program bring?
  • What are key factors for establishing a certification program for container technology?
  • A demonstration of the OCI testing tools
  • What are the opportunities to get involved with the OCI community?

