Chris Hines

Webinar Q&A: Docker Networking

Many enterprises have moved from monolithic to microservices architectures, and are using Docker to help manage these services. With loosely coupled services existing in multiple containers, these containers must be able to communicate with one another, and sometimes even exist in multiple networks at a time.

Docker makes this extremely easy for developers and IT operations teams. We have found that enterprises today are using Docker and our pluggable architecture (plugins from Microsoft, Cisco, VMware, Weave, and others) for micro-segmentation, multi-tenancy, and cloud portability.

In last week’s webinar, Technical Marketing Engineer Mike Coleman gave an introduction to Docker Networking, and explained how Docker solutions can help your organization build, ship and run applications, while making networking seriously easy.

You can watch the webinar recording here.

 

Our featured presenter Mike Coleman shares his answers to questions he received during the live presentation.

Q: All the containers share the same NIC. How do you ensure one container doesn’t hog the shared physical NIC and affect the I/O performance of other containers?

Docker does not do any native throttling or control. Any such control would need to be done at the application or host level.
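
One host-level approach, purely as a sketch: use the Linux tc utility to shape traffic on the bridge the containers share (the interface name and rate below are assumptions that depend on your setup):

    # Cap aggregate container traffic on the default docker0 bridge at 100 Mbit/s
    sudo tc qdisc add dev docker0 root tbf rate 100mbit burst 32kbit latency 400ms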

Q: Will Docker networking allow you to create VLANs, VXLANs, and GRE tunnels on Cisco network gear through plugins already in place, or is that something that still needs to be done?

Support for specific vendor gear is done via vendor-specific plugins. In the case of Cisco, you can read about their work on the Contiv project in this blog post.

Q: What plugins does Docker Networking support?

Docker Networking supports several different plugins including: Microsoft, Cisco, VMware, Weave, OpenContrail, Midokura and Nuage Networks.

Q: How is this (Docker networking) different from OpenStack Neutron objects?

The two offerings are similar in that they are both related to SDN. However, there really isn’t a direct comparison, since Neutron is OpenStack-focused and our offering is specific to Docker containers. Also, OpenStack is IaaS while Docker is more application-focused. Hence the Docker networking philosophy, design, and APIs are more application-focused, and any infrastructure-specific configuration is handled by plugins.

Q: Is the network a physical network or virtual network?

From Docker’s perspective it doesn’t matter. Docker networking will work over both virtual and physical networks.

Q: Are the Docker elements (sandbox, endpoint, etc.) compliant with an 802.1ad infrastructure?

Yes and no. The constructs themselves do not prevent you from using QinQ. Unfortunately, there are no plugins at present that allow you to use QinQ. The plugin API is open and available, and the go-plugins-helpers library makes it very easy to get started writing plugins, so we would encourage you or others interested to write one!
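
Once such a plugin existed, it would be consumed like any other remote driver; here "qinq" is a purely hypothetical driver name:

    # Hypothetical: create a network backed by a (not yet existing) QinQ plugin
    docker network create -d qinq --subnet=192.168.10.0/24 my-qinq-net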

Q: Can Docker Machine create Docker hosts with networking already set up in between?

Docker hosts aren’t the right level of abstraction when you think about Docker networking. Docker networking deals with connecting containers. While it does support multi-host connectivity, you’re not really networking the hosts; you’re networking the containers that reside on those hosts. So, setting up multi-host networks is “out of bounds” for Machine.
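
Machine can, however, pre-configure the engine options that multi-host networking relies on. A rough sketch of the Docker 1.9-era workflow, assuming a Consul KV store is already reachable at consul-host (the driver and names are illustrative):

    # Create a host whose engine is pointed at the cluster KV store
    docker-machine create -d virtualbox \
      --engine-opt="cluster-store=consul://consul-host:8500" \
      --engine-opt="cluster-advertise=eth1:2376" \
      node1

    # Overlay networks are then created with the normal network commands
    docker $(docker-machine config node1) network create -d overlay my-overlay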

Q: How does Jérôme Petazzoni’s Pipework project play with this?

Pipework was a project from Jérôme aimed at filling some gaps in Docker networking. As he says himself in his README, “In the long run, Docker will allow complex scenarios, and Pipework should become obsolete.” With the latest update to Docker networking, Pipework is no longer necessary.

Q: If you delete the network do you get an error or warning if that network is connected to a running container?

Yes, you get an error telling you that a network with active endpoints cannot be deleted.

Q: What happens to the containers that are running on the network when it is removed?

If you try to delete a network that has active endpoints, Docker will throw an error and prevent the network from being removed.
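
A quick illustration of both answers (the names are arbitrary, and the exact error text may vary by version):

    docker network create my-net
    docker run -d --name web --net=my-net nginx
    docker network rm my-net            # fails: my-net has active endpoints
    docker network disconnect my-net web
    docker network rm my-net            # succeeds once no endpoints remain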

Q: Is this VXLAN bridging, or routing as well?

We support both bridging and routing in the overlay driver. If the user specifies multiple --subnet values when creating a network with the overlay driver, Docker allocates one VXLAN VNI per subnet. The Linux IP routing stack performs the routing, while VXLAN is used for bridging.
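
For example, a network created with two subnets gets one VNI per subnet, with the Linux stack routing between them (the subnet values are illustrative):

    docker network create -d overlay \
      --subnet=10.1.0.0/24 \
      --subnet=10.2.0.0/24 \
      multi-subnet-net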

Q: Which protocol does the KV store use to learn info from each host?

We use Serf, a gossip protocol, to communicate things like container MAC addresses between hosts. The KV store itself only handles data that needs to be consistent, e.g., network IDs, VXLAN IDs, etc., and that communication is done via Docker’s libkv project, which chooses the right protocol to communicate with the KV store.
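
Concretely, each daemon is pointed at the KV store when it starts; a sketch using Consul (the address and interface below are assumptions):

    # Docker 1.9-era daemon flags
    docker daemon \
      --cluster-store=consul://consul-host:8500 \
      --cluster-advertise=eth0:2376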

Q: What’s the performance impact on the host and containers when defining multi-host networks?

For all intents and purposes, the impact is no different from running multiple NICs in a physical or virtual host. The overhead from Docker networking is imperceptible.

Q: Do you need to set up TLS for every Docker host connected to an overlay network?

TLS authentication, while not mandatory, is strongly recommended.
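
Enabling it looks roughly like this on each daemon, assuming you have already generated a CA and server certificates (the file names are placeholders):

    docker daemon --tlsverify \
      --tlscacert=ca.pem \
      --tlscert=server-cert.pem \
      --tlskey=server-key.pem \
      -H=0.0.0.0:2376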

Q: Can the communications between instances on the overlay network be secured?

Yes, communication can be secured, but this is a manual process today; e.g., you would have to set up a VPN between your hosts and use the VPN interface for all communications. We’re looking at adding support for encryption in the future, so watch this space!

Q: What does the raw L2/L3 traffic look like in a packet capture across a physical SPAN port? Encryption? Encapsulation?

Encapsulation is VXLAN. There is no encryption.
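
So in a capture you would see the inner frames wrapped in VXLAN’s UDP encapsulation. For example, filtering on VXLAN’s IANA-assigned UDP port (the interface name is an assumption):

    sudo tcpdump -nn -i eth0 udp port 4789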

Q: Is the KV store another container running possibly MongoDB or other such DB?

Docker networking uses libkv, and it supports Consul, Etcd, and ZooKeeper as its key/value stores.
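
You can certainly run the store itself as a container. A common community setup at the time used a Consul image, for example (the image and flags below are one popular option, not an official requirement):

    docker run -d -p 8500:8500 --name consul \
      progrium/consul -server -bootstrap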

 

Sign up for a free 30-day trial of Docker Subscription to get the essential software, support and security you need.

 


 



2 Responses to “Webinar Q&A: Docker Networking”

  1. Tomer

    I've searched everywhere and could not find how to monitor and troubleshoot overlay networks (libnetwork, the VXLANs of Docker 1.12, for instance) the way operations people are used to doing with "netstat -anp", for example. I understand why netstat and other such tools don't work for overlay networks, but I have not found anyone developing alternative tools. Docker 1.13, perhaps? Currently, debugging such a 1.12 swarm cluster, for example with a scaled stateless web service, is very difficult, as you have no idea where the Docker daemon proxied the connection (to which task/container, on which node).

  2. PREMPON

    Dear all, I do not understand how to manage the container network. Compare it to a VMware ESXi host with two Ethernet interfaces, ETH0 and ETH1: I would use ETH0 for management and to talk to vCenter Server, and on ETH1 I would tag multiple VLANs for virtual machine connectivity, because I don't want the virtual machines and their applications to use the same subnet. How does a Docker host manage its network? I do not want all containers to share one subnet and exposed ports.
