Last week we announced the latest release of Docker Datacenter (DDC) with Engine 1.12 integration, which includes Universal Control Plane (UCP) 2.0 and Docker Trusted Registry (DTR) 2.1. Now, IT operations teams can manage and secure their environments more effectively, and developers can self-service from an even more secure image base. Docker Datacenter with Engine 1.12 brings improvements in orchestration and operations, end-to-end security (image signing, policy enforcement, mutual TLS encryption for clusters), support for Docker service deployments, and an enhanced UI. Customers also get backwards compatibility for Swarm 1.x and Compose.
To showcase some of these new features we hosted a webinar where we provided an overview of Docker Datacenter, talked through some of the new features, and showed a live demo of the solution. Watch the recording of the webinar below:
We hosted a Q&A session at the end of the webinar, and below are some of the most common audience questions we received.
Can I still run and deploy my applications built with a previous Docker Engine version?
Yes. UCP 2.0 automatically sets up and manages a Swarm cluster alongside the native built-in swarm-mode cluster from Engine 1.12 on the same set of nodes. This means that when you use “docker run” commands, they are handled by the Swarm 1.x part of the UCP cluster, which ensures full backwards compatibility with your existing Docker applications. The best part is, no additional product installation or configuration is required by the admin to make this work. In addition, previous versions of the Docker Engine (1.10 and 1.11) will still be supported as part of Docker Datacenter.
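As a minimal sketch of the backwards-compatible path, assuming you have downloaded a UCP client bundle (which contains an `env.sh` that points your CLI at the cluster), an unchanged “docker run” command is scheduled by the Swarm 1.x side of the cluster:

```shell
# Load the UCP client bundle so the local Docker CLI talks to the
# UCP cluster (the bundle sets DOCKER_HOST, DOCKER_TLS_VERIFY and
# DOCKER_CERT_PATH).
cd ucp-bundle-admin
source env.sh

# A classic "docker run" is handled by the Swarm 1.x part of the
# UCP cluster -- no change to the command or the image is required.
docker run -d --name legacy-web -p 8080:80 nginx
```

The container name `legacy-web` and the bundle directory name are illustrative; your bundle directory depends on the user account it was generated for.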
Will Docker Compose continue to work in Docker Datacenter, i.e., deploy containers to multiple hosts in a DDC cluster, as opposed to only on a single host?
In UCP, “docker-compose up” will deploy to multiple hosts on the cluster. This is different from open-source Engine 1.12 swarm-mode, where it will only deploy on a single node, because UCP offers full backwards compatibility (using the parallel Swarm 1.x cluster, as described above). Note that you will have to use the Compose v2 file format in order to deploy across multiple hosts, as the v1 format does not support multi-host deployment.
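A short sketch, assuming a `docker-compose.yml` in the v2 file format and a UCP client bundle already downloaded:

```shell
# Point the Docker CLI (and therefore Compose) at the UCP cluster.
source env.sh

# The compose file must declare "version: '2'" for multi-host
# deployment; Compose then schedules containers across the cluster.
docker-compose up -d

# List the running containers; with Swarm, names are prefixed with
# the node they were scheduled on.
docker ps --format '{{.Names}}\t{{.Status}}'
```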
For the built-in HTTP routing mesh, which external load balancers are supported? Nginx, HAProxy, AWS Elastic Load Balancing? Does this work similarly to what Interlock was doing?
The experimental HTTP routing mesh (HRM) feature is focused on providing correct routing between hostnames and services, so it will work across any of the above load balancers, as long as you configure them appropriately for this purpose.
The HRM and Interlock LB/SD feature sets provide similar capabilities but for different application architectures. HRM is used for swarm-mode based services, while Interlock is used for non-swarm-mode “docker run” containers.
For more information on these features, check out our blog post on DDC networking updates and the updated reference architecture linked within that post.
Will the HTTP routing mesh feature also be available in the free, open-source version of Docker Engine?
Docker Engine 1.12 (open-source) contains the TCP-based routing mesh, which allows you to route based on ports. Docker Datacenter also provides the HTTP routing mesh feature which extends the open-source feature to allow you to route based on hostnames.
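A quick sketch of the open-source, port-based routing mesh that ships with Engine 1.12; the service name and ports are illustrative:

```shell
# Publish a service port on the swarm-mode routing mesh. A request to
# port 8080 on ANY node in the cluster is routed to one of the
# service's tasks, wherever it is running.
docker service create --name web --replicas 2 --publish 8080:80 nginx
```

The HTTP routing mesh in Docker Datacenter builds on this by routing on hostnames (e.g. requests for a given domain to the “web” service) rather than only on ports.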
What is “docker service” used for and why?
A Docker service is a construct within swarm-mode that consists of a group of containers (“tasks”) from the same image. Services follow a declarative model that allows you to specify the desired state of your application: you specify how many instances of the container image you want, and swarm-mode ensures that those instances are deployed on the cluster. If any of those instances go down (e.g. because a host is lost), swarm-mode automatically reschedules them elsewhere on the cluster. The service also provides integrated load balancing and service discovery for its container instances.
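The declarative model described above can be sketched with a few commands (service name, replica counts, and image are illustrative):

```shell
# Declare the desired state: three replicas of nginx behind port 80.
docker service create --name web --replicas 3 --publish 80:80 nginx

# Scaling is a one-line change to the declaration; swarm-mode
# reconciles actual state with desired state.
docker service scale web=5

# List the service's tasks and the nodes they were scheduled on.
docker service ps web
```

If a node holding some of these tasks goes down, swarm-mode reschedules those tasks elsewhere so the running count returns to the declared number.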
What type of monitoring of host health is built in?
The new swarm-mode in Docker Engine 1.12 uses a Raft-based consensus algorithm among managers to maintain cluster state, and regular heartbeats to track node health. Each swarm manager sends regular pings to workers (and to other managers) in order to determine their current status. If the pings return an unhealthy response or do not meet the latency minimums for the cluster (configurable in the settings), then that node might be declared unhealthy and containers will be scheduled elsewhere in the cluster. In Universal Control Plane (UCP), the status of nodes is described in detail in the web UI on the dashboard and Nodes pages.
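Node health is also visible from the CLI; as a sketch (the node name `node-1` is a placeholder for one of your own nodes):

```shell
# List cluster nodes with their status and availability, as seen by
# the managers' health checks.
docker node ls

# Show detailed status information for a single node in a
# human-readable format.
docker node inspect --pretty node-1
```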
What kind of role based access controls (RBAC) are available for networks and load balancing features?
The previous version of UCP (1.1) had the ability to provide granular label-based access control for containers. We’ve since expanded that granular access control to include both services and networks, so you can use labels to define which networks a team of users has access to, and what level of access that team has. The load balancing features make use of both services and networks so will be access controlled through those resources.
Is it possible to enforce a policy so that production runs only containers whose images have been signed in DTR?
Yes, you can accomplish this using a combination of features in the new version of Docker Datacenter. DTR 2.1 contains a Notary server (Docker Content Trust), which allows you to provide your users cryptographic keys to sign images. UCP 2.0 has the ability to run only signed images on the cluster. Furthermore, you can use “delegations” to define which teams must sign the image prior to it being deployed; for example, in a low-security cluster you could allow any UCP user to sign, whereas in production, you might require signatures from Release Management, Security, and Developer teams. Learn more about running images with Docker Content Trust here.
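From the developer's side, signing is largely transparent once content trust is switched on; a sketch with a hypothetical DTR hostname and repository:

```shell
# Enable Docker Content Trust for this shell; pushes are then signed
# with your Notary keys (DTR 2.1 runs the Notary server).
export DOCKER_CONTENT_TRUST=1

# The push is signed automatically; hostname and repo are examples.
docker push dtr.example.com/engineering/api:1.0
```

With UCP configured to run only signed images, deploying an unsigned image (or one missing a required team's signature) is rejected at deploy time.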
As a very large enterprise doing various POC’s for Docker, one of the big questions is vulnerabilities in the open source code that can be part of the base images. Is there anything that Docker is developing to counter this?
Earlier this year, we announced Docker Security Scanning, which provides a detailed security profile of Docker images for risk management and software compliance purposes. Docker Security Scanning is currently available for private repositories in Docker Cloud and is coming soon to Docker Datacenter.
Is there any possibility to trace which user is accessing a container?
Yes, you can use audit logging. To provide auditing of your cluster, you can utilize UCP’s Remote Log Server feature. This allows you to send system debug information to a syslog server of your choice, including a full list of all commands run against the UCP cluster. This would include information such as which user attempted to deploy or access a container.
What checks does the new DDC have for potential noisy neighbor container scenarios, or for rogue containers that can potentially hog the underlying infrastructure?
One of the ways you can provide a check against noisy neighbor scenarios is through the use of runtime resource constraints. These allow you to set limits on how much system resources (e.g. cpu, memory) that any given container is allowed to use. These are configurable within the UI.
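Besides the UI, these constraints can be set from the CLI; a sketch with an illustrative image name (`myimage`) and limits:

```shell
# "docker run" style: cap memory and weight CPU shares for a
# single container.
docker run -d --name worker --memory 512m --cpu-shares 512 myimage

# Swarm-mode services accept limits (hard caps) and reservations
# (scheduling guarantees) as well.
docker service create --name api \
  --limit-memory 512m --limit-cpu 0.5 \
  --reserve-memory 256m \
  myimage
```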
Do you have a trial license for Docker Datacenter ?
We offer a free 30-day trial of Docker Datacenter. The trial software can be accessed by visiting the Docker Store at www.docker.com/trial
For pricing, is a node defined as a host machine or a container?
The subscription is licensed and priced on a per-node, per-year basis. A node is anything with the Docker Commercially Supported (CS) Engine installed on it: it could be a bare metal server, a cloud instance, or a virtual machine. More pricing details are available here.