Phil Estes

Multi-arch All The Things

[This post was written by Phil Estes and Michael Friis.]

True multi-platform workload portability has long been the holy grail of enterprise computing. All kinds of virtualization strategies have been used over the years to approximate this dream, with varying levels of acceptable performance and usability. On the one hand, virtual machines and hardware virtualization are flexible enough that you can mix and match operating systems (and even CPU architectures) on the same host, but they come with a lot of overhead. On the other hand, language-based virtual runtimes don’t have packaging formats that encapsulate all system-level app dependencies, which makes them unsuitable for general-purpose deployment and configuration management.

Docker came along as a unique type of virtualization that only virtualizes the operating system for container processes. Docker uses existing Linux kernel features to offer isolation characteristics that are similar to what is available with virtual machines. The analogy of a “standard shipping container,” combined with these isolation primitives, caught developer interest immediately. With this new shipping metaphor came speed and agility that blew the doors off virtual machine size and speed constraints that impacted developer workflow, not to mention developer happiness! The containerization craze has grown like wildfire since then, but left us back in a Henry Ford-esque conundrum: “you can run a container anywhere you want, as long as it is the x86_64 CPU architecture on Linux.”

As Docker and container interest grew, many other UNIX and Linux user communities—including Raspberry Pi enthusiasts, FreeBSD, Solaris, ARM microservers, OpenPOWER and z/Linux communities—wanted in on the container craze and jumped in to port the Docker container engine and other required components to their operating systems and/or CPU architectures. Microsoft joined in as well, adding containerization features to the Windows Server kernel and working with Docker to port the Docker engine to Windows. At DockerCon in Copenhagen, Michael Friis and I shared this multi-platform journey in our talk “Docker: Multi-arch All The Things.”

This story boils down to three main efforts that were covered during this talk:

  • Porting the Docker container runtime components to CPU architectures beyond x86_64 and to non-Linux operating systems.
  • Formulating a container image type that can represent multiple platform-specific images referenced by a single name, and implementing this support in container image registries.
  • Having common images available for a broadly supported set of platforms and architectures, and helping image packagers understand the benefits of building and packaging software for multiple platforms. (A quick illustration of the resulting user experience follows this list.)
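
To make the second and third items concrete, here is a minimal sketch of what the end result looks like for a user: the same image name works on hosts with different CPU architectures, because each engine resolves it to the matching platform-specific image. The official alpine image is used here as an example of an image published as a manifest list; exact output will vary with your host.

```bash
# On an x86_64 Linux host, the engine pulls the amd64 variant:
docker run --rm alpine uname -m
# x86_64

# On a 64-bit ARM host, the very same image name pulls the arm64 variant:
docker run --rm alpine uname -m
# aarch64
```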

We’ve already alluded to the fact that many communities and vendors handled the first item over the first few years of Docker’s existence. Today, several ARM variants, Power, z/Linux and Microsoft Windows all have the Docker runtime stack available for their CPU platforms and operating environments.

Docker Manifest

To handle the need for a multi-arch-and-OS container image representation, in 2015 and early 2016 the Docker distribution project, with community involvement, created the v2.2 image specification with a new type, the manifest list, which represents a list of architectures/platforms and points to the platform-specific container image content for the runtime to use. Soon after, the Docker engine and the Docker registry project, along with the Docker Hub implementation, all supported the manifest list concept.
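
To give a feel for what a manifest list looks like inside a registry, here is a minimal, hypothetical sketch of the v2.2 format with two platform entries; the digests and sizes below are placeholders rather than real values.

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 1152,
      "digest": "sha256:<digest-of-amd64-image-manifest>",
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 1152,
      "digest": "sha256:<digest-of-arm64-image-manifest>",
      "platform": { "architecture": "arm64", "os": "linux", "variant": "v8" }
    }
  ]
}
```

When a Docker engine pulls the image by name, it walks this list, picks the entry matching its own OS and architecture, and then fetches that platform-specific manifest and its layers.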

Docker Manifest

The final piece of this multi-platform puzzle was for image packagers to take advantage of this new capability and create content in registries that could be consumed by Docker runtimes on the various platforms. First, a tool was needed to create manifest list entries, and I created the manifest-tool project to fill this gap until the Docker client interface gained the same capabilities. Second, in late summer 2017, all officially created and supported images on Docker Hub switched to using manifest list entries, with a large portion of them supporting two or more architectures. Now that this switch has happened, work has increased to broaden support for many popular images across more architectures and platforms. This has instantly improved the usability of Docker across many of these architectures and platforms, which no longer require workarounds like directing users to add an architecture-specific prefix to image names. During the talk, Michael and I also discussed how image creators and software packagers can start to deal with the added complexity of CI systems and cross-compilation to handle the creation of manifest list entries for their own images, as sketched below.
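
As a rough sketch of that workflow, manifest-tool can be driven from a YAML spec file that stitches together per-platform images your CI has already built and pushed; the repository and tag names below are hypothetical.

```yaml
# multiarch.yaml - hypothetical spec file for manifest-tool
image: myorg/myapp:1.0             # the manifest list name users will pull
manifests:
  - image: myorg/myapp:1.0-amd64   # per-platform images already pushed by CI
    platform:
      architecture: amd64
      os: linux
  - image: myorg/myapp:1.0-arm64
    platform:
      architecture: arm64
      os: linux
  - image: myorg/myapp:1.0-s390x
    platform:
      architecture: s390x
      os: linux
```

Pushing the combined entry is then a single step, roughly `manifest-tool push from-spec multiarch.yaml`, after which `docker pull myorg/myapp:1.0` resolves to the right per-platform image on each host.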

The current status of multi-platform support in the Docker ecosystem is a huge step forward and brings to fruition capabilities the broader community has been wanting for several years. Thanks to this progress, Michael and I were able to show off the Docker Enterprise Edition (EE) control plane managing a hybrid cluster of nodes running Windows Server, a mainframe VM running z/Linux, and two standard x86_64 cloud instances. Within this cluster we showed deployment of services and stacks using manifest list-based images, supporting single-pane-of-glass management across these architectures and platforms. These truly are good times for multi-platform support, and maybe we’re one step closer to that holy grail of multi-platform workload portability.
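
As an illustrative sketch of what such a deployment can look like from the command line (the service name here is made up, and the official nginx image is used simply because it is published as a manifest list covering several Linux architectures):

```bash
# Each Linux node that schedules a task pulls the image variant matching its
# own CPU architecture, because the image is a manifest list. In a hybrid
# cluster with Windows nodes, placement can be constrained to Linux using the
# built-in node.platform.os attribute.
docker service create --name web --replicas 4 \
  --constraint 'node.platform.os==linux' \
  nginx:latest
```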

If this topic interests you, you can watch the talk from DockerCon EU above or check out the slides.

