
Docker and the Three Ways of DevOps Part 1: The First Way – Systems Thinking

written by John Willis, Evangelist at Docker

Have you read Gene Kim’s The Phoenix Project? Some of the principles behind The Phoenix Project and an upcoming book I am co-authoring with Gene (The DevOps Cookbook) have been referred to as the “Three Ways of DevOps”: particular patterns of applying DevOps principles in a way that yields high-performance outcomes.
[Figure: The Three Ways of DevOps]

 

The First Way: Systems Thinking

We think about this “way” as flow and direction (left to right), sometimes referred to as the pipeline.

[Figure: The First Way – Systems Thinking]

The First Way is about understanding the system as the complete value stream. Managing this flow is also referred to as global optimization or bottleneck reduction. In DevOps jargon we call this the time it takes to get from the “A-Ha” to the “Cha-Ching”; in Lean it is called “Lead Time”. It is the time it takes to get from a whiteboard diagram to a paying customer feature. I like to say it is the time it takes to get the things that people make to the things that people buy.

[Figure: The DevOps cycle]

To be effective at this First Way you need to be able to apply increased velocity to all of the local processes without slowing down the global flow. This requires attention to three main principles. First, create increased “velocity” by accelerating each of the process components in the pipeline. Second, decrease “variation” by eliminating wasteful or time-consuming sub-processes in the pipeline. Third, elevate the processes by bounded context (isolating the functionality), thereby better visualizing and understanding the global flow (i.e., seeing the system).

What does Docker have to do with all of this?

Docker and the First Way

Velocity

Developer Flow

Most developers who use Docker create an environment with multiple containers for testing on their laptop. They run a local virtual instance as a Docker host with either Vagrant or Boot2Docker. In this environment they can test a service stack made up of multiple Docker containers, spinning it up and tearing it down in seconds.
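As a rough illustration, here is what that local flow can look like. This is a minimal sketch, not a prescribed workflow: the application image (myapp:dev), the redis backing service, and the /health endpoint are assumptions for the example, and it presumes Boot2Docker (or a Vagrant-provisioned Docker host) is already running.

```bash
# Minimal sketch of a throwaway multi-container test stack on a laptop.
# Assumes Boot2Docker (or a Vagrant-provisioned Docker host) is running;
# myapp:dev and its /health endpoint are hypothetical.

docker run -d --name test-redis redis                      # backing service
docker build -t myapp:dev .                                # application image
docker run -d --name test-app --link test-redis:redis \
           -p 8080:8080 myapp:dev

# Exercise the stack (using the Boot2Docker VM's IP as the host), then
# throw everything away and start clean for the next run.
curl "http://$(boot2docker ip):8080/health"
docker rm -f test-app test-redis
```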

 

Integration Flow

With Docker, a continuous integration pipeline can be streamlined by the use of Dockerized build slaves. A CI (Continuous Integration) system can be designed such that multiple virtual instances each run as individual Docker hosts (i.e., multiple build slaves). In fact, some environments run a Docker host inside of a Docker host (Docker-in-Docker) for their build environments. This creates clean isolation for building up and tearing down the environments of the services being tested. In this case the original virtual instances are never corrupted, because the embedded Docker host can be recreated for each CI slave instance. The inner Docker host is just a Docker image and instantiates at the same speed as any other Docker instance.
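A minimal sketch of that pattern follows. The Docker-in-Docker image (jpetazzo/dind was a common choice at the time), the /workspace path, and the test script are assumptions for illustration; real CI systems wire this into their slave provisioning.

```bash
# Minimal sketch of a disposable Docker-in-Docker build slave.
# The inner Docker host lives in a single privileged container, so the
# outer host is never touched by the build.

docker run -d --privileged --name ci-slave-42 jpetazzo/dind

# Build and test entirely inside the inner Docker host
docker exec ci-slave-42 docker build -t myapp:ci /workspace
docker exec ci-slave-42 docker run --rm myapp:ci ./run-integration-tests.sh

# Tear the whole slave down; the next build gets a pristine one
docker rm -f ci-slave-42
```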

Just like on the developer’s laptop, the integrated services can be run as Docker containers inside these build slaves. Not only are there large efficiencies in the spin-up times of the tested services, there are also additional benefits from rebasing multiple testing scenarios. Multiple tests starting from the same baseline can be run over and over in a matter of seconds. Sub-second Docker container spin-up times allow thousands of integration tests to run in minutes where they might otherwise take days.

Another way Docker creates increased velocity for CI pipelines is through its use of union file systems and copy-on-write (COW). Docker images are created using a layered file system approach; typically only the current (top) layer is a writable (copy-on-write) layer. Advanced use of baselining and rebasing between these layers can further reduce the lead time for getting software through the pipeline. For example, a specific MySQL table could be initialized to a known state and the container image rebased back to that original state for each test, giving many tests an accelerated, identical starting point.
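Here is a minimal sketch of that baselining idea using docker commit. It assumes a MySQL image whose data directory lives in the container’s writable layer rather than in a declared volume (otherwise the commit would not capture the seeded data); the image name, fixture file, and test names are hypothetical.

```bash
# Minimal sketch: freeze a seeded database as a baseline image, then
# give every test a fresh copy-on-write container from that baseline.
# mysql-no-volume, load-fixtures.sql and the test names are hypothetical.

docker run -d --name mysql-base -e MYSQL_ROOT_PASSWORD=secret mysql-no-volume
sleep 15                                   # crude wait for mysqld to come up
docker exec -i mysql-base mysql -uroot -psecret < load-fixtures.sql
docker commit mysql-base mysql-seeded:baseline

for t in test_orders test_refunds test_inventory; do
  docker run -d --name "db-$t" mysql-seeded:baseline
  ./run-test.sh "$t" "db-$t"               # whatever the test writes...
  docker rm -f "db-$t"                     # ...is discarded with the container
done
```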

 

Deployment Flow

To achieve increased velocity for Continuous Delivery (CD) of software, there are a number of techniques that benefit from the use of Docker. A popular CD technique called Blue-Green deployment is often used to seamlessly migrate applications into production. One of the challenges of production deployments is ensuring seamless and timely changeovers (moving from one version to another). In a Blue-Green deploy, one node of a cluster is updated at a time (i.e., the green node) while the other nodes are still untouched (the blue nodes). This technique requires a rolling process where one node is updated and tested at a time. The two keys here are: 1) the total time to update all the nodes needs to be short, and 2) if the cluster needs to be rolled back, this also has to happen quickly. Here again, nodes as Docker containers make the roll-forward and roll-back process more efficient. Also, because the application is isolated (i.e., containerized), the process is a lot cleaner because there are fewer moving parts involved during the changeover. Other deployment techniques like dark launches and canary releases can benefit from Docker container isolation and speed, for the same reasons described earlier.
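A minimal sketch of rolling a single node forward (and back) follows. The node address, image tags, port, and health endpoint are assumptions for illustration; a real deployment would also drain and re-enable the node at the load balancer around these steps.

```bash
# Minimal sketch of updating one "green" node with Docker while the
# "blue" nodes keep serving traffic untouched.

NODE=tcp://app-node-1:2375        # remote Docker host for this node

# Roll forward: replace the old container with the new image
docker -H "$NODE" rm -f myapp
docker -H "$NODE" run -d --name myapp -p 8080:8080 myapp:v2

# Verify before touching the next node; rolling back is the same two
# commands with myapp:v1, which is why containers keep this fast.
curl -f "http://app-node-1:8080/health" || echo "health check failed: roll back to myapp:v1"
```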

 

Variation

One of the main benefits of using Docker images in a software delivery pipeline is that both the infrastructure and the application can be included in the container image. One of the core tenets of Java was that it was supposed to be “write once, run anywhere”. However, since the Java artifact (typically a JAR, WAR or EAR) included only the application, there was always a wide range of variation depending on the Java runtime and the specifics of the environment it was deployed into. With Docker, a developer can bundle the actual infrastructure (i.e., the base OS, middleware, runtime and the application) in the same image. This converged isolation lowers the potential variation at every stage of the delivery pipeline (dev, integration and production deployment). If a developer tests a set of Docker images as a service on a laptop, those same services can be exactly the same during integration testing and production deployment. The image (i.e., the converged artifact) is a binary. There should be little or no variation of a service stack at any stage of the pipeline when Docker containers are used as the primary deployment mechanism.

Contrast this with environments that have to be built at each stage of the pipeline, where configuration management and release-engineering scripting are often used at each stage to build out the service. Although most automation mechanisms are far more efficient than checklist-built services, they still run the risk of higher variation than binary Docker service stacks, and can yield wider variances in stability, thereby increasing variation. A Dockerized pipeline approach delivers converged artifacts as binaries that are therefore immutable starting from the commit.
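As a rough illustration of “build once, promote the same binary”, here is a minimal sketch. The registry address and image name are assumptions; the point is simply that later stages pull the artifact that was already built rather than rebuilding it.

```bash
# Minimal sketch: the converged artifact is built exactly once and the
# identical image is pulled at every later stage of the pipeline.

docker build -t registry.example.com/myapp:1.4.2 .     # OS + runtime + app
docker push registry.example.com/myapp:1.4.2

# Integration and production pull the same immutable image; nothing is
# rebuilt downstream, so there is nothing left to vary between stages.
docker pull registry.example.com/myapp:1.4.2    # on the CI host
docker pull registry.example.com/myapp:1.4.2    # on the production host
```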

At last year’s DockerCon conference in San Francisco, the Gilt Group described how they use Docker as their primary deployment mechanism. Gilt makes use of a Microservices architecture in a delivery pattern of what they call an immutable infrastructure. In other words, artifacts in their production environments are not updated; infrastructure is always replaced, meaning they either roll forward or roll backwards. In fact, if you watch Michael Bryzek’s (CTO) video presentation, it looks like their complete pipeline is immutable. I have coined this “Immutable Delivery”. They showed how a developer packages a set of container binaries and then provides one meta file (a Docker run description file) to the pipeline. Everything in the bundle is self-contained. Prior to their Dockerized deployment process they had a repository of over 1000 release-engineering build scripts managing over 1000 different repositories of software, with 25 different deployment models. This old process created a wide range of variation in the pipeline. Discovery and ownership of the scripts often created bottlenecks, and break-fix situations were unclear due to the variation in the process. A classic DevOps adage is having developers wear pagers for production application issues. In the case of Gilt, not only do the developers wear pagers, they also have ownership of the complete embedded infrastructure.

 

Visualization

A new model of disruption in our industry is called containerized Microservices. In a Microservices architecture, “services” are defined as bounded contexts: services that model real-world domains. There might be a service domain called finance or warehouse in a Microservices architecture. When these bounded services are packaged as Docker containers and used as part of the delivery pipeline, they are immediately visible as real-world domains. From a DevOps operational perspective, one of the keys to success is how good your organization is at MTTR (Mean Time to Repair/Restore). When services are bounded by their business context and then isolated as Dockerized containers, they become elevated (i.e., visible) in the pipeline. This increased visibility can help an organization isolate, discover and determine proper ownership faster, thereby decreasing their overall MTTR.
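As a small illustration of that visibility, consider naming containers after their bounded contexts. The service names and images here are hypothetical; the point is only that the running containers read like the business domain map.

```bash
# Minimal sketch: containers named after their bounded contexts.

docker run -d --name finance   registry.example.com/finance:2.1.0
docker run -d --name warehouse registry.example.com/warehouse:3.0.4

# Listing the running containers now reads like the domain model, which
# makes it faster to isolate a fault and find its owner (lower MTTR).
docker ps
```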

In this first post we focused on Docker and the “First Way” and how Velocity, Variation and Visualization can provide global optimization. In essence, we showed how Dockerizing your pipeline can reduce the cost and risk of your software delivery while increasing the rate of change. In the next set of posts we will discuss Docker and the “Second Way” and “Third Way”.

 

Read Part 2 and Part 3 in this series


