Our latest Docker webinar is part of a three-session series that dives into Docker storage, from the basics to planning considerations and best practices. Our first session goes back to basics to explain how images, storage, and volumes work in Docker.
There are two more sessions coming up about Docker Storage – register now to save your seat!
Our speaker for this session, Sr. Technical Marketing Engineer Mike Coleman, takes on questions from the webinar in this blog post:
What does it mean that Docker images are layers?
It literally means that a Docker image is built from layers. Each layer represents a portion of the image's file system that either adds to or replaces the layer below it. For instance, you might start with a Debian image, then add Emacs, and then Apache. Finally, when you instantiate a container, a final read-write layer is added to capture any changes made to the running container.
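As a sketch, the layering described above maps onto a Dockerfile, with each instruction producing a new layer; `docker history` then lists them (image and package names here are illustrative):

```shell
cat > Dockerfile <<'EOF'
# Base layer: Debian's root file system
FROM debian

# New layer on top: install Emacs
RUN apt-get update && apt-get install -y emacs

# Another layer: install Apache
RUN apt-get install -y apache2
EOF

# Build the image, then list the layers that make it up (newest first)
docker build -t mylayers .
docker history mylayers
```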
What is a union file system?
A union file system is a file system that amalgamates a collection of different file systems and directories (called branches) into a single logical file system.
Is UnionFS a distributed FS?
No, it is not.
Are Docker Storage Drivers and Volumes the same thing?
No, they are actually quite different.
A storage driver is how Docker implements a particular union file system. In keeping with our "batteries included, but replaceable" philosophy, Docker supports a number of different union file systems. For instance, Ubuntu's default storage driver is AUFS, whereas for Red Hat and CentOS it's Device Mapper.
A volume is a subdirectory in a container's file system that exists outside of the union file system. Unless you explicitly commit changes made to a Docker container at run time to a new image, they are lost when the container is destroyed. Volumes, by contrast, are not automatically deleted when a container is removed (you can, of course, tell Docker to remove any volumes when removing a container).
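A quick sketch of that behavior (container and image names are illustrative):

```shell
# Start a container with a volume mounted at /data;
# writes to /data bypass the union file system
docker run -v /data --name mycontainer debian true

# Removing the container leaves the volume's data intact
docker rm mycontainer

# To delete the volume along with the container, pass -v:
# docker rm -v mycontainer
```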
Is there any issue using an NFS shared volume for your data container volume?
Not really. This is a pretty common scenario.
How do you export a modified Docker container into a new image to use it somewhere else?
You can do this by using docker commit. For example, if you have a container named "mywebsite", the following command would create a new image named "mikegcoleman/mynginx" from that container:
docker commit mywebsite mikegcoleman/mynginx
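Once committed, the new image behaves like any other; continuing the example above, you could run it locally or push it to Docker Hub:

```shell
# Run a container from the newly committed image
docker run -d mikegcoleman/mynginx

# Push the image to Docker Hub so it can be pulled elsewhere
docker push mikegcoleman/mynginx
```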
How can we best secure containers or provide security?
Visit the Docker Security resource center to get access to tools and best practices in securing your Docker implementation.
How do we configure resources (RAM and CPU) for containers?
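Resource limits are set per container at run time via docker run flags; a minimal sketch (the limit values shown are arbitrary):

```shell
# Cap the container at 512 MB of RAM and give it a relative
# CPU weight of 512 (the default weight is 1024)
docker run -d -m 512m --cpu-shares 512 nginx

# Alternatively, pin the container to specific CPU cores
docker run -d --cpuset-cpus="0,1" nginx
```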
What if I want to use RHEL based image?
Red Hat hosts a variety of docker images, but you must be a Red Hat subscriber to access them.
What's the difference between RHEL containers and Docker containers?
The Docker engine that runs on Red Hat Enterprise Linux (RHEL) offers the same functionality as the Docker engine running on any other Linux distribution. Additionally, if you are a Red Hat subscriber, you can run RHEL inside of a container on any host running the Docker engine.
Say, I have Solaris OS and Docker is supported, does that mean I can deploy CentOS based os on top of it?
We are working with Oracle to implement Docker with Solaris Zones (https://blog.docker.com/2015/08/docker-oracle-solaris-zones/), but this functionality is not yet generally available. When it does ship, however, it will allow Docker to run Solaris workloads on Solaris, but it won't allow you to run Linux workloads on Solaris. Similarly, the Docker Engine that will run on Windows when Microsoft ships its container support with Windows Server 2016 will run Windows workloads, not Linux workloads.
That being said, the Illumos kernel has support for a feature called "BrandZ," which allows running Linux binaries on the Illumos kernel. Joyent's Triton product leverages this feature to run Linux containers on their SmartDataCenter platform, which uses the Illumos kernel.
For applications like Oracle 12g RAC, which require dynamic kernel tuning and fencing, will Docker work?
It might work, but it's not something we advocate as a good use case. Docker is designed for microservices or distributed applications. Each container should, ideally, perform a single function. You tie those containers together to create complex applications. Large monolithic applications like Oracle are not a particularly good use case for containers.
When running 10 web containers, how do I set them up to share file system data, like uploaded files? Can I use something like GlusterFS?
Without knowing more specifics on the use case, it’s hard to say exactly what the best approach would be. However, using a shared data-only container with volumes mounted to house the shared data would be a good place to start.
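A minimal sketch of that shared data-only container pattern (container and image names are illustrative):

```shell
# Create a data-only container hosting the shared volume
docker create -v /shared-uploads --name webdata debian

# Each web container mounts the same volume with --volumes-from
docker run -d --volumes-from webdata --name web1 nginx
docker run -d --volumes-from webdata --name web2 nginx
# ...every container now reads and writes the same /shared-uploads
```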
As for Gluster, I’m not sure it’ll solve the problem you’re trying to solve by itself. That being said, customers have deployed Docker volumes to GlusterFS with success. Just make sure you understand the characteristics of the file system – in particular how to tune it appropriately.
If you're developing and map a local folder as a volume to a container, what's the best practice for getting your app onto production containers?
The app code should be copied into the Docker image. The idea is that your Docker image includes everything the app needs to execute including the code and any libraries or dependencies.
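As a sketch, a production Dockerfile bakes the code into the image with COPY rather than bind-mounting a local folder (paths and names here are hypothetical):

```shell
cat > Dockerfile <<'EOF'
FROM nginx
# Copy the application code into the image itself,
# instead of mounting a local folder at run time
COPY ./app /usr/share/nginx/html
EOF

docker build -t myapp .
```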
Since data-only containers still store their data on the running host, why is this considered better than host-mounting?
Host mounting ties the volume mount point to one specific Docker host. With data-only containers, the volume is mounted to a container, and that abstraction works across different Docker hosts.
Wouldn't a data container suffer from the same problem as writing data to a container over time? I.e., the build-up of diffs may cause longer lookup times due to folder traversal.
When we talk about data-only containers, we talk about them in the context of hosting a volume in that container. Since we’re using a volume, it bypasses the union file system, and isn’t affected by some of the performance issues.
What are the best practices for backup and version control?
These are pretty broad topics, but I'll give a couple of quick answers, noting that we're doing a session on October 27th that will cover backups in pretty good detail.
Part of the reason for storing your data in volumes is that it makes backups fairly trivial. Information on the hows and whats can be found in the Docker documentation.
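The approach from the Docker documentation, sketched with illustrative names: run a throwaway container that mounts the volume and tars its contents out to a host directory.

```shell
# Back up the /data volume shared by the "dbstore" container
# into backup.tar in the current host directory
docker run --rm --volumes-from dbstore \
    -v "$(pwd)":/backup debian \
    tar cvf /backup/backup.tar /data
```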
As for version control, I’m not exactly sure what you mean, but I’ll take a crack at it anyway. As a best practice we recommend using Dockerfiles to control the creation of new Docker images. Those Dockerfiles should be stored in GitHub, and treated like any other source code. Additionally, when images are pushed to a registry, they can be tagged with a version number so you can keep older versions around or be explicit about which version you wish to use in the future.
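Tagging and pushing a versioned image might look like this (the repository name is hypothetical):

```shell
# Tag the image with an explicit version number
docker tag myapp mikegcoleman/myapp:1.0
docker push mikegcoleman/myapp:1.0

# Later, be explicit about which version to use
docker pull mikegcoleman/myapp:1.0
```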
Don’t miss the next two sessions on Docker Storage – register now:
Learn More about Docker
- New to Docker? Try our 10 min online tutorial
- Sign up for a free 30 day trial of Docker
- Share images, automate builds, and more with a free Docker Hub account
- Read the Docker 1.8 Release Notes
- Subscribe to Docker Weekly
- Register for upcoming Docker Online Meetups
- Attend upcoming Docker Meetups
- Register for DockerCon Europe 2015
- Start contributing to Docker