Jérôme Petazzoni

Why you don't need to run SSHd in your Docker containers

When they start using Docker, people often ask: “How do I get inside my containers?” and people will tell them “Run an SSH server in your containers!” But, as you’ll discover in this post, you don’t need to run an SSH daemon to get inside your containers. Well, unless your container is an SSH server, of course!

It’s tempting to run the SSH server, because it gives an easy way to “get inside” the container. Virtually everybody in our craft has used SSH at least once. Most of us use it on a daily basis, and are familiar with public and private keys, password-less logins, key agents, and even sometimes port forwarding and other niceties. With that in mind, it’s not surprising that people would advise you to run SSH within your container. But you should think twice.

Let’s say that you are building a Docker image for a Redis server or a Java webservice. I would like to ask you a few questions.

  • What do you need SSH for? Most likely, you want to do backups, check logs, maybe restart the process, tweak the configuration, possibly debug the server with gdb, strace, or similar tools. We will see how to do those things without SSH.
  • How will you manage keys and passwords? Most likely, you will either bake those into your image, or put them in a volume. Think about what you should do when you want to update keys or passwords. If you bake them into the image, you will need to rebuild your images, redeploy them, and restart your containers. Not the end of the world, but not very elegant either. A much better solution is to put the credentials in a volume, and manage that volume. It works, but has significant drawbacks. You should make sure that the container does not have write access to the volume (see the read-only mount sketch after this list); otherwise, it could corrupt the credentials (preventing you from logging into the container!), which could be even worse if those credentials are shared across multiple containers. If only SSH could be elsewhere, that would be one less thing to worry about, right?
  • How will you manage security upgrades? The SSH server is pretty safe, but still, when a security issue arises, you will have to upgrade all the containers using SSH. That means rebuilding and restarting all of them. That also means that even if you need a pretty innocuous memcached service, you have to stay up-to-date with security advisories, because the attack surface of your container is suddenly much bigger. Again, if SSH could be elsewhere, that would be a nice separation of concerns, wouldn’t it?
  • Do you need to “just add the SSH server” to make it work? No. You also need to add a process manager; for instance Monit or Supervisor. This is because Docker will watch one single process. If you need multiple processes, you need to add one at the top-level to take care of the others. In other words, you’re turning a lean and simple container into something much more complicated. If your application stops (if it exits cleanly or if it crashes), instead of getting that information through Docker, you will have to get it from your process manager.
  • You are in charge of putting the app inside a container, but are you also in charge of access policies and security compliance? In smaller organizations, that doesn’t matter too much. But in larger groups, if you are the person putting the app in a container, there is probably a different person responsible for defining remote access policies. Your company might have strict policies defining who can get access, how, and what kind of audit trail is required. In that case, you definitely don’t want to put an SSH server in your container.
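
If you do go the credentials-in-a-volume route, here is a minimal sketch of the read-only mount mentioned above (the host path and image name are made up for illustration):

# Hypothetical: bind-mount the SSH credentials read-only, so the container cannot corrupt them
docker run -d -v /opt/credentials/ssh:/root/.ssh:ro my-ssh-enabled-image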

But how do I …

Backup my data?

Your data should be in a volume. Then you can run another container and, with the --volumes-from option, give it access to that volume. The new container will be dedicated to the backup job, and will have access to the required data. Added benefit: if you need to install new tools to make your backups or to ship them to long-term storage (like s3cmd or the like), you can do that in the special-purpose backup container instead of the main service container. It’s cleaner.
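
For example, a throwaway backup container might look like this (the service name, volume path, and archive name are illustrative):

# Start the service with its data in a volume
CID=$(docker run -d -v /data myservice)
# Run a one-off backup container sharing that volume, archiving it to the current directory on the host
docker run --rm --volumes-from $CID -v $(pwd):/backup ubuntu \
    tar czf /backup/data-backup.tar.gz /data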

Check logs?

Use a volume! Yes, again. If you write all your logs under a specific directory, and that directory is a volume, then you can start another “log inspection” container (with --volumes-from, remember?) and do everything you need there. Again, if you need special tools (or just a fancy ack-grep), you can install them in the other container, keeping your main container in pristine condition.
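
A sketch, assuming the service writes its logs under /var/log/myapp and that directory is a volume (all names are illustrative):

# Start the service with its log directory as a volume
CID=$(docker run -d -v /var/log/myapp myservice)
# Read the logs from a separate, disposable container
docker run --rm --volumes-from $CID ubuntu tail -n 50 /var/log/myapp/error.log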

Restart my service?

Virtually all services can be restarted with signals. When you issue /etc/init.d/foo restart or service foo restart, it will almost always result in sending a specific signal to a process. You can send that signal with docker kill -s <signal>. Some services won’t listen to signals, but will accept commands on a special socket. If it is a TCP socket, just connect over the network. If it is a UNIX socket, you will use… a volume, one more time. Set up the container and the service so that the control socket is in a specific directory, and that directory is a volume. Then you can start a new container with access to that volume; it will be able to use the socket.
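
For instance, asking a service to reload its configuration with a signal could look like this (the container name and the signal are examples; check what your service actually expects):

# Send SIGHUP to the main process of the container named "web"
docker kill -s HUP web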

“But, this is complicated!” – not really. Let’s say that your service foo creates a socket in /var/run/foo.sock, and requires you to run fooctl restart to be restarted cleanly. Just start the service with -v /var/run (or add VOLUME /var/run in the Dockerfile). When you want to restart, run the exact same image, but with the --volumes-from option and overriding the command. It will look like this:

# Starting the service
CID=$(docker run -d -v /var/run fooservice)
# Restarting the service with a sidekick container
docker run --volumes-from $CID fooservice fooctl restart

It’s that simple!

Edit my configuration?

If you are performing a durable change to the configuration, it should be done in the image – because if you start a new container, the old configuration will be there again, and your changes will be lost. So, no SSH access for you! “But I need to change my configuration over the lifetime of my service; for instance to add new virtual hosts!” In that case, you should use… wait for it… a volume! The configuration should be in a volume, and that volume should be shared with a special-purpose “config editor” container. You can use anything you like in this container: SSH + your favorite editor, or a web service accepting API calls, or a crontab fetching the information from an outside source; whatever. Again, you’re separating concerns: one container runs the service, another deals with configuration updates. “But I’m doing temporary changes, because I’m testing different values!” In that case, check the next section!
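
A minimal sketch of that separation, assuming the configuration lives in /etc/myapp (the service name, paths, and the edited setting are all illustrative):

# Run the service with its configuration directory as a volume
CID=$(docker run -d -v /etc/myapp myservice)
# Update the configuration from a separate, short-lived container
docker run --rm --volumes-from $CID ubuntu \
    sed -i 's/worker_count=4/worker_count=8/' /etc/myapp/myapp.conf
# Tell the service to pick up the change, e.g. with a signal
docker kill -s HUP $CID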

Debug my service?

That’s the only scenario where you really need to get a shell into the container, because you’re going to run gdb, strace, tweak the configuration, and so on. In that case, you need nsenter.

Introducing nsenter

nsenter is a small tool that allows you to enter namespaces. Technically, it can enter existing namespaces, or spawn a process into a new set of namespaces. “What are those namespaces you’re blabbering about?” They are one of the essential constituents of containers. The short version is: with nsenter, you can get a shell into an existing container, even if that container doesn’t run SSH or any kind of special-purpose daemon.

Where do I get nsenter?

Check jpetazzo/nsenter on GitHub. The short version is that if you run:

docker run -v /usr/local/bin:/target jpetazzo/nsenter

This will install nsenter in /usr/local/bin and you will be able to use it immediately. nsenter might also be available in your distro (in the util-linux package).

How do I use it?

First, figure out the PID of the container you want to enter:

PID=$(docker inspect --format {{.State.Pid}} <container_name_or_ID>)

Then enter the container:

nsenter --target $PID --mount --uts --ipc --net --pid

You will get a shell inside the container. That’s it. Combining those two steps into a single command is left as an exercise for the reader. If you want to run a specific script or program in an automated manner, add it as an argument to nsenter. It works a bit like chroot, except that it works with containers instead of plain directories.

What about remote access?

If you need to enter a container from a remote host, you have (at least) two ways to do it:

  • SSH into the Docker host, and use nsenter;
  • SSH into the Docker host with a special key that forces a specific command (namely, nsenter).

The first solution is pretty easy, but it requires root access to the Docker host (which is not great from a security point of view). The second solution uses the command= pattern in SSH’s authorized_keys file. You are probably familiar with “classic” authorized_keys files, which look like this:

ssh-rsa AAAAB3N…QOID== jpetazzo@tarrasque

(Of course, a real key is much longer, and typically spans multiple lines.) You can also force a specific command. If you want to be able to check the available memory on your system from a remote host, using SSH keys, but you don’t want to give full shell access, you can put this in the authorized_keys file:

command="free" ssh-rsa AAAAB3N…QOID== jpetazzo@tarrasque

Now, when that specific key connects, instead of getting a shell, it will execute the free command. It won’t be able to do anything else. (Technically, you probably want to add no-port-forwarding; check the authorized_keys(5) manpage for more information.) The crux of this mechanism is to split responsibilities. Alice puts services within containers; she doesn’t deal with remote access, logging, and so on. Betty will add the SSH layer, to be used only in exceptional circumstances (to debug weird issues). Charlotte will take care of logging. And so on.
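
Applied to containers, this could mean forcing the key to run a small wrapper that nsenters one specific container. A rough sketch, assuming the key sits in root’s authorized_keys on the Docker host (the wrapper path, container name, and key are placeholders):

command="/usr/local/bin/enter-webapp",no-port-forwarding ssh-rsa AAAAB3N…QOID== alice@laptop

And the wrapper itself, on the Docker host:

#!/bin/sh
# /usr/local/bin/enter-webapp: drop the caller into the "webapp" container
PID=$(docker inspect --format {{.State.Pid}} webapp)
exec nsenter --target $PID --mount --uts --ipc --net --pid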

Wrapping up

Is it really Wrong (uppercase W) to run the SSH server in a container? Let’s be honest, it’s not that bad. It’s even super convenient when you don’t have access to the Docker host, but still need to get a shell within the container. But as we saw here, there are many ways to get all the features we want, with a much cleaner architecture, without running an SSH server in the container. Docker allows you to use whatever workflow is best for you. But before jumping on the “my container is really a small VPS” bandwagon, be aware that there are other solutions, so you can make an informed decision!

52 Responses to “Why you don't need to run SSHd in your Docker containers”

  1. Matthias Kadenbach

    Nice write-up. Thanks for sharing. Also nice work with https://github.com/jpetazzo/nsenter. Will give it a try.

  2. Matej

    If you want to copy/check the logs, do `docker cp container_name:/var/log logs` then all the logs will be in the logs/ directory.

  3. Pavel Forkert

    What about other arguments of http://phusion.github.io/baseimage-docker/, ie cron, syslog, orphaned zombie processes?

    • Bart M.

      @Pavel Forkert:
      cron can be running in a separate container, with access to the right volume.

      Orphaned system processes are only a problem when running applications that double-fork and detach from their parent process (aka daemonize). Running applications like that in a container wouldn’t be very useful, but sometimes you have no choice. This could potentially be handled by the ‘.dockerinit’ process which creates the container. Currently this runs initially as PID 1 in the container and then exec’s the new process, replacing itself with that. If it did not use the ‘exec’ method, but just launched the process without replacing itself, it could handle the reaping of orphaned PIDs, and your application would be running as PID 2. This, however, requires changes to the Docker core, and might have serious side-effects.

      Syslog is tricky: you need ‘something’ listening locally on a network interface. Again, this could be solved in a generic way by taking advantage of the running ‘.dockerinit’ process, adding some sort of inter-container port forwarding to it, so it can listen on the loopback interface within a container and proxy to a certain port on another container or IP.

      So while it’s perfectly possible to solve all these problems, we’re not there yet, I think.

  4. Thomas Clavier

    For log inspection I prefer:
    docker stop: take a snapshot
    docker run: the service must remain available
    docker commit: save an image for analysis
    docker run -i -t <image> /bin/bash: do forensics
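
    A rough sketch of that sequence (container and image names are made up; you would also start a fresh container of the original image to keep the service available):

    docker stop mycontainer                               # freeze the current state
    docker commit mycontainer forensics/snapshot          # save it as an image for analysis
    docker run --rm -i -t forensics/snapshot /bin/bash    # do forensics in a disposable copy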

  5. Hongli Lai (Phusion)

    Hello Jerome. I am the author of baseimage-docker (http://phusion.github.io/baseimage-docker/) and I work at Phusion. I have the feeling that you wrote this article mainly in response to the fact that Baseimage-docker encourages using SSH as a way to log in to the container. I believe that the ability to log in to the container is very important. Depending on how you architect your container, you might not have to, but I believe that it’s always good to have the *ability* to, even if only as a last resort method.

    I had a pleasant conversation with you quite a while ago about SSH and what the “right” way is to log in to a Docker container. We were not able to find consensus, but I think that you are a brilliant guy and your reasons were sound. For some time, I considered using lxc-attach to replace the role of SSH. Unfortunately, a few weeks later, Docker 0.9 came out and no longer used LXC as the default backend, and so suddenly lxc-attach stopped working. We decided to stick with SSH until there’s a better way. Solomon Hykes told us that they have plans to introduce an lxc-attach-like tool in Docker core. Unfortunately, as of Docker 1.0.1, this feature still hasn’t arrived.

    Now, you are advocating nsenter. I am seriously considering this option. There is currently an ongoing discussion on the baseimage-docker bug tracker about replacing SSH with nsenter: https://github.com/phusion/baseimage-docker/issues/102

    But leaving all of that aside, we regularly get told by people that Baseimage-docker “misses the point” of Docker. But what is the point of Docker? Some people, you included, believe it’s all about microservices and running one process in a container.

    We take a more balanced, nuanced view. We believe that Docker should be regarded as a flexible tool that can be molded into whatever you want. You *can* make single-process microservices, if you want to and if you believe that’s the right choice for you. Or you can choose to make multi-process microservices, if that makes sense. Or you can choose to treat Docker like a lightweight VM. We believe that all of those choices are correct. We don’t believe that one should ONLY use Docker to build microservices, especially because Microservices Are Not A Free Lunch (http://highscalability.com/blog/2014/4/8/microservices-not-a-free-lunch.html).

    Baseimage-docker is about *enabling users* to do whatever they want to. It’s about choice. It’s not about cargo-culting everything into a single philosophy. This is why Baseimage-docker is extremely small and minimalist (only 6 MB of memory overhead), flexible and thoroughly documented. Baseimage-docker is *not* about advocating treating Docker as heavyweight VMs.

    Looking forward to hearing your thoughts.

  6. Christopher Kuttruff

    ‘When they start using Docker, people often ask: “How do I get inside my containers?” and other people will tell them “Run an SSH server in your containers!”’

    You’re setting up a straw man here… this is certainly not the only reason why one would want to run ssh. If you’re testing an application’s ssh config, or utilizing ssh as part of a configuration management workflow, it makes perfect sense to run sshd in your container to maintain as much consistency as possible when testing.

  7. Flo

    Great post. This is one more point for the servers-as-pets vs servers-as-cattle discussion (servers = containers for the sake of this argument), where a server should be closed off completely and the necessary data should flow out of it.

    If anything is wrong with the server, you kill it and put another one in place. If that problem comes up again, you check your metrics to see if you have all the data you need to debug it; if not, you add that data into the next version of the server, deploy, and debug.

    If you need to get access into the server during production, there is probably something wrong with the metrics setup for your application, and that should be fixed first, especially as it allows for longer-term solutions like automated killing of servers based on your metrics.

    It’s a shift in mentality that is easier for some and harder for others, but it needs to happen.

  8. Niclas

    What about service discovery? A lot of people are using service discovery for each container by running a separate process inside that container, e.g. a serf or consul agent. Would that be “ok”? It does not provide a separate service (like sshd), it just serves as a helper for service discovery.

  9. Sergey Porfiriev

    nsenter is provided in stock Fedora 20.
    For Ubuntu, get https://github.com/jpetazzo/nsenter; it works great.

    docker should add “docker enter container_name_or_ID”

    for now I wrote a script to ease nsenter usage

    $ cat docker-enter
    #!/bin/sh
    if [ ! $1 ]; then
        echo Usage: $0 container_name_or_ID
        echo Enter a running docker container
        echo
        echo Running dockers:
        docker ps
        exit
    fi

    PID=$(docker inspect --format {{.State.Pid}} $1)

    if [ ! $PID ]; then
        echo No such container
        exit 1
    fi

    sudo nsenter --target $PID --mount --uts --ipc --net --pid

  10. Michael Schuerig

    > Combining those two steps into a single command is left as an exercise for the reader.

    Here you go:
    https://gist.github.com/mschuerig/6a2ffc07288b1b96fed9

  11. James Mills

    Yes, +1, nice write-up! I suppose I should start packaging up nsenter for the CRUX Docker package(s) I maintain 🙂 I actually haven’t used nsenter yet, but I do use volumes as much as possible!

  12. Ervin

    Another option to log in to a container would be using lxc-attach:
    - get the full ID of the container with docker ps --no-trunc and copy it
    - add DOCKER_OPTS="-e lxc" to /etc/default/docker and restart the service/machine
    - run lxc-attach -n <full_container_ID>

  13. Felix

    The debug section should be part of the docker tutorial. Nsenter could even be integrated into the docker CLI (docker debug $container_id).

    @pavel: Cron and syslog could run in specialized containers.
    About zombie processes I do not know – is this even relevant if your container runs a single process?

  14. xr09

    Can’t you do something like “docker enter VMID” like in openvz? vzctl enter 100?

  15. Phil Whelan

    Hi Jerome,

    http://jpetazzo.github.io/2014/03/23/lxc-attach-nsinit-nsenter-docker-0-9/
    Here ^^^ you said that “According to Michael Crosby, it is even better to use nsinit.”

    I assume from the above that nsenter is now the better option. How do these two differ? Is nsinit still relevant in this scenario?

    • Jerome Petazzoni

      There is a big difference between nsinit and nsenter; I didn’t realize it when I wrote the blog post mentioning lxc-attach, nsinit, and nsenter. (Maybe I should update it, or write another, or… well!)

      nsinit will set up a new process in the container. The new process will:
      – be in the appropriate namespaces;
      – be in the appropriate cgroups;
      – relinquish capabilities (unless the container is privileged).

      nsenter, however, will just enter namespaces (as its name implies!) but it will not relinquish capabilities nor place the process in the right cgroups.

      “But! Isn’t that a Big Problem?” No! Quite the contrary. In the examples listed in my post, the only case where I advise using nsenter is for debugging. By keeping the new process outside of the container’s cgroup, we make sure that its memory usage is not “charged” against the container. If we start a heavy debugger needing gobs of RAM, we won’t cause the container to go out of memory. Likewise, if we want to use some features requiring extended privileges, we’ll have them, since nsenter didn’t drop capabilities.

      Nsinit is still relevant if you want something “like SSH but without the overhead of SSH”; i.e. for some very optimized VPS setup, for instance.

      To be fair, I believe that nsenter is great when you operate your own Docker hosts, and you want something more powerful, unencumbered by restrictions. nsinit is better if you provide Docker-as-a-Service to others, and you want to make sure that if they drop into containers, they’re not creating processes with elevated privileges, or escaping the resource accounting system.

      • Kenneth Nagin

        I’m curious about your last statement about nsinit and Docker-as-a-Service. Suppose you have a cloud service that hosts Docker containers, e.g. OpenStack nova-docker. The user does not have access to the Docker host, only to their Docker containers. How does one enable them to debug their Docker containers remotely without including an sshd in their container?

        –Ken

  16. Andy Nemzek

    Any chance that an nsenter-like feature will make its way into the docker interface itself?

    The best way to make sure people use a tool the way you want them to is to make sure the right way to use it is easier than the wrong way 🙂 Got a feeling that this will be the case with docker and ssh. I’m guessing most people will probably not see the nsenter alternative as the ‘easier way’.

  17. Mark Duncan

    You can create a function to make nsenter a little more convenient to use with Docker.

    `nsenter-docker() { nsenter --target $(docker inspect --format {{.State.Pid}} $1) --mount --uts --ipc --net --pid; }`

    Run that (or add to your shell startup script) and then you can just run `nsenter-docker [container name]`

    • Mark Duncan

      It seems like the function got cut off. Let me try again.

      nsenter-docker() { nsenter --target $(docker inspect --format {{.State.Pid}} $1) --mount --uts --ipc --net --pid; }

    • Mark Duncan

      Or even better, I see that a docker-enter command was added to nsenter. I should learn to read the rest of the comments before replying.

  18. Felipe

    I have multiple containers that create log files in a /log volume, each container with its own volume. Is there a way to have a container read logs from each of these containers all at once in a –volumes-from kind of way? As of docker 1.0.0, using –volumes-from multiple times for containers that expose the same volumes causes the first match to be used without warnings. Apart from mounting the /var/lib/docker/vfs… volumes directly into the new container into different paths, I couldn’t find a way to read logs from more than one of these containers at once.

  20. Mark

    Created a helper script to attach to a Docker container as per the instructions in this blog post: https://gist.github.com/miki725/4ec0a63733248e0377cc

  22. Dashamir Hoxha

    For me, this one works well:

    chroot /var/lib/docker/containers/2465790aa2c4*/root/

    Here, 2465790aa2c4 is the short ID of the running container (as displayed by docker ps), followed by a star.

  23. Johannes

    “docker exec” could be another built-in alternative to running ssh. It would be nice to have, though: https://github.com/docker/docker/pull/7409

    • Eric Ongerth

      Hooray, ‘docker exec’ made it into docker 1.3 in October. Sweet!

  24. tobias schaber

    Hi!
    “The SSH server is pretty safe,…”
    Even if I wouldn’t have believed that a few days ago, Shellshock set us straight.

    Kind regards

  25. Geoff Flarity

    Wow. Thanks for making this, but I was shocked this functionality wasn’t already part of the docker command line in the first place. Also, what’s with setting the PID environment variable? It should just take the container ID as a param. I agree that debugging is THE use case for this, and it’s a pretty important use case.

  26. Eric Ongerth

    FYI to anyone finding this in late 2014 or afterward, ‘docker exec’ takes care of this need now:
    https://docs.docker.com/reference/commandline/cli/#exec

  27. rdamian

    I know this is an old post, so pardon me, but:
    What about allowing my IDE (PyCharm, Sublime Text) to access the Python interpreter inside a dev container? I’ve searched the web and there seems to be no answer to this question.
    Thanks

  28. Alex Santos

    Probably a bit out-of-scope here, but I am wondering if there is any way to prevent people from docker exec’ing into a container (essentially getting filesystem access and getting to peek at all file and directory contents) but at the same time allow them to run that container?

  30. Gendalph

    Well, there’s another way to make the trick of SSH’ing into the host work: force a wrapper command via sshd_config. It’s trickier and I haven’t tested it yet, but it allows for a much wider range of configuration:
    1. Add a new user (group) – someuser
    2. Run visudo and configure limited sudo with NOPASSWD key
    ## allow someuser and members of somegroup to run docker-enter as root without entering password
    ## without this you would need to use some workarounds to pass sudo auth,
    ## like requesting a pty from ssh or run sudo with either the --askpass or --stdin option – both work, both leave the password visible
    someuser ALL = (root) NOPASSWD: docker-enter
    %somegroup ALL = (root) NOPASSWD: docker-enter
    3. make a wrapper script (some sanity checks in bash) that takes a container id or name and runs $(sudo /usr/bin/docker-enter "${SSH_ORIGINAL_COMMAND}")

    #!/bin/bash
    ##
    ## !! THIS IS A ROUGH SKETCH !!
    ##

    ID="$SSH_ORIGINAL_COMMAND"
    ## check container id and name
    ## you might want to fix second regex to broaden container naming rules
    if [[ $ID =~ ^[a-z0-9]+$ || $ID =~ ^[-_a-z]+$ ]]; then
        ## pass it to docker-enter
        sudo /usr/bin/docker-enter "$ID"
        exit 0
    else
        echo 'Invalid container ID or name.'
        ## showing a list of currently running containers might be a security flaw
        ## echo -e 'Currently running containers:'
        ## docker ps
        exit 1
    fi

    4. Open /etc/ssh/sshd_config and append (I repeat: APPEND, Match blocks have to come at the very end of your config, otherwise you WILL break your SSHd)
    Match User someuser
    ForceCommand /usr/local/bin/docker-enter-wrapper
    AllowAgentForwarding no
    AllowTcpForwarding no
    AllowStreamLocalForwarding no
    X11Forwarding no

    5. Reload sshd
    6. ssh someuser@your-server.tld some_container
    After entering a password you should end up being attached to some_container. Then again, you’re free to use any wrapper with any restrictions.

  31. Lewis Crawford

    No mention of docker exec for accessing a running container:
    e.g.
    docker exec -ti [container name] /bin/bash
    negates the need for sshd running.

  32. satheesh

    Hi, I want to use the volume on my local server. How can I achieve that?

  33. Costa Shapiro

    I'm currently trying to set up a remote (really remote) Docker dev env with PyCharm. As of today, PyCharm doesn't seem to be able to set up a remote (really remote) Python interpreter other than via ssh. It appears that I don't have much choice but to use an ssh server inside my Python service container; nsenter or whatever will not help me here.

