Why you don't need to run SSHd in your Docker container
When people start using Docker, they often ask, "How do I get inside my container?", and others answer, "Run an SSH server in your container." This post will show you that you do not need to run an SSHd daemon to get inside your containers — unless, of course, your container is an SSH server.
At first glance, running an SSH server looks like a great idea, because it gives you an easy way into the container. Almost everyone in our industry has used SSH at least once; most of us use it every day, are familiar with public and private keys, password-less logins, and key agents, and sometimes even use less common features such as port forwarding. Given that, it is not surprising that people advise you to run SSH inside your containers. But you should think twice.
Suppose you are building a Docker image for a Redis server or a Java web service. I would ask you the following questions:
What do you need SSH for? Usually you want to take backups, check logs, restart the process, or adjust the configuration; you may also want to debug the server with gdb, strace, or similar tools. Let's see how to do all of these things without SSH.
How do you manage your keys and passwords? Generally you either bake them into the image or put them in a volume. What do you do when you need to update those keys or passwords? If they are baked into the image, you have to rebuild the image, redeploy it, and restart the container. That's not the end of the world, but it's not elegant either. Putting them in a volume and managing them through that volume is better. It works, but it has serious drawbacks: you must make sure the container does not have write access to the volume, otherwise it could corrupt the keys (and lock you out of the container later), and it gets worse if you share one volume across multiple containers. If we didn't need SSH, we wouldn't have to worry about any of this.
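As a minimal sketch of the volume approach (the image name and host path are made up for illustration), the credentials could be bind-mounted read-only so the container cannot overwrite them:

# Keep SSH credentials on the host and mount them read-only into the container
docker run -d -v /opt/ssh-keys:/etc/ssh/authorized-keys:ro my-sshd-image

Read-only mounts remove the "container corrupts its own keys" failure mode, but you still have to rotate and distribute the keys yourself.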
How do you manage security upgrades? The SSH server is pretty solid, but security issues still come up, and when they do you will have to upgrade every container that runs SSH. That means a lot of rebuilding and restarting. In other words, even if all you have is a tiny memcached service, you still have to keep up with security updates in a timely manner, or one small crack can bring down the whole dike. So again: if we didn't need SSH, we wouldn't have to worry about this.
Do you need to "Install only one SSH server? Of course not. You need to install a Process Manager, such as Monit or Supervisor. This is because Docker only monitors one process. If you need to run multiple processes, you must add a layer above to view their applications. In other words, you are complicate a simple problem. If your application stops (normally exits or crashes), you must view the process management logs instead of simply viewing the information provided by Docker.
You can put your application in a container, but should you also be the one managing access policies and security restrictions? In a small organization that doesn't matter much. In a larger one, however, if you are the person setting up the application container, someone else is probably responsible for defining remote access policies. Your company may have strict policies defining who can get access, how, and what kind of audit trail is required. In that case, you will not be allowed to just throw an SSH server into your container.
But how do I... back up my data?
Your data should live in a volume. You can then run another container with the --volumes-from option and share that volume with the first container. The benefit: if you need to install a new tool (such as s3cmd) to archive your backups or transfer the data to some permanent storage, you can do that in this dedicated backup container rather than in the main service container. That keeps things clean.
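A minimal sketch of what this could look like (the image name and data path are hypothetical):

# Run the service with its data directory in a volume (hypothetical image "fooservice")
CID=$(docker run -d -v /var/lib/fooservice fooservice)
# Later, run a throwaway backup container that shares the same volume
docker run --rm --volumes-from $CID ubuntu tar czf - /var/lib/fooservice > backup.tar.gz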
Check logs?
Use a volume again! If you write all your logs under a specific directory, and that directory is a volume, you can start another "log inspection" container (with --volumes-from, remember?) and do whatever you need in there. If you need special tools (or just ack-grep), you can install them in that container and keep the main container's environment pristine.
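For instance (again with made-up names and paths):

# Inspect the service's logs from a separate container sharing the log volume
docker run --rm -ti --volumes-from $CID ubuntu bash
# ...then inside that container: less /var/log/fooservice/fooservice.log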
Restart my service?
Basically, all services can be restarted with signals. When you run /etc/init.d/foo restart or service foo restart, they ultimately send a specific signal to the process. You can send that signal with docker kill -s <signal>. Some services don't listen for signals but accept commands on a specific socket instead. If it is a TCP socket, you can just connect over the network. If it is a UNIX socket, you can use a volume once more: arrange for the service's control socket to live in a specific directory, make that directory a volume, then start a new container with access to that volume, and you can use the UNIX socket.
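A hedged example of the signal approach (which signal is appropriate depends on the service; SIGHUP is a common choice for "reload"):

# Ask the service to reload or restart by sending it a signal
docker kill -s HUP <container_name_or_ID>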
"But this is too complicated !" -Actually, this is not the case. Assume that the servcie named foo creates a socket in/var/run/foo. sock, and you need to run fooctl restart to restart. You only need to use-v/var/run (or add VOLUME/var/run to the Docker file) to start the service. When you want to restart, use the -- volumes-from option and reload the command to start the same image. Like this:
# Starting the service
CID=$(docker run -d -v /var/run fooservice)
# Restarting the service with a sidekick container
docker run --volumes-from $CID fooservice fooctl restart
Easy!
Modify my configuration file?
If you are making a durable configuration change, you should really put it in the image, because if you start another container the service will still run with the old configuration and your change will be lost. So no, no SSH access for that! "But I need to change my configuration while the service is running, for example to add a new virtual host!" In that case, you should use... wait for it... a volume! The configuration should live in a volume, and that volume should be shared with a dedicated "config editor" container. You can run anything you like in that container: SSH plus your favorite editor, a web service accepting API calls, a cron job pulling the information from some external source, and so on. Again, the concerns are separated: one container runs the service, another handles configuration updates. "But I'm only making temporary changes because I'm experimenting with different values!" In that case, see the next section!
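A minimal sketch of the idea (all names, paths, and the reload signal are assumptions for illustration):

# The service keeps its configuration in a volume
CID=$(docker run -d -v /etc/fooservice fooservice)
# A separate "config editor" container edits the live configuration
docker run --rm -ti --volumes-from $CID ubuntu vi /etc/fooservice/foo.conf
# Then ask the service to reload its configuration
docker kill -s HUP $CID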
Debug my application?
This may be the only scenario where you genuinely need to get inside the container: to run gdb or strace, tweak the configuration on the fly, and so on. In that case, you want nsenter.
Introduction to nsenter
nsenter is a small tool for entering namespaces. Technically, it can enter an existing namespace, or spawn a process in a new set of namespaces. "What are namespaces?" They are one of the essential building blocks of containers. The short version: with nsenter you can get into an existing container, even though that container runs neither SSH nor any special-purpose daemon.
Where can I obtain nsenter?
Check out jpetazzo/nsenter on GitHub. A simple way to install it:
docker run -v /usr/local/bin:/target jpetazzo/nsenter
This installs nsenter in /usr/local/bin, and you can use it immediately.
nsenter might also be available in your distribution (in the util-linux package).
How to use it?
First, figure out the PID of the container you want to enter:
PID=$(docker inspect --format '{{ .State.Pid }}' <container_name_or_ID>)
Then enter the container:
nsenter --target $PID --mount --uts --ipc --net --pid
You will get a shell inside the container. If you want to run a specific script or program in an automated way, add it as arguments to nsenter. It works a bit like chroot, except that it targets a container instead of a plain directory.
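For example (assuming the container actually contains these binaries):

# Run a one-off command inside the container instead of an interactive shell
nsenter --target $PID --mount --uts --ipc --net --pid -- ps aux
# Or start a specific shell explicitly
nsenter --target $PID --mount --uts --ipc --net --pid -- /bin/bash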
How about remote access?
If you need to enter a container from a remote host, there are (at least) two methods:
SSH into the Docker host and use nsenter;
SSH into the Docker host with a special key that is only authorized to run a specific command (namely, nsenter).
The first method is fairly simple, but it requires root access to the Docker host (not great from a security standpoint). The second uses the command= feature of SSH's authorized_keys file. You are probably familiar with the "classical" authorized_keys file, which looks like this:
ssh-rsa AAAAB3N...QOID= jpetazzo@tarrasque
(The actual key is much longer, of course, and usually spans several lines.) You can also force a specific command: if you want to let someone check the available memory on your machine using their SSH key, but without handing over full shell access, you can put this in the authorized_keys file:
Command = "free" ssh-rsa AAAAB3N... QOID = jpetazzo @ tarrasque
Now, when that specific key is used to connect, instead of getting a shell, the free command is executed, and nothing else can be done. (You probably also want to add no-port-forwarding; see the authorized_keys(5) manpage for details.) The whole point of this mechanism is separation of responsibilities. Alice puts services inside containers; she doesn't deal with remote access, logins, and so on. Betty adds the SSH layer, to be used only in special cases (weird debugging problems). Charlotte takes care of logging. And so on.
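Applied to containers, a hedged sketch could look like this (the key, user, container name, and script path are all made up); the forced command is a small wrapper script on the Docker host that nsenters one specific container:

command="/usr/local/bin/enter-fooservice.sh",no-port-forwarding ssh-rsa AAAAB3N...QOID= betty@laptop

# /usr/local/bin/enter-fooservice.sh on the Docker host:
#!/bin/sh
PID=$(docker inspect --format '{{ .State.Pid }}' fooservice)
exec nsenter --target $PID --mount --uts --ipc --net --pid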
Summary
So, is it really Wrong (with a capital W) to run an SSH server in a container? Honestly, it's not that bad. It can even be extremely convenient when you don't have access to the Docker host but still need a shell inside a container. But as we've seen, there are plenty of ways to get everything we usually want from an SSH server without running one in the container, with a much cleaner architecture. Docker lets you use whatever workflow suits you best. But before defaulting to "my container is really a small VPS", be aware that other solutions exist, so you can make an informed decision.