Containers provide lightweight virtualization that isolates processes and resources without the instruction interpretation mechanisms and other complexities of full virtualization. This article walks through the container tool Linux Containers (lxc) step by step and demonstrates how to set up and use containers.
Containers effectively partition the resources managed by a single operating system into isolated groups, to better balance conflicting resource demands between those groups. In contrast to virtualization, this requires neither instruction-level emulation nor just-in-time compilation: containers run instructions natively on the CPU without any special interpretation mechanism. It also avoids the complexities of paravirtualization and system-call thunking.
By providing a way to create and enter containers, the operating system lets applications run as if they were on separate machines while still sharing many underlying resources. For example, the page cache for common files (such as glibc) can be shared effectively, because all containers use the same kernel and, depending on the container configuration, often the same libc library. This sharing can frequently extend to other files in directories that are never written to.
While providing isolation, containers save cost by sharing these resources, which makes them far cheaper than true virtualization.
Container technology has existed for a long time. For example, Solaris Zones and BSD jails are containers on non-Linux operating systems. Container technologies on Linux also have a rich heritage, such as Linux-VServer, OpenVZ, and FreeVPS. Although these technologies are mature, they have not yet merged their container support into the mainstream Linux kernel. (For more on these technologies, see the References section.)
In contrast, the Linux Resource Containers project (see References) implements containers by contributing to the mainstream Linux kernel. These contributions may also be useful to the mature Linux container solutions, providing a common backend for those more mature projects. This article gives a brief overview of how to use the tools created by the lxc project.
To get the most out of this article, you should be comfortable running programs from the command line, such as make, gcc, and patch, and with extracting tarballs (*.tar.gz files).
Obtain, build, and install lxc
The lxc project consists of a Linux kernel patch and some userspace tools. The userspace tools use the new kernel features added by the patch and offer a simplified set of tools for maintaining containers.
Before using lxc, you must download the Linux kernel source, apply the appropriate lxc patch, and then build, install, and boot it. Then download, build, and install the lxc tools.
I used a patched Linux 2.6.27 kernel (see References). Although the lxc patch for Linux 2.6.27 may not apply to the kernel source of your favorite distribution, Linux versions later than 2.6.27 may already include most of the functionality the patch provides, so using the latest patch and mainline kernel source is strongly recommended. As an alternative to downloading the kernel source and applying the patch, you can use git to get the code:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/daveh/linux-2.6-lxc.git
At kernelnewbies.org you can find instructions for patching, configuring, building, installing, and booting a kernel (see References).
lxc requires some specific kernel configuration. The easiest way to configure the kernel for lxc is to run make menuconfig and then select Container support. Depending on the features your kernel supports, this selects a set of other configuration options.
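Once the kernel is built and booted, you can sanity-check the configuration. A minimal sketch, assuming your kernel exposes its configuration at /proc/config.gz (exact option names depend on the patch version; CONFIG_NAMESPACES and CONFIG_CGROUPS are representative):

# Check for container-related options in the running kernel
zcat /proc/config.gz | grep -E 'CONFIG_NAMESPACES|CONFIG_CGROUPS'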
A usable lxc environment
In addition to a container-enabled kernel, you need some tools to start and manage containers. The container management tools used in this article come from liblxc (see References for the link; you could also use libvirt). This section discusses:
- The liblxc tools
- The iproute2 tool
- How to configure the network
- How to populate a container filesystem (build a custom Debian container or run an SSH container)
- How to connect to the container (SSH, VNC, VT: tty, VT: X)
Tool: liblxc
Download and extract liblxc (see References), and then, from the liblxc directory, run:
./configure --prefix=/
make
make install
If you prefer building from source RPMs, you can download one from the web (see References).
Tool: iproute2
To manage network interfaces in containers, you need iproute2 version 2.6.26 or later (see References). If your Linux distribution does not provide this package, download, configure, and install it according to its instructions.
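To check which version you already have, the ip utility can print it:

# Print the installed iproute2 version; 2.6.26 or later is required
ip -V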
Configure the network
Network access is another key part of many useful containers. Bridging (connecting multiple Ethernet segments so that they appear as a single Ethernet segment) is currently the best way to connect a container to the network. To prepare for lxc, we create a bridge (see References) and use it to connect both a real network interface and the container's network interface.
Create the br0 bridge:

brctl addbr br0
brctl setfd br0 0
Bring the bridge interface up with the IP address of an existing network interface (10.0.2.15 in this example): ifconfig br0 10.0.2.15 promisc up. Then add the existing network interface (eth0 in this example) to the bridge and drop its direct association with its IP address:

brctl addif br0 eth0
ifconfig eth0 0.0.0.0 up

All interfaces added to bridge br0 will respond to that IP address. Finally, make sure the default route sends packets to the gateway: route add -net default gw 10.0.2.2 br0. Later, you will specify br0 as the link to the outside world.
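Putting the steps together, here is a minimal bridge-setup sketch; the addresses 10.0.2.15 and 10.0.2.2 and the interface eth0 are this example's values, so substitute your own:

#!/bin/sh
# Create the bridge and disable the forwarding delay
brctl addbr br0
brctl setfd br0 0
# Move the host's IP address onto the bridge
ifconfig br0 10.0.2.15 promisc up
# Add the physical interface to the bridge and clear its address
brctl addif br0 eth0
ifconfig eth0 0.0.0.0 up
# Restore the default route through the bridge
route add -net default gw 10.0.2.2 br0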
Populate the container filesystem
Besides a network, containers often need their own filesystem. There are several ways to populate the container filesystem, depending on your needs. I discuss two of them:
- Build a custom Debian container
- Run an SSH container
Using the debootstrap command to build a custom Debian container is simple:
debootstrap sid rootfs http://debian.osuosl.org/debian/
To build a large number of containers, you can save time by first downloading the packages into a tarball, for example: debootstrap --make-tarball sid.packages.tgz sid http://debian.osuosl.org/debian/. This produces a .tar file of about 71 MB (52 MB compressed), while a finished root directory takes about 200 MB. Then create the root directory in rootfs: debootstrap --unpack-tarball sid.packages.tgz sid rootfs. (The debootstrap man page has more information on building smaller or more tailored containers.)
This produces an environment that is highly redundant with the host's (see References).
Running an SSH container greatly reduces the disk space taken by the container filesystem. For example, this method can run multiple SSH daemons, each on port 22 of a different container, using only a few kilobytes (see the example below). The container achieves this with a sparse root directory: read-only bind mounts of /bin, /sbin, and /lib, for example, share the content needed by the sshd package from the existing Linux system. A network namespace is used, and only the essential read/write content is created.
The way such a lightweight container is generated is essentially the same as generating a chroot environment; the difference is that read-only bind mounts and namespaces are used to strengthen the isolation of the chroot environment until it becomes an effective container.
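As a sketch of the read-only bind mounts just described (the paths are illustrative, and on kernels of this era a bind mount is made read-only by remounting it):

# Share the host's binaries and libraries with the container rootfs, read-only
mkdir -p rootfs/bin rootfs/sbin rootfs/lib
for d in bin sbin lib; do
    mount --bind /$d rootfs/$d
    mount -o remount,ro,bind /$d rootfs/$d
done
# Writable pieces (such as /etc and /var) are created fresh for the container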
Next, you need to select a method to connect to the container.
Connect to the container
The next step is to connect to the container. Depending on how the container is configured, there are several options:
- SSH
- VNC (GUI)
- VT: tty (text)
- VT: X (GUI)
Connecting through SSH is enough if you do not need a GUI for the container; a simple SSH connection will do (see "Run an SSH container" above). The advantage of this method is that IP addressing lets you create any number of containers.
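For example, assuming the container was given the address 10.0.2.16 used in the configuration later in this article:

# Log in to the container over the bridged network
ssh root@10.0.2.16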
If an SSH connection takes a long time to reach the password prompt, the avahi multicast DNS/service discovery daemon may be timing out during DNS lookups.
Connecting through Virtual Network Computing (VNC) lets you add a GUI to the container.
Use vnc4server to start an X server that serves only VNC clients. You need to install vnc4server and run it from the container's /etc/rc.local file, like this: echo '/usr/bin/vnc4server :0 -geometry 1024x768 -depth 24' >> rootfs/etc/rc.local. When the container starts, it creates a 1024x768 X screen with 24-bit color depth. Connecting is then simple, as shown below:
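A sketch of the client side, assuming a VNC viewer such as vncviewer and the container address 10.0.2.16 from this article's example configuration:

# Connect to display :0 of the container's VNC server
vncviewer 10.0.2.16:0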
Connecting through VT: tty (text) is useful if the container shares ttys with its host. In this case, you can use Linux virtual terminals (VTs) to connect to the container. A simple use of a VT is to log in on one of the tty devices that correspond to a Linux VT. The login process is called getty. To use VT 8:
echo '8:2345:respawn:/sbin/getty 38400 tty8' >> rootfs/etc/inittab
Once the container starts, it will run getty on tty8, allowing users to log in to the container. A similar trick can be used to respawn a container with lxc.
This method does not give the container a graphical interface. Also, because only one process can connect to tty8 at a time, enabling multiple containers requires further configuration.
Connecting through VT: X lets you run a GUI. To run the GNOME Display Manager (GDM) on VT 9, edit rootfs/usr/share/gdm/defaults.conf, replacing FirstVT=7 with FirstVT=9 and VTAllocation=true with VTAllocation=false.
That way, GDM takes up only one of the scarce Linux virtual terminals.
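A sketch of making those edits with sed (the file path follows the GDM layout assumed above):

# Move GDM to VT 9 and stop it from allocating VTs dynamically
sed -i -e 's/FirstVT=7/FirstVT=9/' \
       -e 's/VTAllocation=true/VTAllocation=false/' \
       rootfs/usr/share/gdm/defaults.conf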
Run the lxc tools
Now that you are running an appropriate kernel, have the lxc utilities installed, and have a usable environment, you are ready to manage instances of that environment. (Note: much of this is covered in greater detail in the lxc README.)
lxc uses the cgroup filesystem to manage containers, so before using lxc you must mount it: mount -t cgroup cgroup /cgroup. The cgroup filesystem can be mounted anywhere; lxc uses the first cgroup filesystem mounted in /etc/mtab.
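For example, to create the mount point and mount the filesystem (the mount point /cgroup is simply this article's choice):

# Create the mount point and mount the cgroup filesystem
mkdir -p /cgroup
mount -t cgroup cgroup /cgroup
# An /etc/fstab entry can make this permanent:
# cgroup  /cgroup  cgroup  defaults  0 0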
The rest of this article covers some lxc basics and miscellany, and discusses low-level access.
lxc basics
For the basics of using the lxc tools, we will look at how to:
- Create a container
- Get (or list) information about existing containers
- Start the System and Application container
- Send signals to processes running in the container
- Pause, resume, stop, and destroy containers
Creating a container associates a name with a configuration file. The name is used to manage the container:
lxc-create -n name -f configfile
This allows multiple containers to use the same configuration file simultaneously. In the configuration file you specify the container's attributes, such as its hostname, networking, root filesystem, and fstab. After running the lxc-sshd script (which creates a configuration), the SSH container's configuration looks like this:
lxc.utsname = my_ssh_container
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 10.0.2.16/24
lxc.network.name = eth0
lxc.mount = ./fstab
lxc.rootfs = ./rootfs
Regardless of the configuration file, containers started with lxc have their own view of system processes, their own mount tree, and their own view of available interprocess communication (IPC) resources.
In addition, when a container starts, any resource type not mentioned in the configuration is assumed to be shared with the host. This lets the administrator concisely specify the key differences between the container and its host, and keeps configurations portable.
Listing information about existing containers is important for managing them. To display the state of a specific container:
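The lxc tools provide lxc-info for this:

# Print the container's state (for example, RUNNING or STOPPED)
lxc-info -n name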
To display the processes belonging to a container:
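Early versions of the lxc tools shipped an lxc-ps wrapper around ps for this; a sketch, noting that the exact flag syntax varied across releases:

# List the processes belonging to the container
lxc-ps --name name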
Start
lxc distinguishes two container types: system containers and application containers. A system container resembles a virtual machine: compared with true virtualization it offers less isolation, but also less overhead, the immediate reason being that all containers run on the same Linux kernel. Like a virtual machine, a system container starts in the same place a Linux distribution does, by running the init program:
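A sketch using lxc-start, which runs a given command inside the container and defaults to /sbin/init:

# Boot the system container; with no command, lxc-start runs /sbin/init
lxc-start -n name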
In contrast to a system container, an application container only creates the separate namespaces needed to isolate a single application. To start an application container:
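The lxc tools provide lxc-execute for this, where cmd is the application to run:

# Run a single application inside its own namespaces
lxc-execute -n name cmd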
Send signals
To send a signal to all processes running in a container:
lxc-kill -n name -s SIGNAL
Pause
Pausing a container is conceptually similar to sending the SIGSTOP signal to all of its processes. However, sending spurious SIGSTOP signals can confuse some programs, so lxc instead uses the Linux process freezer through the cgroup interface:
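The corresponding command is lxc-freeze:

# Freeze every process in the container via the cgroup freezer
lxc-freeze -n name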
Resume
To thaw (resume) a frozen container:
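The counterpart command is lxc-unfreeze:

# Thaw the container's processes
lxc-unfreeze -n name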
Stop
Stopping a container kills all of the processes started inside it and cleans the container up:
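This is done with lxc-stop:

# Kill all of the container's processes and clean up
lxc-stop -n name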
Destroy
Destroying a container deletes the configuration file and metadata that lxc-create associated with its name:
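This is done with lxc-destroy:

# Remove the configuration and metadata registered under the name
lxc-destroy -n name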
Miscellaneous
Below is some other information you may find useful (some of it related to monitoring).
To view and adjust a container's priority:
lxc-priority -n name
lxc-priority -n name -p priority
To continuously observe changes in a container's state and priority:
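This uses lxc-monitor:

# Report state and priority changes as they happen
lxc-monitor -n name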
Press Ctrl-C to stop monitoring the container.
You can also wait for the container to enter one of a set of states, joined with |:
lxc-wait -n name -s states
To wait for any state except RUNNING:
lxc-wait -n name -s 'STOPPED|STARTING|STOPPING|ABORTING|FREEZING|FROZEN'
Of course, this example would likely return immediately. Barring unexpected errors, you should expect lxc-wait to return only when the container has entered one of the given states.
Low-level access
lxc uses the cgroup filesystem to manage containers, and parts of the cgroup filesystem can be read and manipulated through lxc. To manage each container's CPU usage, you can read and adjust the container's cpu.shares, as follows:
lxc-cgroup -n name cpu.shares
lxc-cgroup -n name cpu.shares howmany