Virtualization through a Linux container

Brief introduction
The Linux container is a lightweight "virtualization" approach for running multiple virtual appliances (containers) concurrently on a single control host. Workloads that run this way are commonly described as "containerized."
Linux containers provide OS-level virtualization, in which the kernel controls the isolated containers. A container is isolated through kernel control groups (cgroups) and kernel namespaces. In contrast, full virtualization solutions such as Xen and KVM emulate a complete hardware environment for each virtual machine.

Running the Apache Web server is a useful example. With a hypervisor such as Xen or KVM, users install SUSE Linux Enterprise Server and Apache in a virtual machine. The virtual machine boots in the same way as any physical computer: a power-on self-test (POST) hands control to the bootloader (GRUB), the bootloader loads the Linux kernel, and the kernel starts the init process, which launches all services and daemons, such as Apache, for the configured runlevel.

A Linux container running the Apache Web server on SUSE Linux Enterprise Server is very different from the hypervisor example. Far fewer packages need to be installed inside the container; the most obvious package that is not needed is the Linux kernel. The boot process also differs: in a Linux container, booting starts directly with the init process, which launches all services (such as networking) and daemons (such as Apache) for the configured runlevel. The container does not need a hardware POST, a bootloader (GRUB), or a kernel load in order to run.

There are two major benefits to using Linux containers. First, because the POST, bootloader, and kernel are skipped during the container boot process, a container starts very quickly. Second, a container uses fewer physical server resources than a hypervisor-run virtual machine, which means more containers can be started on a single physical system.

A disadvantage of Linux containers is that every container uses the kernel of the host system. In other words, if the host runs SUSE Linux Enterprise Server, a container cannot run Microsoft Windows.

This article covers the important terminology related to Linux containers, describes the architecture of Linux containers in SUSE Linux Enterprise Server 11 SP3, discusses how Linux containers are used, and offers some insights into the future of Linux containers in SUSE Linux Enterprise Server.

Terms

    • chroot - a change root (chroot, or "chroot jail") is a part of the file system that is isolated from the rest of the file system. The chroot command changes the apparent root of the file system for a process; programs executed in such a "chroot jail" cannot access files outside the specified directory tree (see the sketch after this list).
    • cgroups - kernel control groups (commonly called "cgroups") are a kernel feature for aggregating or partitioning tasks (processes) and all of their children into hierarchically organized groups in order to isolate resources.
    • container - a "virtual machine" on the host server that can run any Linux system, for example openSUSE, SUSE Linux Enterprise Desktop, or SUSE Linux Enterprise Server.
    • container name - the name of a container, used by the lxc commands.
    • kernel namespaces - a kernel feature that isolates certain resources (such as file systems, networks, and users) for a set of processes.
    • Linux container host server - a system that runs the Linux container framework, hosts the containers, and provides administrative control over them through cgroups.
    • resource management - the cgroup subsystems provide parameters that can be set to control how a container uses system resources such as memory, disk I/O, and network traffic.
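
As a simple illustration of a chroot jail, the following sketch confines a shell to a directory tree; the path is an assumption taken from the container layout discussed later:

    # Confine a shell to an assumed container root (path is illustrative)
    chroot /var/lib/lxc/web01/rootfs /bin/bash
    # Inside this shell, /var/lib/lxc/web01/rootfs appears as "/" and files
    # outside that directory tree cannot be reached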

Architecture overview

Linux containers do not require a hypervisor. This differs from a type 1 or type 2 hypervisor, where a hypervisor layer sits above the hardware. Conceptually, a Linux container can be seen as improved chroot technology that leverages additional Linux kernel features to create a powerful but lightweight virtualization option, isolating almost everything in the container from the Linux container host server.

The chroot environment separates the file system so that the container appears to run at the root of its own file system, while that file system is actually stored in a directory on the Linux container host server. By default, SUSE Linux Enterprise Server 11 SP3 stores a container's file system in /var/lib/lxc/<container name>. You can also store the container file system in a virtual disk image; this is not the default way of storing the rootfs, but an advanced configuration option described in the lxc.conf man page.
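
As a minimal sketch of where a container's files live (the container name "web01" is an assumption), the default layout looks roughly like this:

    # Default backing store: a plain directory on the host
    ls /var/lib/lxc/web01
    #   config  rootfs/
    # An image-backed root would instead be referenced from the container's
    # configuration, for example with a line such as
    #   lxc.rootfs = /var/lib/lxc/web01/rootfs
    # as described in the lxc.conf man page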

The lxc manual pages mention using Linux containers as application containers or system containers. Currently, SUSE Linux Enterprise Server 11 SP3 only supports setting up system containers. In a system container, most of the SUSE Linux Enterprise Server operating system files are installed in the directory that becomes the root of the container. An application container contains only the files and libraries specific to the application you want to run in the container; all other files and libraries are used from the Linux container host file system. Setting up an application container is not as easy as setting up a system container, but it is a future goal for Linux containers on SUSE Linux Enterprise Server.

Networking can also be separated in a Linux container, which means that a container can have its own IP address. Network separation is accomplished with Linux bridging in SUSE Linux Enterprise Server, the same bridging technology used for Xen and KVM networking on SUSE Linux Enterprise Server. Use the brctl command to inspect and interact with the bridges on the Linux container host server.
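
The following sketch shows how the host bridge can be inspected with brctl and how a container might be attached to it; the container name, addresses, and configuration path are assumptions:

    # Inspect the bridges defined on the Linux container host server
    brctl show
    # Typical veth-to-bridge settings in a container's configuration file
    # (for example /var/lib/lxc/web01/config):
    #   lxc.network.type  = veth
    #   lxc.network.link  = br0
    #   lxc.network.flags = up
    #   lxc.network.ipv4  = 192.168.1.50/24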

The control groups are the most interesting component. They are the Linux kernel feature that gives Linux containers their resource management and control. Control groups are not specific to Linux containers; they were added in Linux kernel version 2.6.24. The first step in interacting with control groups is to mount one or more control group subsystems as a virtual file system and define an individual control group with standard file system commands. You can then start a Linux process (PID, or task) in the newly created cgroup or move an existing one into it. The cgroup subsystems can control many aspects of the processes assigned to a group: you can dedicate a specific device or a single CPU to a container, freeze and thaw the processes in a container, and collect CPU usage information for it. The resource management subsystems let you dynamically define the CPU, memory, and block device I/O available to the processes inside a container. All of this is done by writing parameters to files in the cgroup virtual file system. Using control groups directly can sound difficult, but the Linux container project makes them easier to use; see the "Getting Started with Linux containers" section.
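
As a sketch of the manual steps described above (the mount point and values are illustrative, and a distribution may already have the cgroup file system mounted), interacting with control groups directly looks roughly like this:

    # Mount the cpuset and memory subsystems as a virtual file system
    mount -t cgroup -o cpuset,memory cgroup /sys/fs/cgroup
    # Define a new control group
    mkdir /sys/fs/cgroup/demo
    # A cpuset group needs CPUs and memory nodes assigned before it can hold tasks
    echo 0 > /sys/fs/cgroup/demo/cpuset.cpus
    echo 0 > /sys/fs/cgroup/demo/cpuset.mems
    # Cap the group's memory at 256 MB
    echo 268435456 > /sys/fs/cgroup/demo/memory.limit_in_bytes
    # Move the current shell (and its future children) into the group
    echo $$ > /sys/fs/cgroup/demo/tasks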

When using Linux containers, you also need to fully understand one more architectural item: security. It is important to note that the kernel of the Linux container host server is shared by all containers. This means that an attacker could try to escalate privileges to the Linux container host server with a Linux system call attack from inside a container. There are several options in the community that help address this issue. One is the kernel security feature called seccomp, which stands for "secure computing". The Linux container community uses seccomp2 and the associated libseccomp2 library to create a sandbox around a container, restricting the system calls the container may use. The other two options are SELinux and AppArmor. All three options have been reviewed by SUSE engineering and are planned for future versions of SUSE Linux Enterprise Server.
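
As a purely hypothetical sketch of how the seccomp option is wired in (the configuration key and policy path are assumptions for an LXC build with seccomp support; the exact format is documented in the lxc.container.conf man page):

    # Hypothetical entry in a container's configuration file; the referenced
    # policy file lists the system calls the container may (or may not) make
    lxc.seccomp = /etc/lxc/web01.seccomp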

Getting Started with Linux containers
Start with a SUSE Linux Enterprise Server 11 SP3 system that is registered and fully patched. Installing a GUI such as GNOME makes it easy to use the YaST modules, but a GUI is not required. One important point: creating a Linux container requires a defined and usable repository. It is recommended that you register the Linux container host server so that the installation and update repositories are set up correctly and available when a container is created.

Linux containers require several packages to be installed on the Linux container host. Use YaST or zypper to install lxc, yast2-lxc, sles-lxcquick_en-pdf, and bridge-utils; YaST or zypper may add some related packages as well. Although sles-lxcquick_en-pdf is not essential, it contains the Linux containers Quick Start PDF, located at /usr/share/doc/manual/sles-lxcquick_en-pdf/sles-lxcquick_en.pdf.
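
For example, the packages can be installed from the command line with zypper (the package names are as listed above):

    # Install the Linux container packages and the bridge utilities
    zypper install lxc yast2-lxc sles-lxcquick_en-pdf bridge-utils
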
Set up the network bridge with the YaST network settings module. Typically, the default name for the first bridge on a host is "br0".
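
If you prefer to define the bridge outside of YaST, a host bridge on SUSE Linux Enterprise Server is described by a sysconfig file roughly like the following sketch (interface names and addresses are illustrative):

    # /etc/sysconfig/network/ifcfg-br0
    BOOTPROTO='static'
    IPADDR='192.168.1.10/24'
    STARTMODE='auto'
    BRIDGE='yes'
    BRIDGE_PORTS='eth0'
    BRIDGE_STP='off'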

SUSE has created a Linux container YaST module that allows you to easily create, delete, start, stop, and connect to containers. When you start the Linux container YaST module, it automatically reports the output of lxc-checkconfig. All features should be shown as enabled, except that the file capabilities feature is usually shown as disabled. If any errors are reported or anything appears in red text, you must resolve those issues before creating and starting the first container.

If you are more comfortable on the command line, Linux administrators can easily interact with any container using the "lxc-" commands. As mentioned earlier, the lxc-checkconfig command verifies that the control group virtual file system and subsystems are properly set up and that everything is configured correctly to run containers. The lxc-create command defines a container and installs SUSE Linux Enterprise Server into it based on a Linux container template. You can customize the template to add extra packages to be installed in the container; the template files are located in /usr/share/lxc/templates.
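
A minimal sketch of creating a container from a template (the container name, template name, and configuration file are assumptions; check /usr/share/lxc/templates for the templates actually installed):

    # Define a container named "web01" from a template; an alternate
    # configuration file can be supplied with -f
    lxc-create -n web01 -t sles -f /etc/lxc/web01.conf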

The lxc-start and lxc-stop commands are straightforward. Administrators can use lxc-console to connect to a container that was started in the background. The lxc-ls, lxc-info, and lxc-ps commands let administrators list containers, get container information, and view the processes running inside a container. There are many other lxc commands, but the last one to highlight is lxc-cgroup. This command controls the resource management aspects of a container, including setting CPU, memory, and block I/O limits.
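
Putting these commands together, a typical container lifecycle looks roughly like this (the container name and memory value are illustrative):

    lxc-start -n web01 -d                                 # start the container in the background
    lxc-console -n web01                                  # attach to its console
    lxc-ls                                                # list containers on the host
    lxc-info -n web01                                     # show the container's state
    lxc-ps --name web01                                   # list processes inside the container
    lxc-cgroup -n web01 memory.limit_in_bytes 268435456   # cap memory to 256 MB via cgroups
    lxc-stop -n web01                                     # stop the container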

Outlook
With SUSE Linux Enterprise Server 12, the "LXC" Linux container framework will be replaced by libvirt-lxc. This means that the same libraries and tools used to manage Xen and KVM (for example, virt-manager) will also be used to manage Linux containers.
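
As a sketch of what this looks like (assuming the libvirt LXC driver is installed; container definitions then live as libvirt domain XML rather than under /var/lib/lxc, and the container name is an assumption), the familiar libvirt tools simply address the lxc:/// URI:

    virsh -c lxc:/// list --all        # list containers managed by libvirt
    virsh -c lxc:/// start web01       # start a previously defined container
    virt-manager --connect lxc:///     # manage containers graphically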

There are also some additions on the security side. Seccomp2 will be part of SUSE Linux Enterprise Server 12. Using seccomp2, Linux containers can create a sandbox around a container to limit the types of system calls an application inside the container can make. In addition, SELinux and AppArmor are supported and can be used to secure a Linux container's access to the Linux container host server. SUSE Linux Enterprise Server 12 will ship with an AppArmor configuration file for this purpose.

SUSE Linux Enterprise Server 12 will also support application containers. An application container is created with libvirt-lxc and combined with seccomp2/SELinux/AppArmor to sandbox the application running inside it.

Also worth mentioning is the Docker project, a framework built on top of Linux containers. The Docker project website introduces it as "a lightweight framework (with a powerful API) that provides a lifecycle for building and deploying applications in containers." Docker provides an image repository and simplifies container usage. Docker will be a technology preview in SUSE Linux Enterprise Server 12, which means that Docker can be tested but is not recommended for production.
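
For comparison, basic Docker usage looks roughly like this (the image name is illustrative, and Docker remains a technology preview on SUSE Linux Enterprise Server 12):

    docker pull opensuse                              # fetch a base image from the registry
    docker run -it --name demo opensuse /bin/bash     # start an interactive container from it
    docker ps -a                                      # list containers and their state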

Summary
Linux containers provide another "virtualization" option, with both advantages and limitations. The benefits of Linux containers include:

    • Isolation of applications and operating systems through containers
    • No virtualization overhead compared to a full virtualization hypervisor
    • Near-native performance, because Linux containers can manage resource allocation in real time
    • Control of network interfaces and application of resource management inside containers through cgroups

Limitations of Linux containers include:

    • Containers run on the kernel of the host system and cannot use a different kernel
    • Containers allow only Linux "virtual machine" operating systems
    • Linux containers are not a complete virtualization stack, unlike Xen or KVM as included in SUSE Linux Enterprise Server
    • It is important to understand the security limitations of Linux containers in SUSE Linux Enterprise Server 11. If you need a fully secured system, use KVM or Xen with SUSE Linux Enterprise Server 11 SP3. SUSE Linux Enterprise Server 12 will add additional security features to Linux containers.

Here are a few other ideas for using Linux containers:

    • Give a user or developer root access inside a container without giving full root access to the "real" system
    • Constrain applications that tend to grab all the resources on a system, as databases typically do with memory or compute-intensive applications do with the CPU
    • Guarantee a specific amount of resources to a set of applications for specific customers (SLAs) without additional virtualization technology
    • Run DHCP/DNS, installation servers, and SMT (Subscription Management Tool) on low-end servers when lab hardware is hard to obtain

More information

    • The Quick Start guide for SUSE Linux Enterprise Server 11 virtualization with LXC
    • LXC home page
    • Kernel control groups (cgroups)
    • Managing virtual machines with libvirt