A quick understanding of Docker: a container-level virtualization solution
Simply put, Docker is a lightweight VM-style solution built on top of LXC, based on the idea of process containers.
By analogy with shipping in the real world: to solve the problem of transporting goods of all types and sizes on all kinds of vehicles, we invented the shipping container.
The purpose of Docker is to package an application together with the runtime environment it depends on as a standard container/image, and then publish it to run on different platforms.
In theory this concept is not new; the various virtual machine images play a similar role.
The biggest difference between a Docker container and an ordinary virtual machine image is that it does not contain an operating system kernel.
An ordinary virtual machine runs an entire operating system on a virtualized hardware platform to provide a complete runtime environment for the application, whereas Docker runs the application directly on the host platform. Under the hood, it starts a Linux container via LXC and isolates applications running in different containers through mechanisms such as cgroups, permission management, and quota allocation.
Each container has its own separate namespaces (i.e., resources), including:
PID (processes), MNT (file system), NET (network), IPC, UTS (host name), etc.
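For example, PID and UTS isolation can be observed directly from inside a container (a minimal sketch; the behavior described in the comments is what you would typically see, not output taken from this article):
docker run -i -t ubuntu /bin/bash
ps aux      # inside the container, only bash and ps are visible, not the host's processes (PID namespace)
hostname    # the container reports its own host name, not the host's (UTS namespace)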
What is the difference from LXC?
Basically, you can think of the current Docker as a nice wrapper around LXC: it provides a variety of helper tools and standard interfaces for using LXC. You could implement Docker-like features yourself with LXC and assorted scripts, just as you could install software packages by hand without apt/yum; the key reason to use such tools is that they make things easy.
In practice you usually do not need to care about the details of the underlying LXC, and Docker does not rule out a non-LXC implementation in the future.
On top of LXC, Docker provides additional features including: a standard, unified packaging, deployment, and operation scheme; history/version control; image reuse; and image sharing and publishing.
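Version history and publishing correspond directly to CLI commands (a minimal sketch; the repository name myrepo/apache is a placeholder):
docker history myrepo/apache   # show the layers the image was built from
docker push myrepo/apache      # publish the image to a registry so others can pull it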
Container build scheme
Besides LXC, the core idea of Docker is embodied in the way it builds the container's runtime environment.
To maximize image reuse, speed up operations, and reduce memory and disk footprint, the runtime environment a Docker container sees is actually composed of multiple layers with dependencies among them. For example, an Apache runtime environment may be based on an underlying rootfs image, overlaid with an image containing tools such as Emacs, overlaid again with an image containing Apache and its library dependencies. These layers are merged into a unified path by an AUFS union mount and exist read-only; finally, a writable empty layer is loaded on top to record changes made to the current runtime environment.
With layered images as the basis, ideally different apps can share the underlying file system and common tools, and different instances of the same app can share the vast majority of their data, using copy-on-write to maintain their own copies of any modified data.
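Conceptually, this union mount resembles an AUFS mount along the following lines (purely illustrative; the branch paths are made up and are not Docker's actual on-disk layout):
mount -t aufs -o br=/layers/top-rw:/layers/apache=ro:/layers/emacs=ro:/layers/rootfs=ro none /mnt/container-root
# the first branch (top-rw) is writable and records all changes (copy-on-write);
# the read-only branches below it are the shared, reusable image layers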
History and ecosystem
The Docker project has existed for only a little over a year, but it has developed very rapidly:
2013.01 started as an internal project at dotCloud
2013.03.27 officially released as a public project
2014.01 selected by Black Duck as one of the "Top Open Source Rookies of the Year"
Current status (2014.03):
Docker 0.8.1
10000+ GitHub stars (Top 50)
350+ contributors
1500+ forks
As a concrete application, Baidu has successfully used Docker to support the PaaS services of its BAE platform since at least April of last year.
Installation and usage
Although Docker claims "build once, run anywhere", it is still constrained by the dependencies of its engine; the current version has the following system requirements:
- Linux kernel 3.8+
- LXC support
- 64-bit OS
- AUFS
Taking Ubuntu as an example, these requirements mean 12.04 with the kernel upgraded to 3.8, or Ubuntu 13.04 and later.
On Ubuntu 12.04, the basic installation steps are as follows:
sudo apt-get update
sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
sudo apt-get update
sudo apt-get install lxc-docker
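After installation, a quick sanity check confirms the daemon works (a minimal sketch; assumes network access to pull the ubuntu image):
sudo docker version
sudo docker run -i -t ubuntu /bin/echo hello world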
If you want to try Docker's basic commands before installing it, you can use the online live tutorial at https://www.docker.io/gettingstarted/#h_tutorial
Common commands
A rough classification of common CLI commands:
- search / pull / push / login, etc. (interacting with the image registry)
  Example: docker pull ubuntu downloads the ubuntu image from the repository
- images / rmi / build / export / import / save / load, etc. (managing local images)
  Example: docker images -t lists the current local images in a tree structure
- run / start / stop / restart / attach / kill, etc. (container lifecycle)
  Example: docker run -i -t ubuntu /bin/bash launches the ubuntu image and runs a shell interactively
- diff / commit (inspecting and committing container changes)
- info / ps / inspect / port / logs / top / history, etc. (status and information queries)
For detailed usage of specific Docker commands, see http://docs.docker.io/en/latest/reference/commandline/
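As a minimal end-to-end sketch of how these commands fit together (the container ID and repository name below are placeholders):
docker pull ubuntu                           # fetch the base image from the registry
docker run -i -t ubuntu /bin/bash            # work interactively inside a container, then exit
docker ps -a                                 # find the ID of the now-stopped container
docker diff <container-id>                   # list files changed relative to the base image
docker commit <container-id> myrepo/myapp    # save those changes as a new local image
docker images                                # the new image now appears in the local list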
Problems
The current version of Docker communicates over a Unix socket and therefore requires root privileges (sudo xxx); alternatively, the user who runs the docker client can be added to the docker group:
sudo gpasswd -a ${USER} docker
If you need to go through a proxy behind a gateway to reach Docker's index server, you can manually set the http_proxy environment variable when starting the Docker daemon:
http_proxy=http://proxy_server:port docker -d &
A better practice is to modify /etc/default/docker (on Ubuntu) and add export http_proxy=http://proxy_server:port
Similarly, if the Docker container does not automatically pick up the correct DNS configuration from the host environment, you need to specify the DNS server address manually, either via docker run -dns=xxx, or by modifying /etc/default/docker and adding, for example, DOCKER_OPTS="-dns 8.8.8.8"
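Putting the last two points together, a hypothetical /etc/default/docker on Ubuntu might look like this (the values are placeholders):
# /etc/default/docker
export http_proxy="http://proxy_server:port"   # lets the daemon reach the index through the proxy
DOCKER_OPTS="-dns 8.8.8.8"                     # extra options passed to the daemon at startup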
Inside a container you do not, by default, have permission to operate on devices. In addition, in the current version some files inside the container, such as /etc/hosts, /etc/hostname, and /etc/resolv.conf, exist as mounts from the outside (for example, --dns sets resolv.conf), so they cannot be modified directly from within. If extended privileges are needed, the container can be started from the docker client with docker run --privileged.
- Excessive layer dependencies
Cheap reuse and fast updates of apps and their libraries in the form of layers are key to Docker. However, due to limitations of the current AUFS file system, the layer hierarchy can by default only reach 127 levels (and it was even lower at one point). In actual use, various situations can make your container's layer count grow quickly toward this limit; leaving aside the efficiency of AUFS with so many stacked layers, in many cases you simply cannot update your image anymore. Typical causes:
- When you use a Dockerfile to build an image, each instruction adds a layer to the final image.
- You adjust and update your image by repeatedly modifying, committing, modifying again, and committing again.
- An image downloaded from the repository already contains a number of layers, and you add more when you update it to create your own version.
The first two cases are, to a certain extent, under your control; the last one is not. This problem will ultimately affect the practical usability of Docker. Current workarounds include:
- When using a Dockerfile, merge as many actions as possible: for example, combine multiple shell commands with "&&" or ";", or write multiple shell commands into a script, then ADD the script in the Dockerfile and run it (see the sketch after this list).
- Export and then import the image, discarding all history and dependencies to create a brand-new image.
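A rough sketch of both workarounds (the image and container names are placeholders; the packages are arbitrary examples):
# Dockerfile: one merged RUN instruction adds a single layer instead of three
FROM ubuntu
RUN apt-get update && apt-get install -y apache2 emacs && apt-get clean
# flatten an existing container into a brand-new single-layer image, discarding history
docker export <container-id> > flat.tar
cat flat.tar | docker import - myrepo/myapp-flat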
Possible future solutions include:
- Support in Dockerfile for merging multi-step operations into a single commit
- External image-flattening tools, ideally able to retain history information
- Other storage back-ends not based on AUFS
Future development
Although Docker currently uses LXC and AUFS by default, its core ideas are not tied to either. Version 0.8 already supports Btrfs, and the whole Docker framework has been turned into a plug-in architecture so that individual function modules can easily be added or replaced.
For example: support for more storage back-ends to work around the current problems of AUFS, and more virtualization solutions beyond LXC.