Docker is an open-source application engine created by the PaaS provider dotCloud in early 2013. Docker can automatically package any application into a lightweight, portable, self-contained container. An application built by a developer once can then run everywhere: on local development machines, in production environments, on virtual machines, and in the cloud.
Docker is written in the Go language, and its code is hosted on GitHub under the Apache 2.0 open-source license. The project has recently attracted more and more users: it has more than 9,000 stars on GitHub, Google Compute Engine supports Docker, and in China the well-known Baidu uses Docker as the basis of its PaaS.
From the official Docker blog:
Docker containers can encapsulate any payload and run consistently on virtually any server.
Common use cases of Docker include:
- Automatically package and deploy applications
- Create lightweight, private PaaS environments
- Automate testing and continuous integration/deployment
- Deploy and scale web applications, databases, and backend services
Background
Fifteen years ago, almost all applications were written on well-defined stacks and deployed as monolithic applications on a single server. Today, developers build and assemble applications from the best available off-the-shelf services, and they must be ready to deploy across different hardware environments, including public, private, and virtualized servers.
Figure 1: The evolution of IT
This situation creates challenges such as:
- Adverse interactions between different services, and "dependency hell"
- The difficulty of migrating quickly and of managing a deployment matrix that spans many services and many hardware types
Figure 2: The challenges of multiple stacks and multiple hardware environments
We can see that every time an application is written or rewritten, there is a large number of combinations and permutations of applications/services and hardware environments that must be accounted for. This creates difficulties both for the developers who write applications and for the people who try to provide a stable, secure, high-performance operating environment.
Figure 3: Dynamic stacks and dynamic hardware environments create an N x N matrix
So how can this be solved? Let us take an example from the transportation industry. Before 1960, most bulk cargo was shipped loose, and shippers and carriers had to worry about adverse interactions between different types of cargo (for example, a batch of iron pressing down on a bag of bananas). Switching between different modes of transportation was equally painful: much of the time was spent at the port unloading goods, loading them again, and waiting for the same batch of goods to be loaded together onto trains, trucks, and other means of transport. In this way, an N x N matrix arises between the many different kinds of goods and the many different transport mechanisms.
Figure 4: Shipping before 1960
Fortunately, the standard shipping container solved these problems. Any goods, from pistachios to Porsches, can be packed into a standard container, which the shipper or carrier seals and which is not opened again until it reaches its destination. In transit, containers can be loaded, unloaded, stacked, and transferred, and they travel efficiently over long distances. The container revolutionized global transportation: one standard lets cargo flow freely between trains, trucks, and ships. Today, 18 million containers carry 90% of world trade.
Figure 5: The standard shipping container solved the transportation problem
To some extent, Docker can be regarded as a shipping container system for code.
Figure 6: The software "shipping" solution is also a standard container system
Docker packages any application and its dependencies into a lightweight, portable, self-contained container, and exposes standard operations that make automation easy. Such a container can run on any Linux server. With the same container, developers can build and test on their laptops, and operators can run at scale on virtual machines, bare-metal servers, OpenStack clusters, public cloud instances, or any combination of the above.
In other words, developers can build an application once and run it on many platforms; operators only need to configure their servers once to be able to run any application.
Main features of Docker
| Feature | Physical container | Docker |
| --- | --- | --- |
| Content agnostic | The same container can hold almost any type of cargo | Can encapsulate any server payload and its dependencies |
| Hardware agnostic | The same standard container can be moved from ship to train to truck, all the way to the warehouse, without being sorted or opened | Using operating-system primitives (such as LXC), it runs on virtually any platform without modification: virtual machines, bare-metal servers, OpenStack, public IaaS, and so on |
| Content isolation and interaction | Containers can be stacked for transport without worrying about the iron crushing the bananas | Isolates resources, networks, and content to avoid "dependency hell" |
| Automation | Standard interfaces make automated loading, unloading, and handling easy | Standard operations for running, starting, stopping, committing, and searching make it ideal for DevOps: CI, CD, autoscaling, hybrid cloud |
| Efficiency | No need to open or modify the container; cargo moves quickly between any two points | Lightweight, with almost no performance or startup penalty; quick to move and manipulate |
| Separation of duties | The shipper cares about what is inside the box; the carrier cares about what is outside | Developers care about the code; operators care about the infrastructure |
More technical features:
- File system isolation: each process container runs in a completely independent root file system.
- Resource isolation: system resources such as CPU and memory are allocated to each container using cgroups.
- Network isolation: each process container runs in its own network namespace, with its own virtual interface and IP address.
- Copy-on-write: root file systems are created using copy-on-write, which makes deployment extremely fast and uses very little memory and disk space.
- Logging: Docker collects and records the standard streams (stdout/stderr/stdin) of each process container for real-time or batch retrieval.
- Change management: changes to a container's file system can be committed to a new image and reused to create more containers, with no templates or manual configuration required.
- Interactive shell: Docker can allocate a pseudo-tty and attach it to the standard input of any container, for example to run a throwaway interactive shell (see the sketch after this list).
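As a minimal sketch of the "interactive shell" and "change management" features above (the image name and the package installed are only illustrative):

```bash
# Start a throwaway interactive shell in a container
docker run -i -t ubuntu /bin/bash

# Inside the container, make a change, e.g.:
#   apt-get update && apt-get install -y redis-server
#   exit

# Back on the host: find the container and inspect its filesystem changes
docker ps -a                  # list containers, including stopped ones
docker diff <container_id>    # show files added/changed/removed

# Commit the changes as a new image that can be reused to create more containers
docker commit <container_id> myuser/redis-base
```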
What are the basic functions of Docker?
Docker makes both development and operations easier. The figure below gives a good overview of Docker's basic functions: configure networking and storage, download and install the application, set its parameters, then package the result into an image and upload it. Containers can be built manually or automatically; if the source repository contains a Dockerfile, the image can be built automatically. A container contains not only the application itself but also all of its dependencies.
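As a sketch of that packaging workflow, assuming a simple Python application (the file names, base image, and commands below are only illustrative), a Dockerfile-based build might look like this:

```bash
# Write a hypothetical Dockerfile describing the application and its dependencies
cat > Dockerfile <<'EOF'
FROM ubuntu
RUN apt-get update && apt-get install -y python python-pip
ADD . /app
RUN pip install -r /app/requirements.txt
CMD ["python", "/app/app.py"]
EOF

# Build an image from the Dockerfile and tag it
docker build -t myuser/myapp .
```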
Developers can use the docker search command to find images in a Docker registry (public or private), pull them with docker pull, and use docker run to start, stop, and otherwise operate containers. It is worth noting that the target of the run command may be your own server, a public cloud instance, or a combination of the two.
Figure 7: The basic functions of Docker
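A minimal sketch of that search / pull / run cycle (the image name myuser/myapp and the port mapping are assumptions for illustration):

```bash
docker search redis                        # search the registry for images
docker pull myuser/myapp                   # download an image from the registry
docker run -d -p 8080:8080 myuser/myapp    # start a container in the background
docker ps                                  # list running containers
docker stop <container_id>                 # stop the container again
```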
For a complete list of Docker functions, visit: http://docs.docker.io/en/latest/commandline/
Docker has three running modes: as a daemon, it manages LXC containers on a Linux host; as a CLI, it talks to the daemon's REST API (docker run ...); and as a repository client, it shares what you have built (docker pull, docker commit).
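Sketched with the CLI (note that the exact way to start the daemon depends on the Docker version and installation; `docker -d` was the form used around the time this article was written, while current systems use `dockerd` or a system service):

```bash
# 1. As a daemon: manage containers on a Linux host
sudo docker -d &

# 2. As a CLI: talk to the daemon's REST API
docker run -i -t ubuntu /bin/bash

# 3. As a repository client: share what you have built
docker pull ubuntu
docker commit <container_id> myuser/mysnapshot
```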
How do containers work, and how do they differ from VMs?
A container typically holds an application and its dependencies. It exists to isolate processes, which run in an isolated user space on the host operating system.
This is quite different from traditional VMs. Traditional hardware virtualization (such as VMware, KVM, Xen, and EC2) aims to create a complete virtual machine: each virtualized application carries not only its own binaries, but also the libraries it needs and a complete guest operating system.
Figure 8: Containers vs. traditional VMs
Because all containers share the same operating system (as well as binaries and libraries, where appropriate), they are much smaller than VMs, so a single physical host can run hundreds of containers (whereas the number of VMs it can run is strictly limited). And because containers use the host operating system, starting a container does not mean booting an operating system. Containers are therefore lighter and more efficient.
Docker containers are even more efficient than that: with a traditional VM, each application, each copy of an application, and even each minor change to an application requires building a new, complete VM.
As shown in the figure, deploying a new application on the host requires only the application and its binaries/libraries; there is no need to create a new guest operating system.
If you want to run several copies of the same application on the host, you do not even need to copy the shared binaries.
Finally, if you modify the application, you only need to copy the differences.
Figure 9: The mechanisms that make Docker containers lighter
This not only makes storage and container operations more efficient, it also makes updating applications extremely simple. As shown in the figure, updating a container only requires shipping the differences.
Figure 10: Modifying and updating a container
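As an illustration of this layered mechanism (image and tag names are assumptions), the following commands show how only the differences are stored and shipped:

```bash
docker history myuser/myapp        # show the layers that make up an image

# Modify a running container, then commit only the delta as a new layer
docker diff <container_id>         # list the files that changed
docker commit <container_id> myuser/myapp:v2

docker push myuser/myapp           # only layers the registry lacks are uploaded
```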
Below are some cool Docker use cases:
| Use case | Description | Link |
| --- | --- | --- |
| Build your own PaaS | Dokku -- a mini-Heroku built on Docker; the smallest PaaS implementation you have ever seen | http://bit.ly/191Tgsx |
| Web-based environments for instruction | JiffyLab -- web-based environments for instruction, providing IPython and a Unix shell | http://bit.ly/12oaj2K |
| Easy application deployment | Deploy Java applications with Docker; run Drupal on Docker; install Redis on Docker | http://bit.ly/11BCvvu, http://bit.ly/15MJS6B, http://bit.ly/16EWOKh |
| Create a secure sandbox | Docker makes it easier to create secure sandboxes | http://bit.ly/13mZGJH |
| Create your own SaaS | Memcached as a service | http://bit.ly/11nL8vh |
| Automated application deployment | Push-button deployment with Docker | http://bit.ly/1bTKZTo |
| Continuous integration and deployment | Next-generation continuous integration and deployment with dotCloud's Docker and Strider | http://bit.ly/ZwTfoy |
| Lightweight desktop virtualization | Docker desktop: run a desktop environment inside a Docker container, accessed over SSH | http://bit.ly/14RYL6x |
Related information:
- The official Docker website
- The Docker code repository
- Getting started with Docker
Recommended reading:
Russian search giant Yandex releases Cocaine, an open-source Docker-based PaaS
Beyond Google, he attempted to plug the entire Internet into a computer
PaaS chaos: new opportunities for containers
Docker: implementing container-style "shipping" for software