Docker: Lightweight Linux Containers for Consistent Development and Deployment (repost)


Transferred from: http://www.oschina.net/translate/docker-lightweight-linux-containers-consistent-development-and-deployment

English original: Docker: Lightweight Linux Containers for Consistent Development and Deployment

Use Docker containers, lightweight and flexible VM-like environments, to tame "dependency hell". Learn how Docker builds on LXC technology to make applications portable and self-contained by wrapping them in containers.

Imagine being able to package an application together with its dependencies and then run it smoothly in other development, testing, and production environments. That is the goal of the open source Docker project. Although it is not yet officially recommended for production use, the latest release (0.7.x at the time of writing) brings Docker a step closer to that goal.


Docker containers try to solve the "dependency hell" problem. Modern applications are typically assembled from existing components and rely on other services and applications. For example, your Python application might use PostgreSQL as a data store, Redis for caching, and Apache as a Web server. Each of these components comes with its own set of dependencies, which may conflict with those of other components. By packaging each component with its dependencies, Docker containers address the following issues:

  • Conflicting dependencies: Need one Web site running on PHP 4.3 and another on PHP 5.5? No problem if you run each version of PHP in its own Docker container.

  • Missing dependencies: Installing an application in a new environment takes only moments with Docker, because all of its dependencies are packaged with it in a single container.

  • Platform differences: Moving from one distribution to another is no longer a hassle. If both systems run Docker, the same container runs without any problems.


Docker Containers: A Little Background

Docker started life in early 2013 as an open source project at dotCloud, a platform-as-a-service company. It was a natural extension of the technology the company had developed to run its cloud business on thousands of servers. Docker is written in Go, a statically typed programming language developed by Google and loosely based on C. Within six to nine months of rapid growth, the company hired a new CEO, joined the Linux Foundation, renamed itself Docker, and announced that it was shifting its focus to developing Docker containers and their ecosystem. As a further sign of Docker's popularity, at the time of this writing it has been starred 8,985 times and forked 1,304 times on GitHub. Figure 1 shows Docker's steadily rising popularity in Google searches. With the anticipated release of Docker's first production-ready version and a growing community aware of its usefulness, the curve for the past 12 months is expected to be dwarfed by that of the next 12 months.

Figure 1. Google search trend for Docker over the past 12 months


Under the Hood

Docker harnesses some powerful kernel-level technology and puts it at our fingertips. The concept of container virtualization has been around for years, but by providing a simple toolset and a unified API for managing kernel-level technologies such as LXC (Linux Containers), cgroups, and a copy-on-write file system, Docker has created a tool that is greater than the sum of its parts. It is a potential game changer for DevOps, system administrators, and developers.

Docker provides tools to make creating and working with containers as easy as possible, and containers are sandboxed from one another. For the moment, you can think of a container as a lightweight virtual machine.


LXC (Linux Containers), a user-space control package for Linux containers, forms the core of Docker. LXC uses kernel-level namespaces to isolate containers from the host and from each other. The user namespace separates the container's user database from the host's, ensuring that the container's root user does not have root privileges on the host. The process namespace is responsible for displaying and managing only the processes running in the container, not those on the host. And the network namespace provides the container with its own network device and virtual IP address.

Another component provided by LXC is control groups (cgroups). While namespaces are responsible for isolation between the host and the container, control groups implement resource accounting and limiting. Besides allowing Docker to limit the resources a container consumes, such as memory, disk space, and I/O, cgroups also report a large number of related metrics. These metrics allow Docker to monitor the resource consumption of each process within a container and make sure each one gets only its fair share of the available resources.
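As a rough illustration, the Docker client exposes these cgroup limits as flags on docker run. This is a minimal sketch only; the exact flag names and value formats have varied across Docker versions:

$ docker run -d -m 536870912 -c 512 brice/mysql
# -m caps the container's memory (here 512 MB, expressed in bytes)
# -c assigns relative CPU shares, enforced through cgroups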


In addition to the components above, Docker has been using AUFS (Advanced Multi-layered Unification File System) as the file system for containers. AUFS is a layered file system that can transparently overlay one or more existing file systems. When a process needs to modify a file, AUFS creates a copy of that file. AUFS can merge multiple layers into a single representation of a file system. This process is called copy-on-write.

The really cool thing is that AUFS allows Docker to use certain images as the basis for containers. For example, you might have a CentOS system image that serves as the basis for many different containers. Thanks to AUFS, only one copy of the CentOS image is required, which saves both storage and memory and also ensures faster deployment of containers.


Another benefit of using AUFS is Docker's ability to version container images. Each new version is simply a diff against the previous version, which keeps image files to a minimum. It also means you always have an audit trail recording how a container image has changed from one version to the next.
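A quick way to see this layering in practice is the docker history subcommand; a minimal sketch, with output columns and formatting varying by Docker version:

$ docker history brice/mysql
# lists each layer of the image (ID, the command that created it, and its size),
# newest first, so you can trace how the image evolved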

Traditionally, Docker has depended on AUFS for its copy-on-write storage mechanism. However, the recently added storage driver API is likely to lessen that dependency. Initially, three storage drivers are available: AUFS, VFS, and device mapper, the last a result of collaboration with Red Hat.

As of version 0.7, Docker works with all major Linux distributions. However, it does not run natively on most non-Linux systems, such as Windows and OS X. The recommended way to use Docker on those operating systems is to provision a virtual machine on VirtualBox using Vagrant.
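A minimal sketch of that approach, assuming Vagrant and VirtualBox are already installed and using the precise64 box referenced later in this article (box URL as historically published by the Vagrant project):

$ vagrant init precise64 http://files.vagrantup.com/precise64.box
$ vagrant up      # boot the Ubuntu VM in VirtualBox
$ vagrant ssh     # log in to the VM, then install and run Docker inside it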


Containers vs. Other Types of Virtualization

What exactly is a container, and how does it differ from hypervisor-based virtualization? Simply put, containers virtualize at the operating-system level, whereas hypervisor-based solutions virtualize at the hardware level. The effects are similar, but the differences are important, which is why I spend some time exploring them and the trade-offs they involve.

Virtualization

Both containers and virtual machines (VMs) are virtualization tools. With a VM, a hypervisor makes virtualized hardware available to each guest. Generally, there are two types of hypervisor: type 1 runs directly on bare-metal hardware, while type 2 runs as an additional software layer on top of a host operating system. The open source Xen and VMware ESX are examples of type 1; examples of type 2 include Oracle's open source VirtualBox and VMware Server. Although type 1 is a better candidate for comparison with Docker containers, I don't distinguish between the two types in the rest of this article.

In contrast, containers provide protected portions of the operating system; they effectively virtualize the operating system itself. Two containers running on the same operating system don't know that they are sharing resources, because each has its own abstracted networking layer, processes, and so on.


Operating Systems and Resources

Because hypervisor-based virtualization provides access only to hardware, you still need to install an operating system. The result is multiple full operating systems, one per virtual machine, which quickly consumes server resources such as RAM, CPU, and bandwidth.

Containers piggyback on the running operating system, treating it as their host environment. They merely run in isolated spaces within the host operating system, and each container's space is independent of the others. This has two very significant advantages. The first is more efficient use of resources: if a container is doing nothing, it is not tying up resources, and a container can call on its host operating system to provide some or all of the functionality it needs. The second advantage is that containers are cheap, so you can create and destroy them quickly. A container does not need to boot or shut down an entire operating system; it only has to terminate the processes running in its isolated space. Starting and stopping a container is therefore more like starting and quitting an application, and just as fast.
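To get a feel for how fast that is, you can time a trivial one-off container; a small sketch, assuming the public busybox image is available from the registry:

$ time docker run busybox echo "hello from a container"
# the container starts, runs the single echo process, and exits,
# typically in well under a second once the image is cached locally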


Figure 2 illustrates the two types of hypervisor-based virtualization as well as containers.

Figure 2. Virtual Machines and Containers

Isolation: Performance and Security

Processes executing in a Docker container are isolated from processes running on the host operating system or in other Docker containers. Nevertheless, all of these processes run in the same kernel. Docker uses LXC to provide a separate namespace for each container, a technology that has been present in the kernel for more than five years and is considered fairly mature. Containers also use cgroups, which have been in the Linux kernel even longer than LXC, for resource auditing and limiting.


The Docker daemon itself is also a potential attack vector, because it currently runs with root privileges. Planned improvements to LXC and Docker should eventually allow containers to run without root privileges and allow the Docker daemon to run as a dedicated non-root user.

Although the type of isolation provided by containers is on the whole quite strong, whether it is as strong as that of virtual machines running on hypervisors is still debated. If the kernel goes down, so do all the containers. Virtual machines have the advantage of being very mature and widely used in production environments. By contrast, Docker and its supporting technologies have not yet seen nearly as much production use. Docker in particular undergoes changes every day, and we all know that change is the enemy of security.


Docker and Virtual Machines: Frenemies

So far we have been comparing Docker and virtual machines; now it is time to acknowledge that the two technologies actually complement each other. Docker runs perfectly well inside a virtualized environment, and there is no need to dedicate an entire virtual host to every application or component. Given a Linux virtual machine, you can easily deploy Docker containers inside it. That is why it should come as no surprise that the officially recommended way to install Docker on non-Linux systems such as OS X and Windows is inside an Ubuntu-based precise64 virtual machine provisioned with Vagrant's help. The http://www.docker.io site has detailed, simple instructions.

Virtualization and containers behave very similarly in some respects. At first, a container feels like a very lightweight virtual machine. However, as you get to know containers better, your understanding becomes more nuanced. Docker maximizes the container's strengths in packaging and deploying lightweight applications, which is where containers excel.


Docker Repositories

One of Docker's killer features is the ability to quickly find, download, and launch container images created by other developers. The place where images are stored is called a registry, and Docker Inc. provides a public registry, also known as the Index. You can think of this registry together with the Docker client as the equivalent of Node's npm, Perl's CPAN, or Ruby's RubyGems.

In addition to various base images that you can use to build your own Docker containers, the public Docker registry provides images of ready-to-run software, including databases, content management systems, development environments, Web servers, and so on. By default, the Docker command-line client searches the public registry, but it is also possible to maintain a private registry. A private registry is a good choice if you want to distribute images of components that contain proprietary intellectual property or that are only used within your company. Uploading an image to the registry is as easy as downloading one; it requires nothing more than creating an account, and it is all free. Finally, Docker Inc.'s registry also has a Web interface that makes it easy to search for, read about, comment on, and recommend (that is, "star") images. It is ridiculously easy to use, and I encourage you to start by browsing through it via the links in the Resources section of this article.
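A small sketch of the upload side mentioned above, with a hypothetical account name "yourname" and image "webapp" standing in for real ones:

$ docker login                        # create or log in to your (free) registry account
$ docker tag webapp yourname/webapp   # name the image under your account (tag syntax varies slightly by version)
$ docker push yourname/webapp         # upload it to the public registry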


Hands-On with Docker

Docker consists of a single binary that can be run in three different ways. The first way is as a daemon that manages the containers. The daemon exposes a RESTful API that can be accessed locally or remotely, and a growing number of client libraries can talk to this API, including libraries for Ruby, Python, JavaScript (Angular and Node), Erlang, Go, and PHP.
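As an illustration, if the daemon has been started listening on a TCP port (here the historical default 4243, via something like -H tcp://127.0.0.1:4243; treat the flag and port as version-dependent assumptions), the API can be exercised directly with curl:

$ curl http://127.0.0.1:4243/containers/json   # list running containers (the equivalent of docker ps)
$ curl http://127.0.0.1:4243/images/json       # list local images (the equivalent of docker images)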

Client libraries are mostly used to access the daemon programmatically, but instructions are more commonly submitted through the command line. This is the second way to run the Docker binary: as a command-line client that talks to the RESTful daemon.

The third way the Docker binary runs is as a client that accesses remote repositories of images. A named set of images that together make up a container's file system is called a repository. Users can download images provided by others and can upload their own images to a registry to share them. Registries collect, list, and organize repositories.


Let's look at all three ways of running Docker in action. In the following example, you search the Docker repositories for a MySQL image. Once you find an image you like, you download it and tell the Docker daemon to run it as a container. All of this is done from the command line.

Figure 3. Download the Docker image and start the container

Start by running the docker search mysql command, which lists images in the public Docker registry that match the keyword "mysql". Having picked an image that looks suitable, use docker pull brice/mysql to download the "brice/mysql" image. You can see that Docker downloads not only the image you specified but also the images it is layered on. The docker images command lists all images currently available locally, including "brice/mysql". Launching the container with the -d option runs it detached, in the background, and with that you have MySQL running in a container. You can verify it with the docker ps command, which lists running containers rather than images. In the output, you can also see the port the MySQL service listens on, which is 3306 by default.
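Condensed into a transcript, the steps described above look roughly like this (image name as used in this example; output omitted):

$ docker search mysql          # list matching images in the public registry
$ docker pull brice/mysql      # download the image and the layers it builds on
$ docker images                # list images available locally
$ docker run -d brice/mysql    # start a container in the background; prints the container ID
$ docker ps                    # confirm the container is running and see its ports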


But how do you connect to MySQL, given that it is running inside a container? Remember that each Docker container gets its own network interface, so you need to find the IP address and port on which the mysqld server process is listening. Running docker inspect <containerId> provides a lot of information; since all you need is the IP address, you can filter for it with grep when inspecting the container by its hash, for example docker inspect 5a9005441bb5 | grep IPAddress. Now you can connect with the standard MySQL CLI client by specifying the host address and port options. When you are done with the MySQL server, you can shut down the container with docker stop 5a9005441bb5.
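As a sketch, with a hypothetical IP address standing in for the real value returned by inspect:

$ docker inspect 5a9005441bb5 | grep IPAddress
        "IPAddress": "172.17.0.2",
$ mysql -h 172.17.0.2 -P 3306 -u root    # connect with the standard MySQL client
$ docker stop 5a9005441bb5               # shut the container down when finished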

We used seven commands to find, download, and launch a Docker container running a MySQL server, and to shut it down when finished. Along the way, you never had to worry about conflicts with installed software, different MySQL versions, or package dependencies. The seven Docker commands were search, pull, images, run, ps, inspect, and stop, but the Docker client actually offers 33 commands in total. You can see the full list by running docker help or by consulting the online manual.


Before working through the example above, I mentioned that the client communicates with the daemon and with the Docker registry via REST-based Web services. This implies that you can use a local Docker client to talk to a remote daemon, effectively managing containers on a remote server. The APIs for the Docker daemon, registry, and index are well documented and come with examples (see the Resources section).
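A sketch of that remote management, assuming the remote daemon has been configured to listen on a TCP socket (host name and port are placeholders):

$ docker -H tcp://build-server.example.com:4243 ps                 # list containers on the remote host
$ docker -H tcp://build-server.example.com:4243 pull brice/mysql   # pull an image on the remote host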

Docker Workflow

There are several ways to introduce Docker into your development and deployment process. Let's look at the sample workflow shown in Figure 4. A developer at our hypothetical company might be running Ubuntu with Docker installed. He might download images from the public registry or push images to it, build images that layer his own code or proprietary software on top of a base image, and push the resulting images to the company's private registry.


In this example, the company's quality-assurance environment runs CentOS and Docker. It pulls images from the public or private registry and launches various containers whenever the environment is updated.

Finally, to make it easy to scale up and down, the company hosts its production environment in the cloud, on Amazon Web Services (AWS). There, Amazon Linux also runs Docker, which manages the various containers.

Note that all three environments run different versions of Linux, yet all of them are compatible with Docker, and each runs a different mix of containers. Because each container bundles its own dependencies, there are no conflicts, and all of the containers coexist peacefully.

Figure 4. Example software development workflow using Docker


It is important to understand that Docker promotes an application-centric container model. In other words, a container should run a single application or service rather than a whole stack of them. We have already seen that containers are fast and cheap to create and run. Following the single-responsibility principle, with each container running one main process, also keeps system components loosely coupled. With that in mind, let's create our own container image to launch.

Create a new Docker image

In the previous example, you interacted with Docker from the command line. However, when creating images, it is far more common to write a "Dockerfile" to automate the build process. A Dockerfile is a simple text file that describes the build steps. You can put a Dockerfile under version control and recreate the corresponding image exactly at any time.


For the next example, let's look at the Dockerfile for an image called php_box (see Code Listing 1).

Code Listing 1. PHP Box

# PHP Box
#
# VERSION 1.0

# use centos base image
FROM centos:6.4

# specify the maintainer
MAINTAINER Dirk Merkel, [email protected]

# update available repos
RUN wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm; rpm -Uvh epel-release-6-8.noarch.rpm

# install some dependencies
RUN yum install -y curl git wget unzip

# install Apache httpd and dependencies
RUN yum install -y httpd

# install PHP and dependencies
RUN yum install -y php php-mysql

# general yum cleanup
RUN yum install -y yum-utils
RUN package-cleanup --dupes; package-cleanup --cleandupes; yum clean -y all

# expose apache port
EXPOSE 80

# the command to run
CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]

Let's take a closer look at what this Dockerfile does. Dockerfile syntax is a command keyword followed by that command's arguments, with command keywords conventionally written in uppercase. Comments start with #.

The FROM command determines the base image to build on; it must be the first instruction in the Dockerfile. In this case, we build on top of the CentOS base image. The MAINTAINER command simply records who maintains the Dockerfile. The RUN command executes a command and commits the resulting state, thereby creating a new intermediate image. The RUN commands in this Dockerfile fetch the configuration for an additional package repository and then use yum to install curl, git, wget, unzip, httpd, php-mysql, and yum-utils. Those yum install commands could be combined into a single RUN command to avoid a series of successive commits, as shown below.
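A sketch of that combined form (not part of the original listing):

# one RUN command, hence a single commit, instead of several
RUN yum install -y curl git wget unzip httpd php php-mysql yum-utils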


Next, the EXPOSE command makes port 80 available externally; that is the port Apache listens on when the container is launched.

The final command, CMD, gives the default command to run when the container starts. A container is meant to run a single main process, so you can think of the container as embodying that command.

Running docker build -t php_box . on the command line tells Docker to build an image from the Dockerfile in the current directory. The resulting image is tagged "php_box", which makes it easy to identify and reference later.

The build process downloads the base image and then installs Apache httpd and all of its dependencies. When it completes, it returns a hash identifying the newly created image, similar to the one you used when starting the MySQL container earlier. You can then launch an Apache-plus-PHP container from the php_box tag with: docker run -d -t php_box
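To reach Apache from the host, you would also publish the exposed port; a brief sketch, assuming the -p host:container port-mapping syntax of the Docker version in use:

$ docker run -d -t -p 8080:80 php_box   # map host port 8080 to the container's port 80
$ curl http://localhost:8080/           # Apache inside the container should answer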


We will end this article with a very short example of how to create a new image simply based on an existing image:

# myapp
#
# VERSION 1.0

# use php_box base image
FROM php_box

# specify the maintainer
MAINTAINER Dirk Merkel, [email protected]

# put my local web site in the myapp folder into /var/www
ADD myapp /var/www

This second Dockerfile is much shorter than the first and really contains only two instructions that do any work. The FROM command bases the new image on the php_box image we built earlier. The ADD command then copies a local directory into the image; in this example, a PHP project is copied into Apache's DOCUMENT_ROOT folder inside the image. The result is that the site is served by default whenever the image is launched.


Summary

The arrival of Docker, a tool for packaging and deploying lightweight applications together with their dependencies, is exciting, and the Linux community has been quick to adopt it and to experiment with it in production environments. For example, Red Hat announced in December that it will support Docker in the upcoming Red Hat Enterprise Linux 7. However, Docker is still a young project that is evolving rapidly. The 1.0 release will be an exciting milestone, as it will be the first version officially sanctioned for production use. Docker builds on existing technologies, some of which have been around for more than ten years, but that does not make it any less innovative. I hope this article has told you enough about Docker to encourage you to download it and try it out for yourself.


Recent Docker Progress

As this article was being published, the Docker team released version 0.8. The new release adds support for Mac OS X in the form of two components: the Docker command-line client runs natively on OS X, while the Docker daemon runs inside a lightweight VirtualBox virtual machine managed through the boot2docker tool. This is an unavoidable arrangement, because the underlying technologies, such as LXC and namespaces, are not supported on OS X. I expect similar solutions to appear for other platforms, such as Windows.

Version 0.8 also introduces several new builder features and experimental support for BTRFS (the B-tree file system). BTRFS is another copy-on-write file system, and the new BTRFS storage driver can be used as an alternative to the AUFS driver.

Particularly noteworthy: Docker 0.8 fixes a large number of bugs and improves performance. The sheer volume of commits reflects the Docker team's push toward a production-ready 1.0 release. Given the team's monthly release cadence, a 1.0 release in the April-to-May time frame looks likely.

Resources

Docker main site: https://www.docker.io
Docker registry: https://index.docker.io
Docker Registry API: http://docs.docker.com/reference/api/registry_api/
Docker Hub API: http://docs.docker.com/reference/api/docker-io_api/
Docker Remote API: http://docs.docker.com/reference/api/docker_remote_api/

Note: Since this translation was completed, the Docker Index API has been renamed the Docker Hub API; the links above point to the new API documentation.


For now, Docker is a good fit for at least the following scenarios:

1) Testing: Docker is a great fit for delivering test builds. A Docker image can be handed directly to testers, who no longer need to coordinate with operations and development to set up an environment and deploy the build.

2) Test data separation: During testing, switching between test scenarios often requires modifying the data in a dependent database or clearing cached data in memcached or Redis. Docker is lighter and more convenient than a traditional virtual machine, so it is easy to separate such data into different images and switch between them as needed (see the sketch after this list).

3) Development: Developers share the same Docker image, while the source code they modify is mounted from local disk. There are no more headaches caused by programs behaving differently in different environments, and newcomers can set up a working development and build environment quickly.

4) PaaS cloud services: Docker can be driven from the command line or programmatically. With automatic loading and service self-discovery, services packaged in a Docker image can easily be scaled out as a cloud service. For example, a document conversion and preview service packaged in an image can have the number of running containers increased or decreased on demand as requests grow.
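As a sketch of the test-data idea from item 2 above (the container ID and image names here are hypothetical), a prepared data state can be frozen into an image and swapped in at will:

$ docker commit <container-id> testdata/orders-snapshot1   # freeze the current state of a seeded database container
$ docker stop <container-id>
$ docker run -d testdata/orders-snapshot1                  # later, start a fresh container from that known state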

Applied specifically to the testing field, Docker's value shows up in:

1) Quickly build a compatibility test environment

As the characteristics of Docker images and containers suggest, when an application must be validated against many combinations of Web servers, middleware, and databases, base Docker images can be used to quickly create the various containers, load the corresponding components, and start them running, saving testers a great deal of the time normally spent building test environments.

2) Quickly build a complex distributed test environment

Docker's lightweight virtualization makes it easy to build a container environment of hundreds of distributed nodes on a single machine (or even on a tester's laptop), simulating distributed, complex test environments that previously took a great deal of time and machine resources to set up.

3) Continuous Integration

Docker can create and tear down containers quickly, so in a continuous-integration environment, deployment and validation can be performed frequently and quickly.
