Compared with virtual machines, Docker has clear advantages in weight, configuration complexity, and resource utilization. As Docker technology matures, more and more enterprises are considering using Docker to improve their IT systems.
This article enumerates some practical Docker application scenarios, hoping to serve as a starting point and help readers use Docker more effectively.
Application Packaging
Anyone who has built software packages such as RPMs or gems knows that every package depends on particular versions of libraries, and these usually have to be written explicitly into a dependency list. Dependencies are generally divided into compile-time dependencies and run-time dependencies.
In a traditional infrastructure environment, to ensure that a generated package installs and runs correctly on other machines, it is usually necessary to create a clean virtual machine, or manually build a chroot environment, before packaging; install the various dependencies in this clean environment; and then execute the packaging script. After building the package, you then need to create another clean environment in which to install and run the package, to verify that it behaves as expected. This gets the packaging done, but it has at least the following drawbacks:
- It is time-consuming.
- Dependencies are easily missed: for example, after repeated debugging in the clean environment, the missing packages are installed one by one, but one of them is then forgotten when writing the spec file, so the next build has to be debugged all over again, or the finished package turns out to be unusable, and so on.
The packaging problem can be solved well by Docker. The specific practice is as follows:
A "clean packaging environment" is easy to prepare: the official Ubuntu, CentOS, and other system images provided by Docker can be used directly as pristine, uncontaminated packaging environments. The Dockerfile itself also serves as living documentation: once the Dockerfile for the packaging image is written, the image it builds can be reused for packaging indefinitely.
Example:
Suppose we want to build an RPM package for a PHP extension module, such as php-redis.
First, write a Dockerfile to create the packaging image, as follows:
FROM centos:centos6
RUN yum update -y
RUN yum install -y php-devel rpm-build tar gcc make
RUN mkdir -p /rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS} && echo '%_topdir /rpmbuild' > ~/.rpmmacros
ADD http://pecl.php.net/get/redis-2.2.7.tgz /rpmbuild/SOURCES/redis-2.2.7.tgz
ADD https://gist.githubusercontent.com/mountkin/5175c213585d485db31e/raw/02f6dce79e12b692bf39d6337f0cfa72813ce9fb/php-redis.spec /redis.spec
RUN rpmbuild -bb /redis.spec
Then execute docker build -t php-redis-builder . in the directory containing the Dockerfile. After the build succeeds, the RPM package we need has been generated inside the image.
Next, execute the following commands to copy the resulting package out of the Docker image:
[ -d /rpms ] || mkdir /rpms
docker run --rm -v /rpms:/rpms:rw php-redis-builder cp /rpmbuild/RPMS/x86_64/php-redis-2.2.7-1.el6.x86_64.rpm /rpms/
The /rpms directory on the host will then contain the RPM package we just built.
Finally, verifying the package is very simple: just create a new Docker image that adds the newly generated package and installs it.
The Dockerfile is as follows (for the ADD instruction to find the RPM file, keep this Dockerfile in the /rpms directory):
FROM centos:centos6
ADD php-redis-2.2.7-1.el6.x86_64.rpm /php-redis-2.2.7-1.el6.x86_64.rpm
RUN yum localinstall -y /php-redis-2.2.7-1.el6.x86_64.rpm
RUN php -d "extension=redis.so" -m | grep redis
Execute docker build -t php-redis-validator . in the /rpms directory; if the build succeeds, the RPM package works correctly.
Multi-version hybrid Deployment
As products are continuously upgraded, it is common within an enterprise to run multiple applications, or multiple versions of the same application, on one server.
However, when multiple versions of the same software are deployed on a single server, resources such as file paths and ports often conflict, making it difficult for the versions to coexist.
With Docker the problem becomes very simple. Each container has its own independent filesystem, so file path conflicts do not arise at all; port conflicts can be resolved simply by specifying different port mappings when starting the containers.
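As an illustrative sketch (the image name myapp and its tags are assumptions, not from the original), two versions of the same service that both listen on port 80 inside their containers can run side by side by mapping them to different host ports:

```shell
# Both containers listen on port 80 internally; mapping them to different
# host ports (8081 and 8082) lets the two versions coexist on one server.
# The image name "myapp" and its tags are hypothetical.
docker run -d --name myapp-v1 -p 8081:80 myapp:1.0
docker run -d --name myapp-v2 -p 8082:80 myapp:2.0
```

Each container also keeps its own files under its own filesystem, so the two versions never trip over each other's paths.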
Upgrade Rollback
An upgrade often involves not just the application software itself but also its dependencies. The dependencies of the old and new versions may differ or even conflict, so rolling back is generally difficult in a traditional environment.
With Docker, each application upgrade only requires building a new image, stopping the old container, and starting the new one. When a rollback is needed, stop the new container and start the old one again; the old container and all its dependencies are untouched, so the whole process completes in seconds, which is very convenient.
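The stop/start flow described above can be sketched as follows (container and image names are hypothetical):

```shell
# Upgrade: pull or build the new image, stop the old container, start the new one.
docker stop app-old
docker run -d --name app-new -p 80:8080 myapp:2.0

# Rollback: stop the new container and restart the old one. The old container
# and all of its dependencies are still intact, so this takes only seconds.
docker stop app-new
docker start app-old
```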
Multi-Tenant resource isolation
Resource isolation is a strong requirement for companies that provide shared hosting services. If you use a VM, the isolation is very thorough, but the deployment density is relatively low, resulting in increased costs.
The Docker container leverages the Linux kernel's namespaces to provide resource isolation capabilities.
Combined with cgroups, you can easily set resource quotas for a container. This not only meets the resource isolation requirement, but also makes it easy to set different quota limits for different tiers of users.
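For example (the quota values and image name are assumptions), per-container memory and CPU quotas can be set at startup, with larger quotas for higher-tier tenants; the flags map directly onto cgroup limits:

```shell
# A "basic" tenant gets 256 MB of memory and a small CPU weight; a "premium"
# tenant gets 2 GB and a larger weight. The image "tenant-app" is hypothetical.
docker run -d --name tenant-basic   -m 256m --cpu-shares 256  tenant-app
docker run -d --name tenant-premium -m 2g   --cpu-shares 1024 tenant-app
```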
However, in this scenario the programs running inside the containers are not trusted by the hosting provider, so special measures are needed to ensure that users cannot manipulate the host's resources from inside a container (that is, escape the container). Although the probability of such a problem is very small, security is no trivial matter, and an extra layer of protection is certainly reassuring.
For security and isolation hardening, the following measures can be considered:
- Use iptables to block all traffic from containers to intranet IPs (opening access to specific IPs and ports where necessary).
- Use SELinux or AppArmor to restrict the sysfs and procfs directories a container can access, and mount them read-only.
- Harden the system kernel with grsecurity.
- Use cgroups to control memory, CPU, and disk I/O.
- Use tc to control each container's bandwidth.
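As a rough sketch of the first and last measures (the subnet, service IP, and veth interface name are assumptions specific to one host):

```shell
# Block all traffic from the Docker bridge to the intranet, then re-open one
# specific service. Both rules are inserted at the top of the FORWARD chain,
# so the ACCEPT (inserted second) is evaluated before the DROP.
iptables -I FORWARD -i docker0 -d 10.0.0.0/8 -j DROP
iptables -I FORWARD -i docker0 -d 10.0.0.5/32 -p tcp --dport 443 -j ACCEPT

# Cap one container's bandwidth by shaping its host-side veth interface with tc.
# "veth1234" is a placeholder for the container's actual veth device.
tc qdisc add dev veth1234 root tbf rate 10mbit burst 32kbit latency 400ms
```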
In addition, in actual testing we found that the system's random number generator is easily blocked when the entropy pool is exhausted. In a multi-tenant shared environment, rng-tools needs to be enabled on the host to replenish the entropy source.
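The symptom can be checked on the host by reading the kernel's entropy counter; values that stay near zero mean readers of /dev/random inside containers may block (the install command in the comment is illustrative and distro-specific):

```shell
# Read how many bits of entropy the kernel pool currently holds.
entropy=$(cat /proc/sys/kernel/random/entropy_avail)
echo "available entropy: ${entropy} bits"

# If this value stays very low, install and start rng-tools on the host to
# feed the pool, e.g. (CentOS-style commands; adjust for your distro):
#   yum install -y rng-tools && service rngd start
```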
In this scenario there is a lot of work that Docker itself does not provide, and many details have to be implemented separately. To this end we offer a security-enhanced Docker management platform that solves the problems above; those who need it can find more details on the Csphere website.
Internal development Environment
Before container technology emerged, companies often provided each developer with one or more virtual machines as development and test environments.
Development and test environments generally carry low load, so a great deal of system resources is wasted on the overhead of the virtual machines themselves.
Docker containers add almost no extra CPU or memory overhead, making them well suited to providing development and test environments within a company.
And because Docker images are easy to share within the company, they also help standardize the development environment.
If you want to use a container as a development machine, you need to solve the problems of logging into the container remotely and managing processes inside it. Although Docker was originally designed around a "microservice" architecture, in our practical experience it is entirely feasible to run multiple programs in one container, even sshd or upstart.
Csphere also has mature products and solutions in this area; interested readers are welcome to try them and send feedback.
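As an illustrative sketch of an sshd-based dev container (the password and image details are assumptions; the CentOS 6 base from the earlier examples is reused):

```dockerfile
# Hypothetical dev-box image: CentOS 6 with sshd running in the foreground.
FROM centos:centos6
RUN yum install -y openssh-server passwd
# Demo-only root password; use SSH keys in real deployments.
RUN echo 'devpass' | passwd --stdin root
# Generate the SSH host keys that sshd needs at startup.
RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N '' && \
    ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key -N ''
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]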
PostScript
The above summarizes some of the scenarios in which we use Docker in real development and production environments, together with the problems encountered in each scenario and their solutions, in the hope of inspiring those who intend to use Docker. We also welcome more readers to share their own experience with Docker. (Editor: Liu Yajon)
Author: Wei, founder of Nicescale, has long worked on DevOps-related development, focusing on configuration management automation for web applications and peripheral services on Linux. He is proficient in Go and PHP, has studied container technology in some depth, and is currently focused on Docker-based enterprise solutions. Before starting the company he was a technical manager at the Sina cloud platform (SAE). Like-minded readers are welcome to get in touch through any channel.
Original link: https://blog.nicescale.com/docker-use-cases/
OpenCloud 2015 will be held in Beijing on April 16-18, 2015. The conference comprises three technical summits, the 2015 OpenStack Technical Conference, the 2015 Spark Technology Summit, and the 2015 Container Technology Summit, plus a number of in-depth industry training sessions. The themes focus on technical innovation and application practice, with leading cloud computing speakers from home and abroad sharing practical, first-hand content on products, technologies, services, and platforms. OpenCloud 2015: the people in the know are here!
For more information on speakers and the schedule, please see the OpenCloud 2015 introduction and the official website.