Dockone WeChat Share (64): Some explorations on implementing DevOps based on Docker

[Editor's note] This share covers four topics: an introduction to DevOps, an introduction to Docker, the benefits of building DevOps on Docker, and a walkthrough of a Docker-based DevOps pipeline instance.

Introduction to DevOps

DevOps (a portmanteau of "development" and "operations"), often translated as development-operations integration, can be defined as a process, method, culture, movement, or practice. Its main goal is to enhance communication and collaboration between development and other IT functions and, through a highly automated pipeline, to accelerate the delivery of software and services.


In a more mature software and services delivery team, the technical dimension is divided into three components: development, testing, and operations. The role of DevOps is to connect these three parts tightly together, providing an automated pipeline from software development to quality assurance to technology operations, enhancing communication and collaboration among different roles, and enabling rapid delivery of software and services based on user needs.

"Those fools in development released a new package and pushed the system CPU to 100%, and the application crashed. What level of people are these..."

"Those fools in operations have terrible skills. What kind of system are they maintaining? It runs fine on my machine; it only hangs once deployed on theirs..."

"This one is development's fault..."

"No, this one is on operations..."

The description is slightly exaggerated, but this kind of blame-shifting happens everywhere in IT companies. There is no point arguing about who is right; the finger-pointing is baked into the genes of development and operations, and the ultimate victim is the user. The irony is that both sides work for the user: developers must continuously change the application's features according to users' needs, and operations staff must provide stable, continuous service according to those same needs. Yet an invisible wall has formed between the two, hindering communication and collaboration between development and operations, and DevOps arrived precisely to tear that wall down.

Thoughts on implementing DevOps

Technical level

DevOps is not a tool, but it must be implemented through tools, and today there are many commercial and open-source products that can be assembled into an effective tool chain to support DevOps at the technical level. Tools alone are not enough, though: even the best tool is useless if no one can use it well, so IT staff familiar with the tool chain are needed to provide technical support and to use those tools to achieve a high degree of automation in DevOps.

Process level

DevOps is a pipeline from development to operations, and a series of processes and specifications must be established so that the pipeline runs efficiently and automatically. IT managers need a holistic view of software or service delivery, a clear understanding of where the pain points of the different roles lie across the delivery cycle, and the ability to tailor appropriate collaboration processes accordingly.

Organizational Level

DevOps does not simply merge development and operations; it strengthens collaboration and communication between the two departments. This requires that managers pay sufficient attention to the enterprise's IT departments and are willing to promote DevOps as a model for efficient collaboration between development and operations, and that the development and operations staff themselves are open, accepting, and willing to collaborate.

DevOps is an intangible thing; it cannot be simply defined or quantified by a tool or a piece of software. But tools and software are an important part of achieving DevOps, and Docker is one of the tools best suited to it.

About Docker

Docker is an open platform for building, shipping, and running distributed applications. It lets developers or operations staff package an application, together with the files it depends on at runtime, into a standardized unit: the container.


Containers are a very old technology: the UNIX chroot facility can be considered the prototype of the container; later came LXC (Linux Containers), built on the now-familiar namespace and cgroups technologies; and finally today's flourishing Docker. Standing on the shoulders of its predecessors, Docker's greatest contribution is simplifying and standardizing the use of containers. Riding the waves of open source, the Internet, cloud computing, and big data, it has become the darling of the age.


Many people like to compare containers with virtual machines; in fact, both are implementations of virtualization technology. Their architectures are identical at the bottom: both require the support of physical hardware and an operating system. The difference lies in the middle layer: in the virtual-machine scenario, a hypervisor (such as KVM) sits between the operating system and the virtual machines, while in the container scenario a container engine (Docker Engine, in Docker's case) sits between the operating system and the containers. A virtual machine encapsulates an operating system plus applications, while a container encapsulates the application directly, which is why containers are lighter than virtual machines.


Comparing the characteristics of virtual machines and containers, the advantages of containers over virtual machines are light weight, flexibility, and high resource utilization. The drawback is that their isolation is weaker than a virtual machine's, which is why container security has been endlessly magnified as an issue. But it is precisely because a container is not completely sealed off into an isolated black box that it can deliver better resource utilization than a virtual machine.

Personally, I don't think containers will replace virtual machines in the short term; for a long time to come the two will coexist. Which one ultimately prevails depends not on the technology itself but on what users need in each era.


PS: I hope a sharp-eyed friend can spot the small flaw in this figure.

Introduction to Docker basic components



    • Docker Image
      A Docker image is a read-only template from which containers are run.

    • Docker Container
      A Docker container is a standardized unit for running an application.

    • Docker Registry
      A Docker registry server stores images.

    • Docker Engine
      The Docker engine creates, runs, and manages containers on a host.



As friends familiar with Docker know, Docker summarizes its main feature in one sentence: "Build, Ship and Run Any App, Anywhere." Build an image, use a registry to ship it, and finally use the engine to run the container and the application inside it on any platform (anywhere).
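The build-ship-run cycle can be sketched with a minimal Dockerfile; note that the base image, file names, and registry host below are illustrative assumptions, not details from the original share. The comments note the CLI command for each stage:

```dockerfile
# Build: `docker build -t myapp:1.0 .` turns this file plus its
# build context into a read-only, layered image.
FROM python:3-slim              # base layer: minimal OS userland plus runtime
COPY app.py /srv/app.py         # package the application file into the image
CMD ["python", "/srv/app.py"]   # what the container executes on start

# Ship: `docker tag myapp:1.0 registry.example.com/myapp:1.0`
#       `docker push registry.example.com/myapp:1.0`
# Run:  `docker run -d registry.example.com/myapp:1.0` on any Docker host
```

The same image, once pushed, can be pulled and run unchanged on any host that has a Docker engine, which is the "anywhere" in the slogan.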

Docker Native Tools Introduction


    • Docker Machine: lets users quickly deploy Docker hosts on an infrastructure platform;
    • Docker Swarm: lets users schedule and run containers in a clustered environment;
    • Docker Compose: lets users orchestrate and deploy applications in a clustered environment.


These three tools form Docker's native environment; alongside the hot external ecosystem of Kubernetes, Mesos, Rancher, etcd, and others, they make up a fairly complete Docker container ecosystem. As for native versus external tools, I personally feel there is no absolute good or bad in a tool or technology; what matters is the applicable scenario and the customer's needs. And it is precisely this turbulent mix of cooperation and competition among ecosystems that has driven container technology to develop rapidly and gradually mature.
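As a sketch of how the native tools fit together, a hypothetical docker-compose.yml (v2 syntax; the service names, image names, and registry host are invented for illustration) describes an application that Compose can deploy onto hosts provisioned by Machine and scheduled by Swarm:

```yaml
# docker-compose.yml -- hypothetical two-service application
version: "2"
services:
  web:
    image: registry.example.com/myapp:1.0   # assumed private-registry image
    ports:
      - "8080:80"                           # host:container port mapping
    depends_on:
      - db                                  # start db before web
  db:
    image: postgres:9.5                     # official image from Docker Hub
    environment:
      POSTGRES_PASSWORD: example            # illustrative only; protect real credentials
```

Running `docker-compose up -d` starts both services; pointing the Docker client at a Swarm manager (for example via `docker-machine env`) lets the same file deploy across a cluster.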

Scenarios where Docker works

    • Continuous integration and continuous delivery
    • Development and operations integration (DevOps)
    • Container Cloud
    • Big Data



Docker's official use cases are CI/CD, DevOps, big data, and infrastructure optimization (cloud).

What's interesting is that these usage scenarios seem to trace Docker's history to date.

At first, Docker was aimed mainly at developers, giving them an environment for rapid application development and testing; this is the CI/CD scenario.

As it developed further, Docker no longer focused only on the development dimension but moved toward operations as well, arriving at the DevOps scenario.

Once operations are involved, the infrastructure is inevitably touched, and today's infrastructure revolves around cloud computing, so Docker reaches the level of infrastructure optimization: the container cloud.

Once the container-cloud infrastructure exists, it inevitably has to serve the applications in the cloud; combined with Docker's own many advantages, this naturally leads to big-data scenarios.

The rollout of Docker Cloud and Docker Datacenter likewise reflects this gradual expansion of support from development toward operations scenarios; the advent of DDC aims squarely at the enterprise's internal container cloud.

It's hard to say whether new technology made Docker a success or Docker is carrying the new technology. At least for now, Docker's direction suits this era. Docker is only three years old; it is hard to imagine how much energy it will yet unleash. Leave that for time to tell.

The benefits of Docker for DevOps

Advantage One

Unification and standardization of the development, test, and production environments. An image is a standard deliverable that can be run as a container in development, test, and production, ultimately achieving full consistency of the application and its runtime dependencies across all three environments.

Advantage Two

Solving the heterogeneity of the underlying environment. The diversity of underlying environments creates friction on the path from dev to ops; with Docker Engine, the type of infrastructure can be ignored. Different physical devices, different kinds of virtualization, different cloud platforms: as long as the environment runs Docker Engine, the application will ultimately provide its services from a container.

Advantage Three

Easy to build, migrate, and deploy. Dockerfiles make image builds standardized and reusable, and the image's own layering mechanism improves build efficiency. Using a registry, a built image can be migrated to any environment, and deploying there only requires turning the static, read-only image into a dynamically running container.
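The layering mechanism mentioned above can be exploited by ordering Dockerfile instructions from least- to most-frequently changed. The file below is a hypothetical sketch (the base image, file names, and paths are assumptions) showing how the dependency layer is reused from cache across rebuilds:

```dockerfile
FROM python:3-slim
WORKDIR /srv
# Dependencies change rarely: copying and installing them first means
# these layers are served from the build cache on most rebuilds.
COPY requirements.txt .
RUN pip install -r requirements.txt
# Application code changes often: only the layers from here down
# are rebuilt when app.py changes.
COPY app.py .
CMD ["python", "app.py"]
```

The ordering matters because Docker invalidates the cache from the first changed instruction onward; putting volatile files last keeps rebuilds fast.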

Advantage Four

Lightweight and efficient. In contrast to virtual machines, which must encapsulate an operating system, a container only needs to encapsulate the application and the files it depends on, achieving a lightweight application runtime environment with higher hardware utilization than virtual machines.

Advantage Five

Standardization and rapid deployment of the tool chain. Once the many tools or software needed to implement DevOps have themselves been Dockerized, one or more tool chains can be deployed rapidly in any environment.

Instance sharing

Architecture Introduction


The DevOps environment is managed with the open-source product Rancher and is divided into three environments plus one private registry shared across them. On top of the Rancher UI and the Docker hosts, each environment is configured with rights management for the different roles, and each role can only access its corresponding environment. Development, test, and operations staff are free to choose the web UI or the Docker CLI to manage their own environments.

DEV ENV

Defined as the development environment, containing the developers' client notebooks and a Docker host on the server side.

TEST ENV

Defined as the test environment, containing the testers' client notebooks and a Docker host on the server side.

OPS ENV

Defined as the operations environment, containing the ops staff's client notebooks and, on the server side, a Swarm cluster built from two Docker hosts.

Pri-registry

The private image registry is the core of the entire DevOps pipeline. Images are shipped to and run in the different environments as the final deliverable, ensuring consistent deployment of the application environment from dev to ops.

Operation Process



    • Developers submit code to Gogs on the server through a local Git client, and Jenkins builds the code into an image locally. The developer starts a container from the resulting image to preview the development result; once it meets expectations, the image is pushed to the private Docker registry for storage.

    • Testers run the developer's newly submitted image from the private registry as a container and carry out manual or automated functional testing; an image that passes testing is given a new tag so that it can be used by the other environments.

    • Ops starts containers from the tagged images in the private registry and finally delivers the service to the customer.
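The first step above could be wired into a Jenkins job. The declarative pipeline below is a hedged sketch under assumed names: the Gogs repository URL, image name, and registry host are invented, and the original share did not publish its actual job definitions.

```groovy
// Jenkinsfile -- hypothetical sketch of the dev stage of the pipeline
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // assumed Gogs repository URL
                git url: 'http://gogs.example.com/team/myapp.git'
            }
        }
        stage('Build image') {
            steps {
                // build an image tagged with the Jenkins build number
                sh 'docker build -t registry.example.com/myapp:${BUILD_NUMBER} .'
            }
        }
        stage('Push to private registry') {
            steps {
                sh 'docker push registry.example.com/myapp:${BUILD_NUMBER}'
            }
        }
    }
}
```

Testers would later retag a verified image (for example with `docker tag` and a second `docker push`) so the other environments can consume it, matching the flow described above.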


Currently the environment is mainly used for building and releasing Docker images, while the ops (prod) environment runs a number of applications used internally by the project, serving more as a demo environment for customers to access than as a real production environment.

Time is limited, so this exchange ends here. The content is relatively rough and intended only as an introduction; thank you all for participating. Open source is not only technology but also a way of thinking. The full environment-construction process and technical details will be published successively on my subscription account and on the DockOne official website (dockone.io); interested friends can follow the "Small Zhang Baked Eggplant" subscription account. Thanks again!

Q&A

Q: For small and medium-sized Internet companies, especially those whose capability for secondary DevOps development is not strong, what do you recommend regarding Docker clusters?

A: For such companies, the need to run a DevOps pipeline on a clustered environment is low; they should focus more on the use and management of the individual tools at each step, and leave clustered environments to large-scale container deployments. As I said before, technology has no absolute good or bad: just choose what fits best. If running containers as if they were virtual machines solves some of your problems, then run them that way; technology and tools serve the need.
Q: Operations complains that development's application consumes too many resources; why would using containers solve that? If development and operations simply run the same environment, performance and optimization stay the same; at best everyone is talking on the same platform.

A: Correct. If the application itself is poorly written, it will cause excessive resource consumption in both the development and the operations environment. One aspect of a pipeline like DevOps is solving the problem of heterogeneous environments. For example, suppose I develop on Windows with Java 1.5, but in production the app server runs Linux with Java 1.6; that difference alone can cause functional problems.

Q: Hello, some traditional applications are hard to split up, so moving them to the cloud inevitably means using containers as virtual machines. Are there good practices in this area?

A: Splitting traditional applications is indeed a challenge, and for availability and performance reasons traditional applications do get put into containers treated as virtual machines. This approach achieves high hardware utilization, but because traditional applications are tightly coupled, it is hard to exploit the flexible migration and flexible deployment that containers offer. Worse still, once a traditional application runs in a container, how is its data protected? I do not yet have a good practice for this, and it is indeed a problem the whole Docker industry in China has to face.

Q: Whether on virtual machines or containers, is the traditional dual-node deployment model, failing over between active and standby, still meaningful?

A: Yes. Active/standby and mutual-standby solutions in traditional IT environments have stood the test of time. Containers did not appear to overturn everything that came before, but to give customers' application scenarios more choices.

Q: Currently what gets containerized are applications and the like; can a Unix-like system be containerized? For example, could a containerized Apple system be implemented?

A: Containers are positioned as packaging for applications; even a CentOS image merely pulls in the bin and lib files the application needs to run. Your question leans toward running a container as a virtual machine; take a look at RancherVM or HyperContainer, which may meet your needs.

The above content was organized from the group sharing on the night of June 21, 2016. The speaker goes by the nickname Yiu Yang and is the DevOps team leader on the IBM AICS cloud services project, dedicated to the design, implementation, and technical support of DevOps tool chains in customer cloud environments. He is currently mainly responsible for researching, building, and promoting a Docker-based DevOps pipeline and container cloud platform within the project. An open-source advocate enthusiastic about IT technology exchange and sharing, he runs the "Small Zhang Baked Eggplant" technical subscription account. DockOne organizes weekly technical shares; interested students are welcome to add WeChat: Liyingjiesz to join the group, and you can leave us a message with topics you would like to hear or share.