Pain-Point-Driven DevOps Practice: Container Delivery (2)


DevOps was proposed almost ten years ago, but it has only gained real traction in the last two years; it is now widely recognized and validated by the industry. When you actually try to implement it, however, it turns out to be a very large undertaking, and it can be hard to know where to start. For more background, see Pain Point of Driven DevOps Practice: Challenge of Implementing (1).

Container Service Platform
This is a simple overview of the architecture of our container service platform, which is built on Mesos. Today, most container orchestration for Docker is done with Kubernetes, a complete container-based solution that can essentially run containerized services out of the box.

When we made our technology selection at the end of 2015, however, Kubernetes was still quite complex and had some open problems, so we chose Mesos instead. Mesos is not designed specifically for Docker; it was already mature and stable at the resource-scheduling layer from its use in big-data scenarios, so it let us implement our resource-scheduling capabilities quickly. It is flexible, supports many frameworks, and, importantly for us, integrates easily with our existing platforms.

A container service platform needs several core modules. The first is resource scheduling, which provides resources as a service: to an application or a developer, a resource is a runnable program, not a server or a virtual machine. The second is service discovery, which notifies the access layer or the calling side of online service changes so that those changes are transparent to the application layer.

Operations-related concerns such as container operations, monitoring, and logging also need specific handling and must integrate with the existing operations platform. These are the main points to consider when building a container service platform. Beyond these modules, image management is, in my opinion, the core service of the platform, and one of the cornerstones of continuous delivery.
Images: Immutable Infrastructure
Let's look at images first. The image is the carrier of immutable infrastructure: everything is treated as code. Beyond the application code itself, application configuration, environment configuration, startup scripts, and build scripts are all kept in the version repository under version control. Previously, for example, configuration was centrally managed by a configuration center, with different configurations for the development, test, and production environments.

With containers, we store configuration together with the code: all configurations are baked into the image at build time, and the appropriate one is selected by passing different parameters at startup. Most importantly, the same image is delivered to every environment, which enforces strong environmental consistency and avoids problems caused by environment drift.
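As a minimal sketch of this idea, an entrypoint script can pick the configuration that was baked into the image based on a startup parameter. The environment variable name and the config paths here are assumptions for illustration, not our actual layout:

```shell
#!/bin/sh
# One image, many environments: all configs are inside the image;
# a startup parameter (here the APP_ENV variable, hypothetical)
# decides which one the process uses.
select_config() {
  case "$1" in
    dev)  echo "/app/conf/dev.properties" ;;
    test) echo "/app/conf/test.properties" ;;
    prod) echo "/app/conf/prod.properties" ;;
    *)    echo "unknown APP_ENV: $1" >&2; return 1 ;;
  esac
}

CONFIG_FILE=$(select_config "${APP_ENV:-dev}")
echo "starting with config: $CONFIG_FILE"
# exec java -jar /app/app.jar --config "$CONFIG_FILE"   # hypothetical launch line
```

The same image can then be run with `APP_ENV=test` in testing and `APP_ENV=prod` in production, so only the parameter differs between environments.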

We organize images in three layers; image layering is a very common practice in the industry.

The first layer is the base image: a minimal set of tools. Why have this layer? Mainly so that we can control global concerns through it, such as shipping and upgrading common tooling in one place.

The second, middle layer maintains components such as nginx, PHP, and Tomcat. These open-source components have frequent security issues; from time to time a version will carry a vulnerability, and we fix those vulnerabilities and control component versions through this layer. We can also pre-distribute releases of the middle layer to hosts, which improves the efficiency of image distribution to some extent.

The third layer builds the application image on top of the lower layers, and delivery is completed through that image.
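A minimal sketch of this three-layer structure follows; every image name, tag, and path here is a made-up placeholder, not our actual registry content:

```dockerfile
# --- Layer 1: base image (Dockerfile.base, hypothetical) ---
# FROM debian:stable-slim
# RUN apt-get update && apt-get install -y curl procps
# -> built once and tagged e.g. base:1.0; global tools are upgraded here

# --- Layer 2: component layer (Dockerfile.nginx, hypothetical) ---
# FROM base:1.0
# RUN apt-get update && apt-get install -y nginx
# -> tagged e.g. nginx-base:1.14; security patches and component
#    versions are controlled here, and this layer can be
#    pre-distributed to hosts to speed up image pulls

# --- Layer 3: application image ---
FROM nginx-base:1.14
COPY ./site /usr/share/nginx/html
COPY ./entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```

Because layers 1 and 2 change rarely, most application builds only add a thin layer 3 on top.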

Container Management
Resource Scheduling
Now for the container-management modules. The first is resource scheduling: how containers are run, with computing resources allocated on demand, network resources attached, and storage mounted. Data is persisted through distributed storage, for example block storage for high-performance data, plus shared or object storage. The scheduler also needs health checking, which is the basis for self-healing.
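The health-check idea behind self-healing can be sketched as follows: probe the service a few times, and only report it unhealthy (so the scheduler can reschedule it) after repeated failures. The probe command and retry count are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of a health check that backs self-healing: run a probe
# command up to N times; report healthy on the first success,
# unhealthy if every attempt fails.
check_health() {
  probe="$1"
  retries="${2:-3}"
  i=0
  while [ "$i" -lt "$retries" ]; do
    if $probe; then
      echo healthy
      return 0
    fi
    i=$((i + 1))
    sleep 0   # a real probe would wait between attempts
  done
  echo unhealthy
  return 1
}

check_health true        # a probe that always succeeds
check_health false || :  # a probe that always fails
```

In practice the probe would be an HTTP endpoint or a process check, and an "unhealthy" result would trigger the platform to replace the container rather than just print a line.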

Beyond these basics, some applications need special configuration or resources, such as a fixed IP or port, orchestration of multiple containers, or related containers placed on the same host. We handle these special scheduling requirements through tags on resources. At the platform level, resource high availability and disaster tolerance must also be considered.
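Tag-driven scheduling boils down to filtering candidate hosts by the tags they carry. A minimal sketch, with made-up host names and tags:

```shell
#!/bin/sh
# Sketch of tag-based placement: read "host tag1,tag2,..." lines on
# stdin and keep only hosts whose tag list contains the required tag.
filter_hosts_by_tag() {
  required="$1"
  while read -r host tags; do
    case ",$tags," in
      *",$required,"*) echo "$host" ;;
    esac
  done
}

# Hypothetical inventory: only hosts tagged "ssd" qualify.
printf '%s\n' \
  'host-a ssd,fixed-ip' \
  'host-b bigmem' \
  'host-c ssd,bigmem' | filter_hosts_by_tag ssd
```

A real scheduler (Mesos with constraints, for instance) does this matching internally; the point is that tags turn special placement requirements into a simple filtering step.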

Service discovery
Service discovery is the key service that makes the backend transparent to the frontend: changes in backend resources are invisible to the caller or the user. Service discovery relies on a registry or a name service. Taking host networking mode as an example, containers share the host machine's network, and every release or deployment assigns a new, random IP and port.

Service discovery detects these changes and either notifies clients of the service's real address or notifies the load balancer to pull a new configuration and update its distribution policy. The load-balancing service or proxy is part of service discovery, so overload protection and a gray (canary) release strategy also need to be considered.
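As a minimal sketch of the load-balancer side, the discovery component can render a fresh nginx-style upstream block from the current ip:port list whenever endpoints change. The service name and endpoints below are assumptions:

```shell
#!/bin/sh
# Sketch: turn the current endpoint list for a service into an
# nginx-style upstream block, which the load balancer then reloads.
render_upstream() {
  name="$1"
  shift
  echo "upstream $name {"
  for endpoint in "$@"; do
    echo "    server $endpoint;"
  done
  echo "}"
}

# Hypothetical endpoints freshly registered by two containers:
render_upstream web 10.0.0.5:31001 10.0.0.6:31017
```

In a real setup this rendering would be triggered by registry events (e.g. a watch on the name service), followed by a graceful reload of the proxy.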

Monitoring and logging
At the operations level, the main concerns are monitoring and data collection, which rely on agents. To keep maintenance simple, some teams use a single super-agent that handles all monitoring, capacity collection, and log collection. Our approach is to separate these concerns and run each capability in its own container.

For example, there is a dedicated log-collection container that does exactly one thing, alongside a resource-collection container and an application-monitoring container. The overhead is small, the containers do not affect each other, and since the container model itself encourages one container per task, maintenance is simpler.

Container-Based Application Delivery
So far I have covered images and containers. Through the container service platform we have been able to expose basic online operations capabilities and achieve a certain level of operations-as-a-service, but this is still the operations layer; application delivery involves more links in the chain.

The next question is how to automate delivery between development and operations. As mentioned earlier, image management should be the core of application delivery. The application goes from a commit in the code repository, through compilation and packaging, to a business image that is pushed to the image registry, from which the container platform distributes the application. This is the most basic application-delivery scenario and process, so we design our image management and delivery flow around it.

By integrating with the code-management platform, a code commit becomes the starting point, and a Jenkins cluster completes the fully automated flow of compiling the code and building the image. Delivering a single Docker image lets the test, pre-release, and production environments share one deployment method and a standardized operations interface.
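The commit-to-deployment flow can be sketched as a dry run that only prints the commands it would execute; the registry host, application name, and commit id below are hypothetical placeholders:

```shell
#!/bin/sh
# Dry-run sketch of the basic delivery flow: a commit id becomes an
# image tag, the image is built and pushed, then deployed.
# (Commands are echoed, not executed, so the flow is visible.)
build_and_push() {
  commit="$1"
  app="$2"
  tag="registry.example.com/${app}:${commit}"
  echo "docker build -t ${tag} ."
  echo "docker push ${tag}"
  echo "deploy ${app} ${tag}"
}

build_and_push abc1234 billing
```

Tagging the image with the commit id is what ties every running container back to an exact state of the code repository, which is the basis for delivering one identical image to test, pre-release, and production.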

Continuous delivery
At this point we have the most basic automated application-delivery capability. Continuous delivery, of course, also requires guarantees of quality and efficiency, so the next questions are how to ensure quality and how to build out the continuous-delivery knowledge system. Here I will focus on built-in quality.

Quality should not exist in isolation; it should run through every step of the delivery process. In the continuous-integration (CI) phase we integrate quality-inspection tools such as static code scanning. A dedicated team in the company develops a static code-quality scanning product, so we only need to integrate that capability. Unit testing is also indispensable; it is hard at the beginning, and quality gates in the continuous-delivery pipeline are needed to push development toward continuous improvement.
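A quality gate is, at its core, a threshold check that blocks the pipeline. A minimal sketch, where the issue counts and thresholds are made-up numbers rather than our real policy:

```shell
#!/bin/sh
# Sketch of a pipeline quality gate: compare a metric reported by a
# scanning tool (here, an issue count) against a threshold and fail
# the stage if it is exceeded.
quality_gate() {
  issues="$1"
  max_allowed="${2:-0}"
  if [ "$issues" -gt "$max_allowed" ]; then
    echo "gate failed: $issues issues (max $max_allowed)"
    return 1
  fi
  echo "gate passed"
}

quality_gate 0 0
quality_gate 5 3 || :   # a failing gate would stop delivery here
```

The same pattern applies to unit-test results or coverage: the gate's nonzero exit code is what makes the CI stage fail and prevents the build from moving on.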

In the testing phase, the goal is to introduce as much automated testing as possible to reduce human involvement, gradually covering interface testing and UI testing. We also introduced a security-scanning mechanism to discover vulnerabilities early in the delivery process. These capabilities again come from integrating the company's existing platforms, and open source offers many solutions as well. The most important thing in the automated-testing phase, in my view, is not the tools but the investment in developing and maintaining test cases.

What is easily overlooked in continuous delivery is feedback from the operations stage after delivery completes. The monitoring system must detect in time whether a change or release has introduced anomalies; in other words, the operations monitoring platform should be integrated into the pipeline so that, for example, the pipeline can automatically produce an online health report.

Finally, every stage of the pipeline needs exception feedback and a mechanism that blocks delivery on failure, with defects tracked through TAPD defect management to close the quality-control loop.

In the end we built a CI/CD pipeline bus with the ability to plug in various tools, from the TAPD project-management platform and code management to automated testing tools, automated release on the container service, and operations monitoring. Integrating these tools and platforms into the delivery pipeline lets us continuously improve the quality and efficiency of application delivery, and the results achieved with the tooling in turn help promote the culture. This is an example of our continuous-delivery pipeline.

Summary
To sum up our practice: we did not set out to do CI/CD from day one. We started from the problems and from rethinking the organizational model, which drove us to build an operations automation service system, then to step outside the operations perspective to build application-delivery capabilities, and finally to integrate more tools into the continuous-delivery pipeline, which still needs ongoing optimization to reach the ideal state.