How to Build A Complete DevOps Workflow Based on Kubernetes?

Source: Internet
Author: User
Keywords: devops workflow, devops workflow automation, kubernetes devops tools
Foreword
DevOps is a big topic: it involves both a company's technical culture and the technical capabilities of its developers. This article focuses on the technical side, namely how to use Kubernetes to serve a DevOps workflow, and covers three aspects:

1. What is Kubernetes? Why is Kubernetes suitable for building a DevOps workflow?
2. How to build a DevOps workflow based on Kubernetes?
3. Some common challenges in DevOps.

1. What is Kubernetes? Why is Kubernetes suitable for building a DevOps workflow?
Kubernetes is currently the most widely used and popular container orchestration and scheduling system, and it has become the de facto platform for orchestrating cloud-native applications. Today, cloud-native applications are largely built on top of the Kubernetes API.

Kubernetes brings many practical features to developers, such as consistency, extensibility, and self-healing. Consistency means that an application built on Kubernetes can be migrated seamlessly to any environment, whether a public cloud, a private cloud, or across clouds. Extensibility refers to Kubernetes' plug-in mechanism: the platform can be extended and customized with plug-ins to fit any environment. Self-healing covers mechanisms such as health checks, automatic recovery from failures, and automatic scaling, all of which are essential to keeping the system running.

The architecture of Kubernetes is relatively simple and is divided into two parts: the Master and the Nodes.
The Master is responsible for maintaining the state of the cluster and provides the external entry point to the cluster (the API server);
The Nodes are responsible for running containers and providing the environment containers need, such as storage and networking.

Kubernetes itself has very few essential components. On the Master there are only four: the API server, the scheduler, the controller manager, and etcd. On each Node the essential components are the kubelet, kube-proxy, and a container runtime such as Docker. Everything else is deployed into the cluster as an extension. For example, DNS is a necessary function of a cluster, but it is provided by an add-on that runs inside the cluster as containers.

Because the Kubernetes architecture is so simple, it fits naturally into a DevOps workflow. Take continuous integration as an example: when testing new features during a version upgrade, the software under test used to be managed through RPM packages before Kubernetes, and upgrading those packages could cause conflicts. In Kubernetes, each application's packages live inside its own container, so conflicts between software packages are avoided. If everything an application depends on is packed into its container image, the application behaves consistently across different environments.

In DevOps, the monitoring system is critical. In Kubernetes, containers run inside Pods: the application can run in one container of a Pod, while other containers in the same Pod act as sidecars that handle monitoring.
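As a minimal sketch of this sidecar pattern, the Pod below runs an application container alongside a hypothetical monitoring sidecar; the image names and ports are placeholders for illustration, not part of the original article.

```yaml
# One Pod, two containers: the application plus a monitoring sidecar.
# Image names and ports are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-monitoring
  labels:
    app: web
spec:
  containers:
  - name: web                        # the application container
    image: example.com/web:1.0       # hypothetical application image
    ports:
    - containerPort: 8080
  - name: metrics-sidecar            # sidecar that exposes/collects metrics
    image: example.com/exporter:1.0  # hypothetical monitoring agent image
    ports:
    - containerPort: 9100
```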

In DevOps it is often necessary to release and change the system frequently, and to roll back when problems are found. During frequent releases and rollbacks, a system is needed to manage the history of these releases so that a rollback is possible whenever a problem appears. At the same time, a release must not interrupt normal external access to the application. Kubernetes already provides native mechanisms for this, namely Service and Deployment. A Service provides an entry point for external access and load-balances traffic automatically; a Deployment manages the replicas and, when an upgrade is needed, performs a rolling update.
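A minimal sketch of this pairing is shown below, assuming a hypothetical web application image; the replica counts and rolling-update parameters are illustrative, not values prescribed by the article.

```yaml
# A Deployment that performs rolling updates, fronted by a Service.
# The image name and numbers are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # keep most replicas serving during the upgrade
      maxSurge: 1         # allow one extra replica while rolling
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:1.0   # bump this tag to trigger a rolling update
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # the Service load-balances across the Deployment's Pods
  ports:
  - port: 80
    targetPort: 8080
```

If a new version causes problems, rolling back to the previous revision can be done with kubectl rollout undo deployment/web.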

2. DevOps based on Kubernetes
DevOps is a practice that combines people, processes, and products to continuously deliver value to users. The process includes product planning and tracking, software development, building and testing, deployment, operation and maintenance, and monitoring and optimization, all connected in series. These stages together are usually called a DevOps workflow, and the core goal of this workflow is to continuously deliver valuable products to users.

How does Kubernetes serve a DevOps workflow? An advantage of Kubernetes is that different products and tools can be chained together. Take Jenkins, the most commonly used CI/CD tool, as an example: the Jenkins X project integrates Jenkins with Kubernetes and represents CI/CD concepts as Kubernetes custom resources, so the Kubernetes API can be used to define the CI/CD process.
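To make the idea of "defining CI/CD through the Kubernetes API" concrete, here is a purely illustrative sketch of what a pipeline expressed as a custom resource could look like. The apiVersion, kind, and all fields below are hypothetical and do not reproduce the actual Jenkins X schema.

```yaml
# Hypothetical custom resource illustrating "pipeline as a Kubernetes object".
# This is NOT the real Jenkins X API; names and fields are invented for illustration.
apiVersion: example.dev/v1
kind: Pipeline
metadata:
  name: web-ci
spec:
  source:
    git: https://gitlab.example.com/team/web.git   # placeholder repository
  stages:
  - name: unit-test
    image: golang:1.21                 # run tests in a container
    command: ["go", "test", "./..."]
  - name: build-image
    image: gcr.io/kaniko-project/executor:latest   # build the container image in-cluster
  - name: deploy-dev
    helmChart: ./charts/web            # deploy to the development environment with Helm
```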

In addition, if you build the entire workflow on Kubernetes, you can also pick monitoring projects from the Kubernetes ecosystem. For example, many metrics can be obtained directly from Prometheus and used to build monitoring, alerting, and a basis for later optimization of the product. Based on these tools, a complete DevOps workflow can be built.
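As one hedged example of turning those metrics into an alert, the snippet below uses the standard Prometheus alerting-rule format; the metric name, threshold, and labels are placeholders and assume the application already exports request metrics.

```yaml
# A minimal Prometheus alerting rule; metric names and thresholds are placeholders.
groups:
- name: web-alerts
  rules:
  - alert: HighErrorRate
    expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.05   # assumes the app exports http_requests_total
    for: 10m
    labels:
      severity: warning
    annotations:
      summary: "High 5xx rate on {{ $labels.instance }}"
```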

A typical DevOps workflow looks like this: developers write code in VS Code and push it to GitLab for storage; a GitLab hook triggers Jenkins to run the CI steps, such as unit tests and building a Docker image; the image is then deployed with Helm to the development or test environment; in the test environment Jenkins triggers the integration tests, and once they pass the release can be deployed to the production environment. Prometheus, Grafana, and other monitoring components are deployed into the cluster as Kubernetes add-ons, completing the monitoring end of the CI/CD pipeline.
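For the Helm deployment step, the chart's values file typically carries the image produced by CI. The sketch below is a hypothetical values.yaml for such a chart; the field names depend on the chart and are assumptions here.

```yaml
# Hypothetical values.yaml for the application's Helm chart.
# Field names depend on the chart; CI typically overrides image.tag per build.
image:
  repository: registry.example.com/team/web
  tag: "1.0.0"            # CI overrides this, e.g. with the Git commit SHA
replicaCount: 2
service:
  type: ClusterIP
  port: 80
resources:
  requests:
    cpu: 100m
    memory: 128Mi
```

Jenkins can then deploy with a command along the lines of helm upgrade --install web ./charts/web --set image.tag=<commit-sha>, pointing at the environment's kubeconfig.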

3. The challenges of DevOps
The first challenge of DevOps is the lack of automated testing. When too little is invested in testing within the DevOps workflow, newly released features go out untested, and problems then surface in production: after a deployment, availability drops or resource usage suddenly spikes. This can partly be addressed through organizational culture. A simpler measure is to set appropriate test-coverage requirements for the process, for example requiring that test coverage must not decrease when a new version is released. Coverage alone is hard to enforce strictly, however, so it still needs to be reinforced by the organization's culture.

The second problem is that the DevOps toolchain lacks integration. Without links between the tools, a high degree of automation is impossible. The recommended approach is to use toolchains from the Kubernetes ecosystem and connect the applications through the Kubernetes API, so that they form one complete DevOps workflow.

The third problem is that results are hard to quantify and cooperation between teams is hard to coordinate. For example, when users complain that the website is slow, nobody knows where the slowness comes from, so every team has to check its own product. Without a complete monitoring system, it is difficult to locate which product, and therefore which team, is responsible. With Kubernetes, you can trace the call chain of the entire application without changing the application itself, for example by using a service mesh to monitor these applications. This removes the bottleneck of not being able to pinpoint the faulty product after a problem is found.
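As a hedged illustration, with Istio (one common service mesh) tracing and metrics sidecars can be added without touching the application, simply by labeling the namespace for automatic sidecar injection; the namespace name below is a placeholder.

```yaml
# Enabling automatic Istio sidecar injection for a namespace.
# Once injected, the mesh collects per-request traces and metrics
# without any change to the application code. Namespace name is a placeholder.
apiVersion: v1
kind: Namespace
metadata:
  name: web-prod
  labels:
    istio-injection: enabled
```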

The last problem is that even when Kubernetes ecosystem tools are used, not following Kubernetes and DevOps best practices leads to unexpected problems in operation. The simplest example is using a Kubernetes Deployment for rolling updates during an upgrade. The expectation is that, while the update rolls out, the old replicas keep serving, new replicas are created gradually, and the Service keeps load-balancing normally. In practice, however, the Service drops requests for a moment on every upgrade, and availability dips slightly with every deployment. This happens because one of the most basic Kubernetes best practices was skipped: no health checks were configured for the application, so traffic reaches new replicas before they are actually ready. Following best practices is a job for the whole team, not only the team that builds the DevOps workflow; the application teams must also apply them to the workflow and to the applications themselves. Another example is a high failure rate for Jobs: the applications' resources must be limited, and the Jobs created by the DevOps workflow must also be constrained so they cannot exhaust the cluster's resources.
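A hedged sketch of both fixes follows, assuming an HTTP application that exposes a /healthz endpoint (the endpoint, images, and numbers are placeholders): a readiness probe keeps the Service from routing to a replica until it is ready, and resource requests/limits keep a CI/CD Job from exhausting the cluster.

```yaml
# Readiness/liveness probes keep traffic away from replicas that are not ready.
# The /healthz endpoint, images, and numbers are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:1.1
        ports:
        - containerPort: 8080
        readinessProbe:            # the Service only sends traffic once this passes
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:             # restart the container if it stops responding
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20
---
# A Job with resource requests/limits so workflow Jobs cannot exhaust the cluster.
apiVersion: batch/v1
kind: Job
metadata:
  name: integration-tests
spec:
  backoffLimit: 2                  # cap retries instead of failing endlessly
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: tests
        image: example.com/web-tests:1.1   # hypothetical test-runner image
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: "1"
            memory: 512Mi
```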

Summary
In Kubernetes it is not recommended to manage a Pod directly, that is, to create a Pod without a controller behind it. Every Pod should be managed by a controller, for example a Deployment or a ReplicaSet. These controllers ensure that the Pods stay in the desired state. You can also write your own controller and expose an API to manage the Pods' life cycle. Either way, whenever a container is running there is always a controller responsible for it, and you avoid the situation where a container keeps running but nobody knows who is managing it.