DevOps and Continuous Delivery Under Cloud Native Computing

DevOps is a compound of two words, Development and Operations. Its main purpose is to break down the wall separating the two departments. In short, DevOps can make a company's processes faster, more efficient, and more reliable. So what does continuous delivery have to do with DevOps, and why discuss them together?

Concept and Introduction
There are many explanations of cloud native. Two common formulations boil down to the same idea: cloud native is continuous delivery + DevOps + microservices + containerization.

In other words, cloud native is a collection of concepts: microservices and containers on the technology side, plus management practices such as continuous delivery and DevOps, along with the organizational change needed to support them.

So how do you define cloud native? The definition on the CNCF official website is a reasonably authoritative one. To understand cloud native, you must first understand Kubernetes, because it is the cornerstone of cloud native.

Kubernetes was the first project hosted by the CNCF, and the whole cloud native ecosystem is built on top of it. Representative cloud native technologies include containers, microservices, service meshes, immutable infrastructure, and declarative APIs.

-DevOps

Whenever DevOps comes up, the discussion resembles the parable of the blind men and the elephant: every company describes it differently, and some simply call their in-house release platform "DevOps".

But let us start from Wikipedia's definition: DevOps aims to automate the processes of software delivery and infrastructure change, so that building, testing, and releasing software becomes faster, more frequent, and more reliable.

-Continuous integration, continuous delivery and continuous deployment

Continuous delivery makes software delivery faster and more frequent, meaning the software can be released at any time. Its goal is to make building, testing, and releasing software faster and more frequent. Delivering faster is how we gain efficiency, but the prerequisite for delivering faster is assured quality. Alongside continuous delivery, many people have also heard of continuous integration and continuous deployment.

In traditional software development, integration happened only after each component was finished, and that integration phase could take anywhere from a few weeks to a few months. Continuous integration instead calls for frequent integration from the early and middle stages of development. The advantage of frequent integration is that problems surface early rather than in the final step.

Many teams will say they already do integration. But how often do you build? Does every release have to go through a fresh build process? If so, consider continuous integration, because continuous integration is the first step toward continuous delivery.

Continuous delivery then takes everything the earlier stages produced and delivers it to customers. Different companies name this stage differently: some call it "test to production", others "test to pre-production" or "going online", and so on.

Continuous deployment means every step is automated: after a developer submits code, it can be deployed to the production environment with no human intervention. The only difference between continuous deployment and continuous delivery is whether the release to production happens automatically.
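The distinction between the three terms can be sketched as a toy pipeline. This is an illustrative model only; the stage names and the `auto_deploy` flag are made up for this example and do not come from any real tool.

```python
# Toy model of the CI -> continuous delivery -> continuous deployment spectrum.
# Stage names and the auto_deploy flag are illustrative, not from a real tool.

def run_pipeline(commit: str, auto_deploy: bool) -> list:
    """Run the stages triggered by one code submission."""
    executed = []
    for stage in ["build", "test", "package"]:   # continuous integration
        executed.append(stage)
    executed.append("release-candidate")         # continuous delivery: releasable at any time
    if auto_deploy:                              # continuous deployment: no manual gate
        executed.append("deploy-to-production")
    return executed

# Continuous delivery: a human decides when the candidate goes to production.
print(run_pipeline("abc123", auto_deploy=False))
# Continuous deployment: every passing commit reaches production automatically.
print(run_pipeline("abc123", auto_deploy=True))
```

The only structural difference between the two runs is the final automated stage, which matches the definition above.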

Technical terms have turned over very quickly in recent years, and nowhere more visibly than in infrastructure. A few years ago, deploying an application generally meant one of the following: at the most basic level, building your own machine room, and taking on every hardware, networking, power, and cooling problem yourself; or racking servers, installing operating systems, and then deploying directly or on top of virtualization.

Later came virtual machines in the style of VMware, virtualizing the deployment target and driving it with scripts; later still, everyone started adopting private cloud technologies like OpenStack.

No matter how the method changes, and no matter what infrastructure you use, it eventually evolves toward a hybrid cloud or multi-cloud mode. Your delivery process must support these models as well as container images. The process has become increasingly automated and increasingly able to meet business demands; for example, we use elastic scaling to balance business needs against cost.

Continuous delivery: a pipeline?
Continuous delivery runs through a chain of environments: testing, pre-production, and production, each one closer to the customer's environment than the last. Although each stage has its own concerns, you will find that every process and environment needs releases and verification.

A pipeline automates this chain of processes well, improving both the efficiency of developers' builds and the quality of releases.

-Software delivery challenges and issues

Everyone knows that software delivery inherently means changing what is running online, and change means risk. The biggest challenge in delivery is a failure during the release process, which directly becomes a production failure.

A 2016 survey report shows that 81% of teams kept their change failure rate within 15%, but only 35% of teams could keep it within 5%. A 15% rate means 15 failed releases out of every 100. Changes, therefore, are critical to the quality of software delivery.
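The survey figures can be restated as expected failures per batch of releases. The arithmetic below is purely illustrative and is not part of the original survey.

```python
# Restating a change-failure rate as expected failed releases.
# Illustrative arithmetic only; the percentages come from the survey cited above.

def expected_failures(rate_percent: float, releases: int = 100) -> float:
    """Expected number of failed releases at a given change-failure rate."""
    return releases * rate_percent / 100

print(expected_failures(15))  # a 15% rate means 15 failures per 100 releases
print(expected_failures(5))   # a 5% rate means 5 failures per 100 releases
```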

Since quality is so important, how to choose delivery tools?

-Pipeline
The world's first assembly line was introduced by Ford Motor Company, and it greatly improved the efficiency of automobile production. Likewise, after a developer submits code, the software passes through a series of processes before it is delivered to the customer. So how do we model and run that process?

Here we have to keep the pipeline's goals in mind: rapid integration and rapid delivery. The software pipeline is directly analogous to the factory assembly line, but it has no single standard process; we can customize it to fit the business.

For example, when you build a pipeline, you may need a series of stages such as code scanning, testing, deployment to a test environment, and pre-production. But how do you ensure the pipeline you build works well?

This is not difficult; do the following three things:
First, automate. Many developers feel they are already automated and already doing continuous integration, but if your builds are not automated, or not timely enough, there is no basis for talking about testing or integration.

Second, automate testing. Every artifact produced by a build must be tested, whether through functional testing, performance testing, or stress testing. The point of testing is to reach the expected quality, and it too should be automated where the situation allows, because only automation improves efficiency.

Third, integrate continuously. Only when the pipeline runs repeatedly, quickly, and frequently can problems be discovered and solved early.
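The three practices above can be sketched as a minimal pipeline runner: every stage is automated, tests gate the artifact, and the whole sequence is meant to rerun on each commit. All stage functions here are hypothetical stand-ins, not a real CI system.

```python
# Minimal pipeline runner illustrating the three practices:
# automated stages, automated tests as a gate, and repeatable runs.

from typing import Callable, List, Tuple

def make_pipeline(stages: List[Tuple[str, Callable[[], bool]]]):
    """Return a runner that executes stages in order, stopping at the first failure."""
    def run() -> Tuple[bool, List[str]]:
        passed = []
        for name, step in stages:
            if not step():            # fail fast: a broken stage stops the pipeline
                return False, passed
            passed.append(name)
        return True, passed
    return run

run = make_pipeline([
    ("build", lambda: True),
    ("unit-tests", lambda: True),
    ("perf-tests", lambda: False),    # simulate a failing performance/stress test
])
ok, done = run()
print(ok, done)  # the failure is caught before anything ships
```

Because the runner is cheap to invoke, it can be triggered on every code submission, which is exactly the "repeatedly, quickly, and frequently" property the third point asks for.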

Across the software industry, achieving at least these three points is what makes the goal of continuous delivery reachable.

The practice of pipelines can be divided into three aspects:

-Build the binary package only once

Building the binary package only once means that after the developer finishes writing code, a single build produces one binary package, and that same package then moves through all subsequent stages. This avoids wasted time and improves efficiency.
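The build-once practice can be sketched as producing one artifact, fingerprinting it, and promoting the same fingerprint through every environment instead of rebuilding. The environment names and build step below are illustrative stand-ins.

```python
# "Build once" sketched: one artifact is produced and fingerprinted, then the
# same fingerprint is promoted through every environment without rebuilding.

import hashlib

def build(source: bytes):
    """Hypothetical build step: produce an artifact and its content digest."""
    artifact = source + b"-compiled"              # stand-in for a real compile step
    digest = hashlib.sha256(artifact).hexdigest()
    return artifact, digest

artifact, digest = build(b"commit-abc123")

promoted = {}
for env in ["test", "staging", "production"]:
    promoted[env] = digest                        # promote the artifact, don't rebuild

# Every environment received the byte-identical artifact:
assert len(set(promoted.values())) == 1
print(digest[:12])
```

Verifying the digest at each promotion step is what guarantees that what was tested is exactly what ships.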

Even so, two builds of the same code can produce different results: although the code and commit ID have not changed, the resulting package may differ. That is exactly why the package should be built once and then reused.

-Adopt the same deployment method in different environments

When releasing and deploying, every environment, whether testing, pre-production, or production, should be deployed the same way. Do not, for instance, use Jenkins for continuous integration when deploying to the test environment but switch to a different set of tools for the production release; different deployment tools may lead to different final delivered products.
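The idea can be sketched as one deployment routine reused for every environment, differing only in its target configuration, as opposed to one tool per environment. The host names and settings below are made up for illustration.

```python
# One deployment routine for all environments; only the configuration varies.
# Host names, replica counts, and environment names are illustrative.

ENVIRONMENTS = {
    "test":       {"hosts": ["test-1"],           "replicas": 1},
    "staging":    {"hosts": ["stg-1"],            "replicas": 2},
    "production": {"hosts": ["prod-1", "prod-2"], "replicas": 4},
}

def deploy(artifact: str, env: str) -> str:
    """Run the same deployment steps everywhere; only the data changes."""
    cfg = ENVIRONMENTS[env]
    return f"{artifact} -> {','.join(cfg['hosts'])} x{cfg['replicas']}"

for env in ENVIRONMENTS:
    print(deploy("app-1.4.2.tar.gz", env))
```

Because the code path is identical, any bug in the deployment logic shows up in testing long before it can affect production.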

In the production environment, whether it runs on physical machines, cloud hosts, or virtualized hardware, high availability must be guaranteed, which means allocating enough redundant resources to keep the service available to online users.

-Flexible pipelines

The point of the pipeline is to improve efficiency while guaranteeing quality, but a team's services may be written in Java, in Go, and so on. Your pipeline must therefore satisfy a variety of demands.

For example, one team may have enough test environments, while another project needs to provide an environment to other teams for joint debugging. In that case the stability of the test environment must be guaranteed, for instance by building two sets of test environments: environment A for the team's own project verification, and environment B for third parties to use for joint debugging.

The last thing to do is verify that the whole pipeline actually lands. It is not enough to build the pipeline and declare the job done; when it is really put into practice, you must check that an automated test run is triggered and completes after every code submission.

-Two delivery processes

In terms of tools, the artifact repository is Artifactory, builds run on Jenkins, code scanning uses SonarQube, object storage is an internal private cloud, and releases go out over rsync.

In the second delivery process, the biggest change was replacing rsync with Ansible, which greatly improved efficiency and solved the queuing problem.

During the release process, we originally used an incremental packaging model, but later changed it to full packages. Testing also expanded: originally we ran only functional tests, and later added security tests; originally we only hooked up static code scanning, and later added screening for security vulnerabilities. The two pipelines share one remaining problem: each produces both a test package and a production package.