NetEase Cloud Container Service Microservice Practice: Microservice Testing and the Full Image-Based Test Handoff Process


This article comes from the NetEase Cloud community.

Objective

In recent years, many Internet projects have moved from monolithic services toward microservices, especially projects with complex architectures and broad business scope, where microservices are the clear trend. They allow services to be built, updated, and operated independently, freeing up productivity and improving delivery efficiency and quality.

At present, the NetEase Cloud Container Service team manages its microservices in a DevOps manner, and the Container Service project has obvious microservice characteristics and a clear hierarchy:


First, the diversity and complexity of the online and offline environments determine the deployment frequency: according to rough statistics, there are 400+ deployments per week across the online and offline environments. Within each two-week iteration, each module is updated at least 2 to 6 times before test handoff, and testers update each module at least another 2 to 6 times after handoff before it meets the release criteria. A good build-and-deploy tool shortens build and deployment time, saves developers and testers time, and enables rapid delivery. Therefore, finding good build and deployment tools and platforms is of first-class importance and priority.

Second, containerized services differ greatly from before in testing mode, release mode, and branch management. The testing team needs to adapt and optimize its test-handoff model, branch management, and quality evaluation to fit the characteristics of containers and the microservice architecture. This article introduces the complete containerized continuous-integration pipeline model and our practical exploration of writing quality evaluations into image metadata.

Third, microservice projects place higher requirements on testing, reflected in a wider testing scope and a greater testing depth. For example, "The Practice of Microservices on the NetEase Container Cloud Platform (Part 1)" described smoothly splitting the User Service out of the main project: the split turned what used to be internal calls within the User Service into Facade + HTTP interface calls. In a nutshell, the number of interfaces doubles (internal logic is exposed as HTTP interfaces); put more professionally, the layered test pattern becomes more pronounced, which widens the testing scope. Furthermore, different microservices have different scenarios and characteristics, so test analysis must go deeper and test strategies must be more targeted in order to fully cover the various service types in the microservice architecture. This places higher requirements on testers' skills, on testing platforms and tools, and on testing efficiency, so deep exploration in this area is necessary. The goal is all-round protection of both quality and efficiency.

The following takes a detailed look at how the Container Service testing team solved this series of problems and challenges in the three areas above.

First, the practice of build and deployment modes in the microservice architecture

The Omad service-deployment era: up to Q3 of this year, most of the Container Service web services were built with the Omad automated deployment platform and deployed remotely from a build machine to the target machines.


As shown above, the user creates a deployment task through the UI and triggers the build and deploy actions. The build machine receives the command, generates the project files from an integration template, downloads the source code, compiles and packages it, and uploads the package to NOS; the target machine's agent then downloads the installation package, replaces the template properties, and starts the service.

This system works reasonably well for projects where the microservice architecture is not pronounced, but drawbacks appear as the number of services grows and multiple teams build and deploy at the same time. For example, parallel multi-task builds block each other, and NOS upload/download latency lowers update efficiency when many services are built simultaneously. When several people build and deploy the same service at once, conflicts can cause build failures. None of these problems could be solved effectively. Most importantly, build and deployment depend heavily on the environment configuration of the target machine, such as the JDK and Tomcat versions, so in complex multi-environment scenarios, online and offline, the same service may behave inconsistently across environments. This ushered in the era of containerized services.

A new era of DevOps: containerized local updates:

Two issues need to be addressed when containerizing a service. The first is decoupling the configuration file from the code: configuration can be passed in via ENV environment variables when the Docker container starts, or centralized in a configuration data service; both approaches are in use at present.
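As a minimal sketch of the environment-variable approach (the image name, variable names, and values below are all hypothetical):

```sh
# Start the same image in any environment; configuration is injected at
# container start instead of being baked into the image.
docker run -d \
  -e DB_URL=jdbc:mysql://10.0.0.12:3306/nce \
  -e LOG_LEVEL=INFO \
  --name nce-web \
  hub.example.com/nce/nce-web:1.8.0
```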

The second issue is how to move the built image from the build machine to the target machine. We currently use the online image repository provided by NetEase Cloud Container Service itself to store and pull images. Of course, you can also use SCP or similar commands on the build machine to upload the image directly to the target machine.
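Both transport paths can be sketched roughly as follows (registry address and image name are hypothetical):

```sh
# Option 1: push to the image repository, pull on the target machine
docker push hub.example.com/nce/nce-web:1.8.0   # on the build machine
docker pull hub.example.com/nce/nce-web:1.8.0   # on the target machine

# Option 2: export the image and copy it over directly with scp
docker save hub.example.com/nce/nce-web:1.8.0 -o nce-web.tar    # build machine
scp nce-web.tar deploy@target-host:/tmp/
ssh deploy@target-host 'docker load -i /tmp/nce-web.tar'        # target machine
```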

Build and deploy operations run on the build machines (there are several) and on the target machines through shell commands and scripts executed over SSH. This phase is the local-update era; the operating procedure is shown below.


This approach solves the multi-task parallel blocking problem of the Omad era, as well as the conflicts caused by multiple people building and deploying at once (there are multiple build machines, and even on a single one, each build is a scripted, fully independent process that does not affect the others). A deployment only needs to update the image field in the target machine's Pod.yaml template, so there is little or no conflict; even if a conflict occurs, the last updated image version prevails, that image is pulled, and the container is restarted. It also solves the inconsistent service behavior caused by differing target-machine environments, because every service runs in a standardized container whose preinstalled Tomcat, JDK, and other dependencies are exactly the same.
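A minimal sketch of such a Pod.yaml template on the target machine (names and tag are hypothetical); a local update only rewrites the image field before the container is recreated:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nce-web
spec:
  containers:
  - name: nce-web
    image: hub.example.com/nce/nce-web:1.8.0   # field replaced on each update
    ports:
    - containerPort: 8080
```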

Currently, through collaboration between developers and testers, most of the online environment has been deployed in this mode, and testers have switched the offline staging environment to this local-update mode as well, keeping the staging environment and the online environment as consistent as possible.

A new era of DevOps: containerized cluster management:

In the local-update deployment mode, updating an environment still requires SSH logins to multiple target machines, and deployed modules cannot be migrated between target machines on demand. Over the last few iterations, these problems were solved in the joint-debugging and test environments by transitioning to the cluster-management update mode. In this mode, each target machine runs Kubelet for monitoring and management and registers itself with the Kubernetes master. Creating a Deployment on the dashboard initiates the deployment and update of the corresponding microservices, with progress and status visualized, and the target machines for a service can be specified dynamically, so that when one target machine goes down, the service is quickly brought up on another.
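For illustration, a minimal Deployment of the kind created through the dashboard might look like this (all names are hypothetical); upgrading the service then amounts to bumping the image tag, and the master reschedules pods when a node fails:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nce-repo
spec:
  replicas: 2                  # pods are rescheduled onto healthy nodes
  selector:
    matchLabels:
      app: nce-repo
  template:
    metadata:
      labels:
        app: nce-repo
    spec:
      containers:
      - name: nce-repo
        image: hub.example.com/nce/nce-repo:2.3.1   # bump this tag to upgrade
```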

After testers recently switched the offline test CI environment to this mode, they generally reported that update operations are easier than before. Build operations can even be skipped: once a developer finishes self-testing in the joint-debugging environment, the tester obtains the latest development image address directly from the dashboard, replaces the image address to upgrade the service, and can start testing, greatly reducing the time testers used to spend on repetitive packaging, connecting to target machines, and other tedious actions.

The cluster-management update mode is shown below:



Cluster-management mode will be gradually rolled out to the staging environment and then to production once the trial stabilizes. One current risk is that a Kubelet upgrade or restart causes all containers on the target machine to restart; this risk needs further evaluation and mitigation.


Second, the practice of the full image-based test handoff process in the microservice architecture

The Container Service project testing team has produced a complete set of solutions combining the microservice architecture itself with the testing process, including plans and practices for image-based handoff, containerization, continuous integration, and quality evaluation, covering a microservice project from multiple angles across development, testing, release, and quality archiving, so as to provide all-round protection of quality and efficiency.

Introducing the image-based test handoff mode

At present, most services in the Container Service production environment have changed from code-based release to container-based release, and images support multi-environment publishing via configuration parameters in environment variables and a configuration data service center, which naturally brings portability: multiple environments can run a service from the same image. After comprehensively weighing efficiency and version traceability, the testing team proposed the image-based handoff scheme: instead of submitting a git commit ID to the tester, developers submit an image tag. On the one hand, this saves testers the steps of building the code and updating the environment themselves; on the other hand, through image version management and the newly introduced practice of writing quality evaluations into image metadata, tested images are archived, producing image records whose different quality versions can be traced quickly.
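The handoff itself is simple to sketch (registry address and tag convention are hypothetical): the developer pushes a tagged image, and that tag string, rather than a commit ID, is what the tester receives.

```sh
# Developer: build, tag with an iteration/RC version, and push
docker build -t hub.example.com/nce/nce-user:v3.2.0-rc1 .
docker push hub.example.com/nce/nce-user:v3.2.0-rc1

# Tester: deploy exactly that tag; the same immutable image can later be
# promoted to the staging and production environments
docker pull hub.example.com/nce/nce-user:v3.2.0-rc1
```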

The code-based handoff mode worked as follows:

After the developer finished self-testing, they told the tester the git commit ID; the tester built the package and deployed it to the offline environment, and every bug fix required building and deploying the package again. The drawback is that changes can only move forward, which makes rollback inconvenient: once the code is merged, rolling back or splitting out specific changes is difficult. It is also inconvenient for troubleshooting newly introduced issues, and quality can only be evaluated per iteration rather than per submitted version; when a problem occurs, the only option is a full rollback rather than reverting specific commits.

The flowchart of the code-based handoff mode is shown below; yellow represents the developer's work, green represents the tester's actions.





The image-based handoff model and writing quality evaluations into image metadata:

To better trace the quality data of each image version, the Container Service testing team built a quality evaluation and archiving tool on top of the Jira platform, which automatically links the version quality evaluated on Jira to the metadata of the specific image version and archives it in the image repository. Whether an image meets the release criteria is judged from the content of the evaluation, and looking up historical versions later is also quick and easy.
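The article does not spell out the mechanism; one plausible way to attach a quality evaluation to image metadata is Docker image labels, as in the hypothetical sketch below:

```sh
# Re-tag the tested image with the quality verdict baked in as labels
echo "FROM hub.example.com/nce/nce-user:v3.2.0-rc1" | docker build \
  --label "quality.verdict=passed" \
  --label "quality.jira=NCE-1234" \
  -t hub.example.com/nce/nce-user:v3.2.0 -

# The archived evaluation can later be read back from any pulled image
docker inspect --format '{{json .Config.Labels}}' \
  hub.example.com/nce/nce-user:v3.2.0
```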

The flowchart of quality evaluation entering image metadata, through handoff and release, is shown below; yellow represents the developer's work, green represents the tester's actions. The Jenkins jobs are triggered automatically by Jira callback events, with no human involvement:


Building the full containerized continuous integration and quality evaluation pipeline

Beyond changing the handoff model so that quality evaluations automatically enter the image after manual testing, we also built a continuous integration and quality evaluation pipeline for a number of microservices. The purpose is to integrate static code inspection, unit testing, containerized build and deployment, automated interface testing, coverage statistics, and test conclusions into a single process that carries a service all the way from code submission through automated testing to automatic release and archiving of the conclusions. It suits microservice modules with stable interfaces, good unit-test coverage, and few module dependencies.

This full-process practice is currently being built for the Container Service team's middle-tier services, with the following process:





When new logic is committed, the containerized continuous integration process is triggered:

1. Interlayer-unittest-sa uses Sonar + PMD to perform static checks on the code, counts the issues that must be fixed (issues of major severity and above must be fixed before release), and runs the unit tests written by developers. When this job passes, the Interlayer-image-build job is triggered.

2. Interlayer-image-build builds the image with the build-machine script described in the first part of this article and uploads it, preparing for the environment update. The resulting image tag is passed to the next job.

3. Interlayer-image-deploy takes the newly built image tag and uses the Kubelet update interface of the cluster-management era, described in the first part of this article, to update the target machine's image tag remotely, thereby updating the corresponding middle-tier service version.

4. Interlayer-it-test contains all the interface tests of the middle tier; it is triggered and executed automatically after the environment update. Execution coverage is collected through remote Jacoco instrumentation and backfilled into Sonar's coverage statistics.

5. Interlayer-buildimage-test is the last step: when all interface tests pass, the test results are passed as job parameters, a repack action is triggered, the test results are packaged into the metadata of the image under test, and the image is archived to the image repository.
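The article describes these as five chained Jenkins jobs. Purely as an illustration, the same chain expressed as one declarative Jenkinsfile might look like the sketch below; the shell scripts, Maven profile, and label names are hypothetical stand-ins for the team's internal ones.

```groovy
pipeline {
  agent any
  stages {
    stage('static-check-and-unittest') {   // Interlayer-unittest-sa
      steps { sh 'mvn clean verify pmd:check sonar:sonar' }
    }
    stage('image-build') {                 // Interlayer-image-build
      steps { sh './build-image.sh ${GIT_COMMIT}' }  // pushes the new tag
    }
    stage('image-deploy') {                // Interlayer-image-deploy
      steps { sh './deploy.sh interlayer ${GIT_COMMIT}' }
    }
    stage('interface-test') {              // Interlayer-it-test
      steps { sh 'mvn test -Pit' }         // Jacoco coverage backfilled to Sonar
    }
    stage('repack-and-archive') {          // Interlayer-buildimage-test
      steps { sh './repack.sh --label test.result=passed' }
    }
  }
}
```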

Finally, the tester receives an image whose metadata records the automated-test pass and can proceed to manual testing; the manual verification result is then written into the image tag's metadata via a Jira-comment trigger, after which this image version can be released to the staging environment and even the online environment.

Ideally, if the iteration content is evaluated as needing only automated validation before release, manual testing can be skipped entirely, and the image tag that passed automated testing is rolled out to the staging and online environments. I believe this will become reality in the near future of the DevOps era: developers will be able to run the whole DevOps process from code submission to release themselves, and historical image versions can be traced for rapid updates, fast rollbacks, and other operations.

Third, the practice of testing work in the microservice architecture

The microservice trend in the Container Service is becoming more and more pronounced, and the requirements on the depth and breadth of testing have risen a step as well.

The Container Service includes two types of services. The first is vertical business services: application-supporting businesses provided end to end by the Container Service project itself, such as the microservice and image repository businesses; each of these two large vertical businesses is realized by several cooperating microservices. The second is horizontal business services, which provide services by accessing other external project applications or by being used by them: for example, the G0 gateway provides the OpenAPI interface for other projects to access, the G1 gateway provides the resource-call interface for underlying projects to access, the Internal module's services are called by other projects, and the User Service is likewise called by other projects to perform its duties. These two types of microservices each have their own characteristics. Let us first look at the testing characteristics of splitting vertical business microservices.

Microservice layered-testing response: tiered automation and end-to-end automation for vertical business microservices

Take the microservice and image repository businesses as an example. Before the service split:


After the service split:


The yellow blocks are independent external HTTP services. As you can see, the HTTP services directly callable by users went from 3 before the split to 7 after it, and this covers only the backbone logic of the Container Service and the image repository; the Container Service team's project has by now been split into 30+ independent microservices. Splitting makes the service boundaries of different businesses clearer, and upgrading one service no longer affects other businesses. For testers, however, the number of interfaces grows multiplicatively, even though the logic does not become more complicated after the split.

For example, before the split, the create-image-repository interface existed only in the Web module, so automated and manual coverage of that interface on the Web module was all that was needed. After the split, the Web module retains only the proxy-forwarding and flow-control logic, while the real business logic has moved to the Nce-repo module and is invoked by the Web service as an HTTP interface. The create-image-repository interface is therefore now exposed in two HTTP dimensions, and the same holds for the other service splits. Automated coverage consequently has to span multiple service tiers: interfaces that call the Nce-repo module directly focus on the business and logic dimensions, while interfaces called through the Web layer emphasize end-to-end scenarios. The HTTP interfaces now number in the thousands, and there are also SDK packages and standalone tools and projects to test. Automation is especially important, because purely manual testing and regression would be a huge workload.

Tiered automation mode:

The Container Service project has put scheduled, tiered interface-dimension testing into practice, ensuring that services at every tier are covered in both the interface and logic dimensions. Interface coverage is extensive, now spanning all HTTP service modules (Web, OpenAPI, API, repo, build, Internal, user), and runs once a day on a schedule.

Automation strategy: interface coverage, parameter checking, and the like can be completed in sync with development within the iteration by relying on the interface documentation, which guarantees regression efficiency for the staging environment, the online environment, and subsequent iterations. Testing of logic-heavy and complex interfaces is supplemented by priority, according to each interface's stability and importance.

Scheduled interface testing of the HTTP services is covered with the Jenkins job + TestNG approach; after the microservice split, scheduled interface tests for the following four independent services were added to the test jobs.
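Purely as an illustration, one of these scheduled interface tests might look like the minimal TestNG sketch below; the host, endpoint, and auth header are hypothetical.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

import org.testng.Assert;
import org.testng.annotations.Test;

public class RepoInterfaceTest {

    // Base address of the module under test (hypothetical)
    private static final String BASE = "http://test-env.example.com/api/v1";

    @Test
    public void listRepositoriesShouldReturn200() throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(BASE + "/repositories").openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Authorization", "Bearer test-token"); // hypothetical
        Assert.assertEquals(conn.getResponseCode(), 200,
                "repository list interface should answer HTTP 200");
    }
}
```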


End-to-end automation mode:

We have also put scheduled end-to-end business testing into practice online, ensuring that the high-frequency backbone end-to-end scenarios triggered by users are covered and monitored in time. Each scenario runs once per hour (10+ backbone scenarios are currently deployed online), so problems that online users might encounter can be found quickly.

End-to-end business testing monitoring the health of the current online environment:


Microservice layered-testing response: strengthening mocks, independent service-logic testing, and special testing for horizontal business microservices

Tiered automation and end-to-end automation bring significant efficiency for vertical businesses, but what about horizontal businesses such as the G0 and G1 gateways, which provide no logical interfaces of their own? How can they be tested effectively and in a targeted way?

Take the G0 gateway as an example. Its main functions and its position in the project are as follows:


Interface-based testing alone cannot cover the G0 gateway's own logic at all, and the logic of the proxied interfaces themselves is outside the Container Service's scope, making that approach inefficient. For this kind of microservice, testers mock the backend services and strengthen independent service-logic testing to achieve coverage. The G0 gateway after mocking looks like this:


The G0 gateway's authentication, flow control, and auditing functions are covered by simulating different authentication methods, parameter combinations, and scenario combinations, by putting pressure on mock interfaces to trigger flow control, and by log retrieval. Mocking away most dependent external services in a standalone environment, then simulating and automating the test scenarios, is a prerequisite skill for testers under a microservice architecture. It effectively eliminates problems caused by abnormal or unstable dependencies and lets testing focus on the logic and stability of the G0 gateway service itself.
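The team's mock platform is self-developed, so the sketch below only illustrates the idea with the open-source WireMock library: stub the backend service the gateway forwards to, then drive the gateway's own authentication and flow-control logic against the stub. The port and path are hypothetical.

```java
import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

public class BackendMock {
    public static void main(String[] args) {
        // Stand-in for the real backend service behind the G0 gateway
        WireMockServer backend = new WireMockServer(8089);
        backend.start();

        // Always answer the proxied call successfully, so tests can focus
        // on the gateway's own authentication, flow control, and auditing
        backend.stubFor(get(urlEqualTo("/v1/instances"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"instances\":[]}")));

        // Point the gateway's backend address at localhost:8089, then run
        // the auth / flow-control / audit scenarios against the gateway
    }
}
```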

Beyond basic functional and logical coverage, testing depth is also a hard requirement for microservice projects. The Container Service project testing team's exploration of test depth consists of a variety of tools, platform applications, and concrete testing practices.

The logical complexity of a microservice project means the test team cannot meet the requirements by focusing solely on basic functionality and logic; it must also verify in extreme cases that the collaboration among multiple microservices stays smooth, for example in scenarios with heavy request traffic, network anomalies between services, or a dependent service returning abnormal values. All of these require scenario analysis and coverage in special testing. In the Container Service project, several special anomaly tests as well as stability and performance tests have landed, and during testing many valuable scenarios and potential risks in abnormal branches were uncovered. A number of tools and platforms were also introduced, such as performance-testing tools, load-testing tools and platforms, anomaly-construction tools and platforms, mock platforms, and more.

With microservices, layered testing raises the bar for automated coverage, and special testing requires more elaborate means, so automation becomes more complex; without better methods, test execution and automation would inevitably take up ever more time. In the Container Service project, testers carried out a series of efficiency measures to offset the workload added by the increased depth and breadth of testing.

Microservice efficiency measures: TC-associated interface automation and shifting automation forward

As mentioned above, service splitting and interface layering multiply the number of interfaces, so automation is essential. Previously, automation was often written only after manual test execution had begun, sometimes even after the interfaces and features had gone live. Since the automation code did not yet exist before release, manual tests had to be executed repeatedly across environments, which was inefficient. The Container Service project team therefore introduced shifting automation forward and associating automation with the TC platform.

Shifting automation forward means that before development hands off for testing, testers write the automation against the interface documentation and design documents. The premise is that the interface documentation does not change, which has become the norm under the new OpenAPI model, where the interfaces are fixed from the requirements stage and rarely change; this in turn helps the forward shift land. Once testers have written most of the automation code in advance, after handoff they need only a little time to debug it through, supplemented by a small amount of manual testing, to finish testing these interfaces and features. Once the code meets the quality bar, the automation's configuration files can be quickly adapted to the staging and online environments for rapid regression. This effectively reduces the time spent on repeated manual regression.

TC-associated automation means the following. All of our execution sets are created and managed on the TC platform, yet even after automated tests ran online, testers still had to manually flip the corresponding test cases to the passed state, duplicating effort. With TC association, the ID of the TC case is added to the metadata of each automation case, linking TC test cases to automation code instances; an automated regression can then be triggered from TC, and the pass/fail state of each test case is backfilled at the same time. This saves the manual steps of looking up the correspondence between test cases and automation and setting their states.
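The TC platform's API is internal, so the following TestNG listener is a hypothetical sketch of the shape of the mechanism: the TC case ID rides along in the test's metadata, and the listener backfills the case state after each run.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.testng.ITestResult;
import org.testng.TestListenerAdapter;

// Hypothetical annotation carrying the TC platform's case ID
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface TcCase {
    String id();
}

public class TcBackfillListener extends TestListenerAdapter {

    @Override
    public void onTestSuccess(ITestResult tr) {
        report(tr, "PASSED");
    }

    @Override
    public void onTestFailure(ITestResult tr) {
        report(tr, "FAILED");
    }

    private void report(ITestResult tr, String state) {
        // Read the TC case ID from the test method's metadata
        TcCase tc = tr.getMethod().getConstructorOrMethod()
                      .getMethod().getAnnotation(TcCase.class);
        if (tc != null) {
            // A real listener would POST to the TC platform's state API here
            System.out.printf("TC case %s -> %s%n", tc.id(), state);
        }
    }
}
```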

For the new-computing requirement in the Container Service project, most interfaces have been automated; the execution and the backfill effect after TC association look like this:



Interface automation and logic automation can raise efficiency in most cases, but a large portion of the Container Service project's testing scope is still the UI, such as the microservice and image repository pages that testers must routinely regress. How is efficiency solved for this part?

Microservice efficiency measures: front-end automated recording for regression

The Container Service project testing team introduced the UIRecorder component for the front-end automated recording process. Through training and collaborative research, the recording method was handed over to front-end developers, so that when styles or logic change, the affected scripts can be quickly identified and re-recorded by the front end.

Testers combine the recording scripts provided by front-end developers with Jenkins jobs to regress the front-end test cases of each environment with one click. At present, a very large number of the Container Service's front-end scenarios have been recorded as scripts and integrated into Jenkins jobs, freeing testers from a great deal of regression work.
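UIRecorder is NetEase's open-source recording tool; its workflow is roughly the following (the spec file name is hypothetical, and the exact commands may vary by version):

```sh
npm install -g uirecorder mocha        # install the recorder and test runner
uirecorder init                        # one-time project configuration
uirecorder start spec/login.spec.js    # record a scenario in the browser

# The generated spec is a mocha/WebDriver script that a Jenkins job can
# replay against any environment:
source run.sh spec/login.spec.js
```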


Microservice efficiency measures: introducing a mock platform

The Container Service project testing team introduced a self-developed mock platform for the platform's foundation services; in horizontal-business microservice testing, the mock platform is used to quickly mock away dependent services and complete testing and automation of the microservice's own logic. In practice, constructing scenarios with the mock platform in the Container Service project has saved about 80% of the time compared with mock-server tools such as Moco, greatly improving execution efficiency in exception-focused special testing and horizontal-business microservice test execution.

Externally dependent interfaces currently stubbed on the mock platform:


Summary

This article is "NetEase container cloud Platform of micro-service practice," one of the series of articles, a total of three aspects of the current container service testing team to do a series of practical work, about the container deployment model, the model, container continuous integration and other activities have been in some modules and services have already landed practice, It also solves the problem that the quality and efficiency of micro-service testing team is facing the challenge. In the subsequent testing work, we need to further solidify the micro-service container-testing process, quality metadata archiving and other steps, and promote to other MicroServices Architecture project team. Some practical activities introduced in the test to improve the effectiveness of the means of improving the effectiveness of the follow-up need to dig some of the ideas and practice of enhanced testing exploration, so as to ensure the quality of the project and build team influence to make an important contribution.


NetEase Cloud Container Service provides users with serverless containers, allowing enterprises to deploy business quickly and operate services with ease. The Container Service supports features such as elastic scaling, vertical expansion, grayscale upgrades, service discovery, service orchestration, error recovery, and performance monitoring.


Background reading: The Practice of Microservices on the NetEase Container Cloud Platform

This article is from the NetEase Cloud community, published with the authorization of the author, Chi Xiaoqing.

Original link: NetEase Cloud Container Service Microservice Practice: Microservice Testing and the Full Image-Based Test Handoff Process

For more sharing of NetEase's R&D, product, and operations experience, please visit the NetEase Cloud community.
