ICBC Data Center Senior Manager Li Yannan: An Interface Smoke Test Method
This year I ran into several problems with the functionality and performance of interfaces. As it happened, the company recently organized a smoke-test-themed activity, so I wondered whether I could combine interface testing with smoke testing and explore some new interface testing ideas and methods.
Ordinarily I pay little attention to interface testing. Most interface functionality is already covered by the functional test cases of the upstream application, and no test cases are written specifically for the interfaces. So when it came to implementation, I found we still lacked a precise definition of interface testing. Baidu Zhidao gives the following definition: interface testing is the testing of interfaces between system components. It is mainly used to check the interactions between external systems and between internal subsystems; the focus is on verifying data exchange, transfer and control/management processes, and the logical dependencies between systems. This is not far from our prior understanding. In short, Open Platform applications exchange messages and data with one another through interface services, so our testing focus is on the messages and on the exchange itself.
Design Concept:
The exchange side is the easier part. The common interface service types are mainly HTTP and SOCKET, and there are already many testing methods and frameworks for both; a Baidu search turns up plenty. For those of us who are not professional programmers, Python is the natural first choice: it provides basic libraries such as requests and httplib2 for sending and receiving packets. HTTP interfaces are further divided into POST and GET, and the requests library supports both methods. We use an interface registration form to record the type, address, and method of each interface; this information can be obtained from the configuration management system.
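As a minimal sketch of that registry-driven dispatch, a Python fragment might look like the following. The registry field names (`itype`, `url`, `method`) and the lazy `requests` import are my own illustration, not the article's actual schema:

```python
# Hypothetical interface registration form: each entry records the
# service type, address, and HTTP method, as described in the text.
REGISTRY = {
    "acct-query": {"itype": "http", "url": "http://127.0.0.1:8080/acct", "method": "get"},
    "acct-open":  {"itype": "http", "url": "http://127.0.0.1:8080/acct", "method": "post"},
}

def build_request(name, payload):
    """Look up an interface in the registry and return the call parameters."""
    entry = REGISTRY[name]
    if entry["itype"] != "http":
        raise ValueError("only HTTP is sketched here; SOCKET is handled separately")
    if entry["method"] == "get":
        return ("get", entry["url"], {"params": payload})
    return ("post", entry["url"], {"data": payload})

def send(name, payload, timeout=5):
    """Send one interface test message with requests (GET or POST)."""
    import requests  # imported lazily so the registry logic runs without it installed
    method, url, kwargs = build_request(name, payload)
    return getattr(requests, method)(url, timeout=timeout, **kwargs)
```

Keeping the registry as data means new interfaces can be added without touching the dispatch code, which matches the article's idea of feeding the form from the configuration management system.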
Messages can be thought of as the interface test cases themselves, and they are much more complex than the exchange problem. There are many factors to consider; we summarized the following four main problems:
1. Channels for obtaining messages;
2. Whether the messages can cover all program branches;
3. How to determine whether the returned results are correct;
4. Test execution efficiency.
Next I will introduce our solutions one by one:
1. How to obtain messages:
The traditional interface testing method relies mainly on manually editing interface packets: test packets are constructed according to the descriptions in the interface document. This is simple but inefficient. An upgraded version of the method generates test cases in batches by parameterizing keyword fields in the packets, which is also one of the main methods for interface performance testing. Although this solves the efficiency of packet construction, it does not solve coverage well: manually constructed packets cannot realistically reflect actual business transaction scenarios, and our actual test results confirmed this. We then realized that traditional interface testing is already covered by normal business transaction testing, so we simply capture the interface messages generated when a transaction is initiated upstream. Fortunately, most of the company's development departments record application transaction logs strictly in the LOG4J format, so we only need to parse the transaction logs according to certain rules to extract the content we need.
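A minimal sketch of that extraction step might look like this. The log layout, the `[IfaceMsg]` tag, and the JSON payload are illustrative assumptions, not the bank's actual LOG4J pattern:

```python
import json
import re

# Assumed LOG4J-style line: "2016-07-01 10:00:01 INFO [IfaceMsg] {json payload}"
LOG_PATTERN = re.compile(
    r"^(?P<ts>\S+ \S+) (?P<level>\w+) \[IfaceMsg\] (?P<body>\{.*\})$"
)

def extract_messages(lines):
    """Scan log lines and yield the interface messages embedded in them."""
    for line in lines:
        m = LOG_PATTERN.match(line)
        if m:
            yield json.loads(m.group("body"))

sample = [
    '2016-07-01 10:00:01 INFO [IfaceMsg] {"txn": "acct-query", "id": "42"}',
    "2016-07-01 10:00:02 INFO unrelated log line",
]
msgs = list(extract_messages(sample))
```

In practice each application would need its own pattern, which is why the article stresses that logs must follow a consistent format before this harvesting works.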
2. Whether the messages can cover all program branches:
Depending on the message content, the application takes different program logic branches. Traditionally this can only be verified through white-box testing, but acceptance testing is more focused on black-box or gray-box testing, so in the past, program defects on uncovered branches often slipped into production because the test cases were incomplete. Our current approach is to import the system's existing interface test cases and combine them with the cases captured from logs; after a period of accumulation, a complete interface test case library gradually takes shape. If we could also draw on the logs of an application server in the production environment, the effect would be even better, since production has the most comprehensive transaction types and scenarios. Of course, that requires solving problems such as desensitizing production data, and the financial industry still faces many institutional processes there.
3. How to determine whether the returned result is correct:
Each application's interface packets are designed to follow certain specifications and conventions, so we only need to identify the field that marks a transaction as successful. Some transactions do not contain this field, and for those we judge manually. We then format the volatile parts of a successful result (such as the timestamp and serial number) and extract an MD5 feature value, which serves as the baseline for judging the correctness of subsequent test results for that interface. However, a successful status field alone does not mean the interface test has passed: the returned result also contains many business data fields that need to be verified. If those field values change in a regular way (for example, they stay constant, or increase or decrease monotonically), we plan to define model rules to judge them; data that fluctuates is left for manual judgment. In fact, for a smoke test we do not think it is necessary to judge the correctness of every transaction. It is enough to compute the success rate over a large number of test cases and compare it with the earlier success rate to decide whether the test result is normal.
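The normalize-then-hash step can be sketched as follows. Which fields count as volatile is application-specific; the `VOLATILE_FIELDS` list here is my assumption:

```python
import hashlib
import json

# Fields whose values legitimately differ between runs (illustrative list).
VOLATILE_FIELDS = {"timestamp", "serial_no"}

def feature_md5(message):
    """Blank out volatile fields, then hash the canonicalized message."""
    cleaned = {k: ("*" if k in VOLATILE_FIELDS else v)
               for k, v in message.items()}
    canonical = json.dumps(cleaned, sort_keys=True)
    return hashlib.md5(canonical.encode("utf-8")).hexdigest()

# Two replies that differ only in timestamp and serial number...
a = {"status": "00", "balance": "100.00", "timestamp": "20160101120000", "serial_no": "A1"}
b = {"status": "00", "balance": "100.00", "timestamp": "20160102093000", "serial_no": "B7"}
# ...and one whose business data actually changed.
c = dict(a, balance="999.00")
```

Sorting the keys before hashing matters: otherwise two semantically identical messages serialized in different field orders would yield different feature values.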
4. Execution efficiency:
As we understand it, a smoke test must give a new version or test environment an entrance check in the shortest possible time, in order to determine whether it meets the conditions for subsequent acceptance and adaptability testing, so the efficiency of the smoke test is crucial. Our strategy is to scan the logs and process packets continuously with small asynchronous batch jobs, and to run the test cases daily in a scheduled, concurrent manner. The execution window depends on the version installation time or the needs of the test task; at present we have about 20 thousand test cases, and a run can be kept within 10 minutes.
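A concurrent run of that kind can be sketched with a thread pool. The case shape and the `run_case` stub are illustrative; a real runner would send each packet and compare feature values:

```python
from concurrent.futures import ThreadPoolExecutor

def run_case(case):
    """Stub for executing one interface case.
    A real implementation would send the packet and check the reply."""
    return case.get("expect_ok", True)

def run_suite(cases, workers=50):
    """Execute all cases concurrently and return the overall success rate."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(run_case, cases))
    return sum(results) / len(results)

# Toy suite: 98 passing cases and 2 failing ones.
cases = [{"expect_ok": True}] * 98 + [{"expect_ok": False}] * 2
rate = run_suite(cases)
```

Because interface cases are I/O-bound (waiting on the server), threads rather than processes are usually enough to keep a 20-thousand-case run within a tight window.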
Implementation Scheme:
The implementation architecture is very simple: an open-source ELK log collection stack, plus an interface testing framework developed in Python and a result statistics function. (The original article includes an architecture diagram here.)
The main steps are as follows:
1. Use the open-source ELK stack to collect and manage application logs. Deploy the Logstash agent on the client and configure the log collection policy; log records are sent to the Redis in-memory database in key-value format. This design mainly provides a buffer between the clients and the server. Elasticsearch provides full-text retrieval and an API service for external calls.
2. Use the Python pyes library to call Elasticsearch's API service and capture interface messages in XML and JSON formats based on feature fields.
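The article uses pyes for this step; as an equivalent sketch, the same query can be sent to Elasticsearch's REST `_search` endpoint with requests. The index name, field name, and local URL below are assumptions:

```python
def build_query(feature_field, feature_value, size=100):
    """Elasticsearch query DSL: match log records carrying the feature field."""
    return {
        "query": {"match": {feature_field: feature_value}},
        "size": size,
    }

def search_messages(es_url, index, feature_field, feature_value):
    """POST the query to the _search endpoint and return the source documents."""
    import requests  # imported lazily so build_query works without it installed
    resp = requests.post(
        "%s/%s/_search" % (es_url, index),
        json=build_query(feature_field, feature_value),
        timeout=10,
    )
    resp.raise_for_status()
    return [hit["_source"] for hit in resp.json()["hits"]["hits"]]

q = build_query("msg_type", "IfaceMsg")
# Example call (assumes a local Elasticsearch):
# docs = search_messages("http://127.0.0.1:9200", "applogs", "msg_type", "IfaceMsg")
```

Separating query construction from transport makes the query logic testable without a live cluster.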
3. Format the collected interface packets, normalizing fields such as date, serial number, and timestamp, and compute an MD5 checksum of the formatted packets.
4. Use Python's HTTP and socket libraries to implement the interface test cases. Some customization per application may be needed here, but we try to implement it in a general way.
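For the SOCKET side, here is a minimal sketch of sending one length-prefixed packet over TCP, exercised against a toy local echo server. The 8-byte ASCII length header is a common convention in bank host interfaces, but an assumption here, not the article's actual protocol:

```python
import socket
import threading

def _recv_exact(conn, n):
    """Read exactly n bytes from the socket."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed before %d bytes arrived" % n)
        buf += chunk
    return buf

def send_packet(host, port, payload, timeout=5):
    """Send one length-prefixed packet and read the length-prefixed reply."""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        body = payload.encode("utf-8")
        conn.sendall(b"%08d" % len(body) + body)  # assumed 8-byte length header
        header = _recv_exact(conn, 8)
        return _recv_exact(conn, int(header)).decode("utf-8")

def _echo_server(srv):
    """Toy server used to exercise the client: echoes the packet back."""
    conn, _ = srv.accept()
    with conn:
        header = _recv_exact(conn, 8)
        data = _recv_exact(conn, int(header))
        conn.sendall(b"%08d" % len(data) + data)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
threading.Thread(target=_echo_server, args=(srv,), daemon=True).start()
reply = send_packet("127.0.0.1", srv.getsockname()[1], '{"txn":"ping"}')
srv.close()
```

The `_recv_exact` loop matters: TCP gives no message boundaries, so a single `recv` may return a partial packet.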
5. Automatically retire abnormal test cases. To keep the case set usable, we implemented a simple retirement rule: an interface case that has been executed more than three times and has failed every time is automatically marked as invalid by the system.
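That retirement rule can be sketched as a small bookkeeping class; the class name and storage shape are my own illustration:

```python
class CaseBook:
    """Tracks executions per case and retires always-failing ones.
    The more-than-three-runs threshold follows the rule described in the text."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.stats = {}  # case id -> (runs, failures)

    def record(self, case_id, ok):
        """Record one execution of a case and whether it passed."""
        runs, fails = self.stats.get(case_id, (0, 0))
        self.stats[case_id] = (runs + 1, fails + (0 if ok else 1))

    def is_invalid(self, case_id):
        """Invalid: executed more than `threshold` times and failed every time."""
        runs, fails = self.stats.get(case_id, (0, 0))
        return runs > self.threshold and fails == runs
```

Requiring that *every* run failed (rather than most) keeps flaky-but-real cases in the suite; only cases that can never pass are retired.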
6. Perform success-rate analysis and error-attribution analysis on the execution results to discover interface problems. We do not focus on the pass or fail of each individual test case; instead, we compute the success rate, failure rate, and error types for each type of interface, and identify problems from changes in those values and counts.
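The statistics step can be sketched like this; the result-tuple shape and the 5% drift tolerance are illustrative assumptions:

```python
from collections import Counter, defaultdict

def summarize(results):
    """results: iterable of (interface, ok, error_type_or_None).
    Returns per-interface success rates and error-type counts."""
    totals = defaultdict(lambda: [0, 0])  # interface -> [runs, successes]
    errors = defaultdict(Counter)         # interface -> error-type counter
    for iface, ok, err in results:
        totals[iface][0] += 1
        if ok:
            totals[iface][1] += 1
        else:
            errors[iface][err] += 1
    rates = {i: s / n for i, (n, s) in totals.items()}
    return rates, errors

def drifted(rates, baseline, tolerance=0.05):
    """Interfaces whose success rate fell more than `tolerance` below baseline."""
    return [i for i, r in rates.items() if baseline.get(i, 1.0) - r > tolerance]

results = ([("acct", True, None)] * 9 + [("acct", False, "timeout")]
           + [("pay", True, None)] * 4 + [("pay", False, "bad-status")] * 6)
rates, errors = summarize(results)
flagged = drifted(rates, {"acct": 0.92, "pay": 0.95})
```

Comparing against a baseline rather than judging each case individually is exactly the smoke-test shortcut described above: a large drop in an interface's rate, or a new dominant error type, is what signals a problem.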
7. The interface definition platform provides a web-based interface definition module that helps business testers edit interface elements according to the interface documentation and assemble them into interface packets for testing. For complex transaction scenarios (such as long processes or many interactions), testers can orchestrate the call sequence of the interfaces and the logical dependencies between steps on the platform, enabling interface testing in complex scenarios. Although this function leans toward automated testing, it helps us implement interface tests that the application's upstream functional tests cannot cover, which is a good supplement.
Using the above method, we tested three applications in a week, covering over 30 interfaces and nearly 20 thousand packet cases, with a case validity rate of 97%. Through daily automated testing of these cases, we found a number of interface function and application environment configuration problems.
The above describes the method from a technical perspective only. To meet actual business testing needs, we also implemented some simple supporting functions: multi-dimensional test result statistics; retrieval of message cases and test results by business keywords, so that business testers can quickly find their own test cases; manual editing of the message case library, so that testers can skip the application front end and test an interface directly; and, finally, recording of each execution time to form a response-time baseline for each packet case, used for subsequent interface performance evaluation.
Summary and problems:
The method above is a very simple interface smoke test method, and it presupposes that functional testing covers the interface cases and that the interface messages are recorded in the logs. As cases and execution results accumulate, interface test coverage will become more adequate and the statistics more accurate; if cases can also be obtained from production environment logs, the test will be better still. Many parts are still immature, such as how to make use of the test results and how to classify and attribute failed packets. If the method is to be fully rolled out, test efficiency, especially the efficiency of extracting and analyzing test packets, needs further improvement.
Comments and criticism are welcome.
Link: http://blog.tingyun.com/web/article/detail/1340