Web Service Automation Testing at Baidu: Experience at the HTTP Level


Objective

Before settling on this approach, we tried several methods for testing Web services, such as using SOAPUI and programming against generated local proxy classes. The former is difficult to automate; the latter tests at the SOAP protocol level, where organizing the data is inconvenient, the test program is relatively complex, and extensibility is poor.

Most SOAP services use the HTTP binding, and we found that sending packets directly to the Web service server over HTTP can be made interface-independent: for different interfaces, the HTTP requests differ only in the packet content and the destination URL. This turns the work of modifying the test program into the work of constructing data, which reduces test preparation and execution time when the amount of data is small, and also makes it easier to integrate the tests into a continuous integration environment.

Web Service automation testing at the SOAP level

Background introduction

A Web service is a self-contained, self-describing, modular application that can be published, located, and called over the web. A Web service is published through a WSDL (Web Services Description Language) file.

SOAP (Simple Object Access Protocol) defines a cross-platform communication protocol for distributed systems. SOAP needs to be bound to a lower-level transport protocol (such as HTTP, RMI, or JMS); the most common binding is HTTP. Most Web service implementations are based on SOAP, so Web services are usually tested at the SOAP protocol level. At the SOAP level, the packet format for each request is fixed, and different interfaces use packets with different formats, so different interfaces require different data preparation and test execution methods.

The existing SOAP-level approach to Web service automation testing usually uses a tool to generate local proxy classes and then automates the programmatic invocation of those proxy classes. With this method the test data is simple to construct, but the test program is somewhat complicated and extensibility is poor.

Data organization

For automated testing at the SOAP level, data organization means preparing the parameters and return values of the local proxy classes.

1. Input data

To get a clearer picture of the input data, here are two screenshots from SOAPUI:

Figure 1: Interface one input (SOAPUI screenshot)

Figure 2: Interface two input (SOAPUI screenshot)

These two images show the input data at the SOAP level. AuthHeader is the header of the SOAP packet, a data structure common to all interfaces; GetAccountInfoRequest and GetChangedIDRequest are the bodies of the SOAP packets, each with different fields. The type and startTime fields are the data that really need to be prepared for each interface.

2. Expected output

Similarly, here are two SOAPUI screenshots of the expected output:

Figure 3: Interface one output (SOAPUI screenshot)

Figure 4: Interface two output (SOAPUI screenshot)

The ResHeader in the output is the header of the SOAP packet, which is common output for all interfaces. GetAccountInfoResponse and GetChangedIDResponse are the bodies of the SOAP packets, and their fields are the expected result data that really needs to be prepared.

3. Analysis

As the example above shows, with SOAP-level data organization the data to prepare consists of the input and output field values of the local proxy classes, so data construction does not need to consider the format of the SOAP packet and is intuitive and simple. However, the number of fields, the field names, and the field types differ across interfaces, so reading the data is more cumbersome: each interface needs its own data reader program.

Test program

The test program consists of the local proxy classes and the program that invokes them. Generating the proxy classes is usually aided by tools such as Java's Axis2. The program that calls the local proxy classes is different for each interface, because each interface requires calling a different function; this is cumbersome when there are many interfaces, and any change to an interface requires both the proxy classes and the test program to be modified.
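As a rough illustration of this proxy-class approach, the sketch below shows what a test of one interface might look like against an Axis2-generated stub. The class and method names (GetAccountInfoServiceStub, getAccountInfo, and the request/response beans) are hypothetical stand-ins for whatever the code generator actually produces; the point is only that each interface needs its own calling code.

    // Hypothetical sketch of a SOAP-level test against a generated proxy class.
    // Class and method names are assumptions; the code generator derives them from the WSDL.
    public class GetAccountInfoSoapLevelTest {
        public static void main(String[] args) throws Exception {
            // The generated stub wraps SOAP construction and HTTP transport.
            GetAccountInfoServiceStub stub =
                    new GetAccountInfoServiceStub("http://example.com/dr-api/accountService");

            // Input data: only the field values need to be prepared.
            GetAccountInfoRequest request = new GetAccountInfoRequest();
            request.setType(1);

            GetAccountInfoResponse response = stub.getAccountInfo(request);

            // Result comparison: expected output is compared field by field.
            if (response.getAccountId() != 12345L) {
                throw new AssertionError("unexpected accountId: " + response.getAccountId());
            }
        }
    }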

Results comparison

The result here is the return value of the local proxy function. Since both the input and the expected output have already been read into memory, the comparison is simple: compare the expected output and the actual output field by field.

Web Service Automation testing at the HTTP level

The steps of an HTTP-level Web service test are simple: prepare the data, send the request packet over HTTP, receive the response over HTTP, parse the response, and verify the results.

Data organization

The data at the HTTP level is the complete SOAP packet (since SOAP messages are XML-based, this is actually an XML packet).

1. Input data

For example, for the previous two interfaces, the corresponding input XML packets are:

Figure 5: Interface one input XML packet

Figure 6: Interface two input XML packet
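Since the original screenshots are not reproduced here, the following is a hedged sketch of what the input XML packet for interface one might look like, based only on the fields mentioned earlier (an AuthHeader with username/password/token and a GetAccountInfoRequest body with a type field); the actual namespaces and element names depend on the service's WSDL.

    <!-- Illustrative only: element names and namespaces are assumptions -->
    <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
                       xmlns:api="http://dr-api.example.com/">
      <SOAP-ENV:Header>
        <api:AuthHeader>
          <api:username>testuser</api:username>
          <api:password>******</api:password>
          <api:token>abcdef</api:token>
        </api:AuthHeader>
      </SOAP-ENV:Header>
      <SOAP-ENV:Body>
        <api:GetAccountInfoRequest>
          <api:type>1</api:type>
        </api:GetAccountInfoRequest>
      </SOAP-ENV:Body>
    </SOAP-ENV:Envelope>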

2. Expected output

The expected output also consists of two XML packets with different content; no example is shown here.

3. Analysis

As the above example shows, the data at the HTTP level is an XML packet. Compared with data construction at the SOAP level, knowing only a few field values is not enough to produce an XML packet; you have to handle the packet format yourself. The advantage is that reading the data is simple: just read the contents of the entire XML file into memory. It is also convenient for constructing deliberately malformed input.

Test program

The HTTP-level test program is simple: construct an HTTP request whose body is the input SOAP packet, send it to the Web service server, receive the response, and read the HTTP response body (which is a SOAP packet). As this process shows, the test procedure is the same for all interfaces, so adding a new interface or changing an existing one does not affect the test program.
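A minimal sketch of such an interface-independent test program is shown below, using only java.net.HttpURLConnection from the standard library. The endpoint URL, the SOAPAction value, and the input file name are placeholders; the same code serves every interface because only the packet content and the URL change.

    // Minimal sketch: POST the input SOAP packet over HTTP and read the response packet.
    // URL, SOAPAction value, and file name are placeholders.
    import java.io.*;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;

    public class HttpLevelSoapClient {
        public static String call(String serviceUrl, String soapAction, Path inputXml)
                throws IOException {
            byte[] requestBody = Files.readAllBytes(inputXml);

            HttpURLConnection conn = (HttpURLConnection) new URL(serviceUrl).openConnection();
            conn.setDoOutput(true);
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
            conn.setRequestProperty("SOAPAction", soapAction);

            try (OutputStream out = conn.getOutputStream()) {
                out.write(requestBody);            // the HTTP body is the complete SOAP packet
            }

            // Read the HTTP response body, which is itself a SOAP (XML) packet.
            InputStream in = conn.getResponseCode() < 400
                    ? conn.getInputStream() : conn.getErrorStream();
            try (ByteArrayOutputStream buf = new ByteArrayOutputStream()) {
                in.transferTo(buf);
                return buf.toString(StandardCharsets.UTF_8);
            }
        }

        public static void main(String[] args) throws IOException {
            String response = call("http://example.com/dr-api/accountService",
                    "", Paths.get("data/accountService/getAccountInfo.input.xml"));
            System.out.println(response);
        }
    }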

Results comparison

At the HTTP level, both the expected output and the actual output are XML packets, so the result comparison can simply read both into strings and compare the strings.

Improvement Ideas

According to the above analysis, the advantages of HTTP-level Web service testing are that the data format is flexible and easy to read, and that the test program is extensible and applies to all interfaces. But the disadvantages are also obvious: data construction is troublesome and prone to format errors, and the result comparison is mechanical, making it impossible to selectively compare certain fields (sometimes the returned results contain time-related values that differ on every run, and these fields need to be ignored when comparing results). I made improvements addressing these two shortcomings:

Improved data construction

The difficulty with data construction is that you cannot build an XML packet in the required format from just a few field values.

First we analyze the input data. Look again at the inputs of the two interfaces given earlier:

Figure 7: Interface one input analysis

Figure 8: Interface two input analysis

It can be seen that the two XML packets share a common part, namely the part framed in red in the figures. This part is the header of the SOAP packet. A standard SOAP packet has the following format:

Figure 9: SOAP packet format analysis

The input header usually contains user authentication information; the header format is the same for all interfaces, while the body carries the real data and differs for each interface. Usually the interfaces share only a small number of distinct headers (typically just one or a few users), so we can extract the header into a shared structure. Taking DR-API as an example, the input header mainly contains three fields, username, password, and token (plus an optional target field), so we only need to prepare a few sets of username/password/token.

Because the input body is interface-specific, there is no unified way to construct it. But we can build it manually with SOAPUI, which is more convenient than writing the XML packet by hand. Refer to Figure 1, which shows the "Form" tab of the SOAPUI interface. Under this tab, enter the field values you want to fill into the body, then switch to the "XML" tab, and SOAPUI will display the corresponding XML packet to be sent. The body content is copied from there and saved in XXX.input.xml.
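A hedged sketch of the assembly step is shown below: the shared header fields (username/password/token) are filled into a header template and wrapped around the body read from XXX.input.xml. The template and element names are assumptions; the real ones come from the service's WSDL.

    // Sketch: wrap the per-case body (XXX.input.xml) in an envelope with a shared AuthHeader.
    // Namespaces and element names are illustrative assumptions.
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;

    public class EnvelopeAssembler {
        private static final String ENVELOPE_TEMPLATE =
                "<SOAP-ENV:Envelope xmlns:SOAP-ENV=\"http://schemas.xmlsoap.org/soap/envelope/\">"
              + "<SOAP-ENV:Header><AuthHeader>"
              + "<username>%s</username><password>%s</password><token>%s</token>"
              + "</AuthHeader></SOAP-ENV:Header>"
              + "<SOAP-ENV:Body>%s</SOAP-ENV:Body>"
              + "</SOAP-ENV:Envelope>";

        public static String assemble(String username, String password, String token,
                                      Path inputBodyFile) throws java.io.IOException {
            String body = new String(Files.readAllBytes(inputBodyFile), StandardCharsets.UTF_8);
            return String.format(ENVELOPE_TEMPLATE, username, password, token, body);
        }
    }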

The output header usually contains interface-independent information such as processing time, operation counts, and error codes (at least this holds for the Baidu promotion API and the Google AdWords API). So we can also extract the output header and, when preparing the data, only need to supply a few field values. Because the names and types of these fields are the same across interfaces, it is easy for the program to read them.

Preparing the output body can also be aided by SOAPUI. However, unlike the input body, which can be constructed from the WSDL file alone, the output body has to be provided by the Web service: you use SOAPUI to send a request and capture the response. In practice, therefore, the output body is usually constructed directly by hand, without SOAPUI.

Comparison of improved results

A significant part of Web service testing is testing error codes (the Baidu Promotion API V2, for example, currently has about 300 error codes). Such a test does not need to examine the body of the returned packet, only the error code in the returned header, so we do not need to construct a full expected-output XML packet; we only need to provide an error code. More generally, whenever we only care about the information in the returned packet header, the step of constructing an expected-output XML packet can be omitted and the result comparison becomes quite simple.
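A minimal sketch of this header-only check follows, using the JDK's built-in DOM and XPath support; the element name status is a hypothetical stand-in for whatever field in the response header actually carries the error code.

    // Sketch: extract an error code from the response header and compare it with the expectation.
    // The XPath (local-name()='status') is an assumption about the header field name.
    import java.io.StringReader;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.*;
    import org.w3c.dom.Document;
    import org.xml.sax.InputSource;

    public class HeaderCodeCheck {
        public static boolean errorCodeMatches(String responseXml, String expectedCode)
                throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(responseXml)));

            XPath xpath = XPathFactory.newInstance().newXPath();
            // Namespace-agnostic lookup of the assumed error-code element in the header.
            String actualCode = xpath.evaluate("//*[local-name()='status']/text()", doc);
            return expectedCode.equals(actualCode.trim());
        }
    }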

If the test focuses on business data, that is, on the body of the returned packet, there is for now no better or more general approach than a string comparison.

Summary of improvements

The result of the improvement is that the data for each case consists of three files: Input.xml, Output.xml, and a properties configuration file. Input.xml contains the body of the input packet, Output.xml contains the body of the output packet, and the configuration file specifies the input and output headers, the URL corresponding to the case, and other information.
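As an illustration (the key names are assumptions, not the exact ones we used), a per-case properties file might look like this, naming the header to use, the service, and the expected header fields:

    # XXX.properties (illustrative; key names are assumptions)
    case.header=normal
    case.service=accountService
    # expected fields in the response header
    expect.header.status=0
    expect.header.desc=success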

Holistic approach

The general idea is as follows: read all the case data and generate a list of test cases (including the case data and the interface to call); for each case, send the packet over HTTP, parse the returned SOAP packet as XML, and verify the result; then use Ficus to perform data-driven testing with the case list as input, as shown below:

Figure 10: Overall testing approach

Design and implementation

Data organization

The data organization has two main goals: first, to be self-describing, so that the input, the expected output, and the request URL can all be derived from the data; second, to reduce redundant data, for example by extracting the shared SOAP header information. To achieve these two goals, we use a configuration file and a data folder. The configuration file mainly stores metadata and common data, and the data folder holds the real, per-case data. The contents of the configuration file are as follows:

Figure 11: Global configuration file

The configuration file is divided into five parts. Parts A, D, and E hold case-independent metadata, while B and C hold the common header data.

Part A contains the URL prefix for the entire Web service ("drapi.serviceurl" in the figure), the codes of the common SOAP headers ("drapi.headersgroup" in the figure), and the codes of all services ("drapi.servicesgroup").

Parts B and C are two common headers; normal and editor are their codes, which are listed in part A. The header information used by DR-API consists mainly of username/password/token/target, all of which are given here. These headers are referenced by each case.

Parts D and E are two services; their codes are listed in part A. Each service has three fields, of which name identifies the service and url gives the service's URL suffix.
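Since Figure 11 is not reproduced here, the following properties-style sketch reconstructs the structure just described; the concrete key names and values are assumptions, and only the five-part layout (A through E) follows the description.

    # Part A: case-independent metadata (assumed key names)
    drapi.serviceurl=http://example.com/dr-api/
    drapi.headersgroup=normal,editor
    drapi.servicesgroup=accountService,changedService

    # Part B: common header "normal"
    normal.username=testuser
    normal.password=******
    normal.token=abcdef
    normal.target=

    # Part C: common header "editor"
    editor.username=editoruser
    editor.password=******
    editor.token=uvwxyz
    editor.target=

    # Part D: service "accountService"
    accountService.name=accountService
    accountService.url=accountService

    # Part E: service "changedService"
    changedService.name=changedService
    changedService.url=changedService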

The data folder holds the data for each case. It can be further layered: each subdirectory is named after one of the services mentioned above, and all cases under that subdirectory access that service's URL. Each case consists of three files, XXX.input.xml, XXX.output.xml, and XXX.properties, where XXX is the name of the case (to work well with Ficus, XXX should preferably not contain spaces, otherwise Ficus will turn it into an array).

Ficus data-driven

Data driving uses Ficus's data_driven keyword (refer to the ficus_datadriven documentation). This keyword takes two main inputs: a CSV file and a keyword that executes a single case. Each line of the CSV file represents one case; data_driven invokes the single-case keyword once for each row of the CSV file.

We need to write two keywords for this. The first reads the configuration file and the data folder and generates the CSV file. It first reads the public configuration file to get the service list and the input header list, then creates a CSV file with these columns: URL, header name, case name, XXX.input.xml path, XXX.output.xml path, and the fields from the properties file. It reads each data file in the data folder and adds one case row to the CSV file for each XXX.input.xml/XXX.output.xml/XXX.properties combination.
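A hedged Java sketch of this first keyword is shown below. It assumes the per-case layout described above (service subdirectories containing XXX.input.xml / XXX.output.xml / XXX.properties) and emits one CSV row per case; the column order and property key are placeholders, not the exact ones we used.

    // Sketch: walk the data folder and write one CSV row per case
    // (columns: service name, header name, case name, input path, output path).
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.nio.file.*;
    import java.util.Properties;
    import java.util.stream.Stream;

    public class CaseListGenerator {
        public static void generate(Path dataDir, Path csvFile) throws IOException {
            try (PrintWriter csv = new PrintWriter(Files.newBufferedWriter(csvFile));
                 Stream<Path> cases = Files.walk(dataDir)) {
                cases.filter(p -> p.toString().endsWith(".properties"))
                     .forEach(p -> {
                         String caseName = p.getFileName().toString()
                                            .replace(".properties", "");
                         Path dir = p.getParent();
                         Properties cfg = new Properties();
                         try {
                             cfg.load(Files.newBufferedReader(p));
                         } catch (IOException e) {
                             throw new RuntimeException(e);
                         }
                         // The subdirectory name doubles as the service (URL suffix).
                         csv.println(String.join(",",
                                 dir.getFileName().toString(),
                                 cfg.getProperty("case.header", "normal"),
                                 caseName,
                                 dir.resolve(caseName + ".input.xml").toString(),
                                 dir.resolve(caseName + ".output.xml").toString()));
                     });
            }
        }
    }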

The second keyword executes a single case. Its input is one row of the CSV file. It first assembles the input XML packet from the header and XXX.input.xml, sends the request to the Web service server over HTTP, receives the HTTP response, and parses out the header and body. It then compares the body with XXX.output.xml and compares the fields in the header with the fields in XXX.properties.

Ficus calling the Java keyword

We write the keywords in Java; here is a brief explanation of how to call Java keywords from Ficus (thanks to Junrey for the help). The latest version of Ficus has a java directory containing a ficus-java-stub.bat file. You can start the Ficus Java stub by packaging your Java keywords into a jar, placing it in this directory, and then running that file. When writing a case, first add the library; to use the Java stub, you need to add Ficusproxy.javaproxy | XXX | YYY | localhost | 2345, where XXX and YYY are the names of the public classes in which the Java keywords are defined, and the public methods of these classes become Ficus keywords.
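A hedged sketch of such a keyword class is shown below: per the description above, any public method of the public class becomes a Ficus keyword once the jar is placed next to the Java stub. The method and parameter names are illustrative, not the actual ones.

    // Sketch of a public class whose public methods are exposed as Ficus keywords
    // (names and parameters are illustrative assumptions).
    public class WebServiceKeywords {

        /** Keyword 1: read the config file and data folder and generate the case CSV. */
        public String generateCaseList(String configFile, String dataDir, String csvFile) {
            // ... walk dataDir and write one CSV row per case (see the earlier sketch) ...
            return csvFile;
        }

        /** Keyword 2: execute a single case from one row of the CSV file. */
        public String runCase(String serviceUrl, String headerName, String caseName,
                              String inputXml, String outputXml) {
            // ... assemble the envelope, send it over HTTP, parse the response,
            //     then compare the header fields and the body against expectations ...
            return "PASS";
        }
    }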

The principle is as follows: Ficusproxy.javaproxy is a Python keyword; when Ficus encounters an unknown keyword, it queries javaproxy, and javaproxy sends the request to the Java stub over a socket. The Java stub loads the jar package containing the Java keywords, so it can use reflection to find a method whose name matches the keyword and invoke it.
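The reflection step on the Java stub side presumably looks something like the minimal sketch below (the real Ficus stub is more involved; this only illustrates finding and invoking a public method by keyword name). For example, dispatch(new WebServiceKeywords(), "runCase", row) would call the runCase keyword sketched above.

    // Minimal sketch of reflection-based keyword dispatch, as described above.
    import java.lang.reflect.Method;

    public class KeywordDispatcher {
        /** Find a public method whose name matches the keyword and invoke it. */
        public static Object dispatch(Object keywordObject, String keyword, String... args)
                throws Exception {
            for (Method m : keywordObject.getClass().getMethods()) {
                if (m.getName().equals(keyword) && m.getParameterCount() == args.length) {
                    return m.invoke(keywordObject, (Object[]) args);
                }
            }
            throw new NoSuchMethodException("no keyword named " + keyword);
        }
    }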

Tip: if a class contains many custom Java keywords (that is, many Java methods), it is better to increase the default value of the length parameter of the _receive function in BaseProxy in ficusproxy.py. This value is the amount of data sent per communication between Ficus and the Java stub; if it is too small, the data will be truncated, and if the part containing the keyword name and parameters is cut off, an error occurs.

Summary

The above is a summary of our experience testing Web services at the HTTP level. Because the bodies of the input and output SOAP packets have to be constructed manually, this method is best suited to scenarios where the amount of data is small or where the output packet body is not of interest. Thanks to the data-driven approach, it also copes easily with interface changes.

A follow-up consideration is to migrate to Eficus to integrate case management and execution.

