Interface Testing Overview
API testing is a type of software testing that exercises application programming interfaces (APIs) directly, and as part of integration testing, to determine whether they meet expectations for functionality, reliability, performance, and security. Since APIs lack a GUI, API testing is performed at the message layer. [2] API testing is now considered critical for test automation because APIs serve as the primary interface to application logic, and because GUI tests are difficult to maintain under the short release cycles and frequent changes common in Agile software development and DevOps.
Classification
Interface testing tests the interfaces between the components of a system. It is mainly used to check the interaction points between the system and external systems, as well as between its subsystems. The focus is on verifying the exchange of data, the transfer and control management processes, and the logical dependencies between systems. Interface testing is broadly divided into two categories: module interface testing and web interface testing.
Module Interface Test
Module interface testing is the basis of unit testing. It mainly tests a module's calls and return values, and often requires writing stub modules and driver modules.
The main test points are as follows:
Check that the data returned by the interface is consistent with the expected results.
Check the fault tolerance of the interface: can it cope when data of the wrong type is passed in?
Check the boundary values of interface parameters: for example, can the interface handle a very large or a negative parameter correctly?
Check the performance of the interface: the time the interface takes to process data is also worth testing, and it involves the internal algorithms and code optimization.
Check the security of the interface.
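To make the fault-tolerance and boundary-value points concrete, here is a minimal sketch; the validateAmount function and its limits are invented for illustration, not taken from any real interface:

```java
// Sketch of boundary-value testing for a hypothetical module interface.
// validateAmount and MAX_AMOUNT are invented examples, not a real API.
public class BoundaryTest {
    static final long MAX_AMOUNT = 1_000_000L;

    // Returns true if the amount is acceptable to the interface.
    static boolean validateAmount(long amount) {
        return amount > 0 && amount <= MAX_AMOUNT;
    }

    // Run with: java -ea BoundaryTest
    public static void main(String[] args) {
        // Boundary values: just below, at, and just above each limit.
        assert !validateAmount(0);              // lower boundary (invalid)
        assert validateAmount(1);               // just inside
        assert validateAmount(MAX_AMOUNT);      // upper boundary (valid)
        assert !validateAmount(MAX_AMOUNT + 1); // just outside
        assert !validateAmount(-5);             // negative input
        System.out.println("all boundary checks passed");
    }
}
```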
Web Interface Testing
Web interface testing can be divided into two categories: server interface testing and external interface testing.
Server interface testing: tests the interface between the browser and the server. The user enters data on the front-end page; how does that data get to the back end? Through GET and POST requests of the HTTP protocol, which carry the data between front end and back end. Testing this can also be considered interface testing.
External interface testing: a typical example is third-party payment. When our application needs to charge the user, it calls a third-party payment interface.
The main test points are as follows:
Check the status code: a successful request returns 200 by default, and an erroneous request should return 404, 500, and so on.
Check the correctness and format of the returned data; JSON is a very common format.
Check the security of the interface: a web interface is generally not exposed for arbitrary online calls and needs restrictions such as authentication or authorization.
Check the performance of the interface, which directly affects the user experience.
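A small sketch of the status-code check above: rather than calling a real service, it starts a throwaway local HTTP server (the JDK's built-in com.sun.net.httpserver) that answers 200 on /ok and 404 elsewhere, then verifies both codes with a plain HttpURLConnection client:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

// Minimal status-code check against a throwaway local server.
public class StatusCodeCheck {
    // Issue a GET and return only the HTTP status code.
    static int get(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");
        int code = conn.getResponseCode();
        conn.disconnect();
        return code;
    }

    public static void main(String[] args) throws IOException {
        // Local server: 200 for /ok, 404 for any other path.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", ex -> {
            int code = ex.getRequestURI().getPath().equals("/ok") ? 200 : 404;
            ex.sendResponseHeaders(code, -1); // -1 = no response body
            ex.close();
        });
        server.start();
        int port = server.getAddress().getPort();
        try {
            System.out.println(get("http://localhost:" + port + "/ok"));      // 200
            System.out.println(get("http://localhost:" + port + "/missing")); // 404
        } finally {
            server.stop(0);
        }
    }
}
```

In a real run the URLs would of course point at the service under test, not a local stub.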
Interface Test Tools
SoapUI
JMeter
Grinder
Suds (Python)
SoapUI and JMeter are the ones I mainly use at work. SoapUI has better support for interface security testing. This article is mainly about using JMeter, with the focus on functional testing; its strength, performance testing, will be covered in a future article.
Test Case design and principles
Because the interfaces I test in actual work are all based on the HTTP protocol, the test cases and principles below target such interfaces.
Test Cases
Positive test cases:
Cover all required parameters
Combine optional parameters
Test parameter boundary values
If a parameter's value range is an enumeration, cover all enumerated values
You should also design combinations of input parameters around the actual business scenarios. (These cases can double as smoke tests of the functionality, and can later be reused to simulate real business scenarios for stress testing. Be careful to keep the cases independent, because stress tests run multithreaded. For example, when testing an account-creation interface where names must be unique, the test case can build the name from a timestamp, so the case also works under multi-threaded concurrency.)
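A minimal sketch of the timestamp idea above. Note that a bare timestamp alone can still collide when many threads run in the same millisecond, so this sketch (my addition, not from the original) combines it with an atomic counter:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Keeping test data independent under concurrency: each generated account
// name combines a timestamp with an atomic counter, so names never collide
// even when many virtual users run the same case at once.
public class UniqueNames {
    static final AtomicLong seq = new AtomicLong();

    static String accountName() {
        return "acct_" + System.currentTimeMillis() + "_" + seq.incrementAndGet();
    }

    public static void main(String[] args) throws Exception {
        int threads = 20, perThread = 100;
        Set<String> names = ConcurrentHashMap.newKeySet();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < perThread; i++) names.add(accountName());
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // Every generated name must be distinct.
        System.out.println(names.size() == 20 * 100); // true
    }
}
```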
Negative test Cases:
Verification point:
Status code (under normal circumstances, all requests should return 200)
Data structure of the response (most of the time the response is JSON; we should verify that its structure is as expected whenever the data changes)
Verify the type of node
Verify the value of the node (mainly for fixed values, or values that follow certain rules, so the expected result is known)
For lists, also verify that the length of the list matches the value expected from the request parameters.
For negative test cases, verify that the error information matches what actually happened.
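The verification points above can be sketched like this, assuming the JSON response has already been parsed into Java collections (a Map for objects, a List for arrays); the response content below is fabricated purely for illustration:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Verifying node types, node values, and list length on a parsed response.
// The "response" is a hand-built Map standing in for parsed JSON.
public class ResponseChecks {
    public static void main(String[] args) {
        Map<String, Object> resp = new HashMap<>();
        resp.put("code", 200);
        resp.put("name", "acct_001");
        resp.put("items", Arrays.asList("a", "b", "c"));

        // Node type: "code" should be a number, "name" a string.
        check(resp.get("code") instanceof Integer, "code is a number");
        check(resp.get("name") instanceof String, "name is a string");
        // Node value: fixed or rule-based values can be compared directly.
        check(resp.get("code").equals(200), "code equals 200");
        // List length: should match what the request parameters imply.
        check(((List<?>) resp.get("items")).size() == 3, "3 items expected");
        System.out.println("all checks passed");
    }

    static void check(boolean cond, String msg) {
        if (!cond) throw new AssertionError(msg);
    }
}
```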
Testing principles
Tests should be self-contained, readable, repeatable, and maintainable; these are principles that all automated tests should follow.
Each test case is independent
Test cases are repeatable. (This means test data cannot be hard-coded, since the data may differ between environments. In actual work there are two solutions. One is to create the required data yourself: for example, if the interface under test takes an AccountId parameter, first call the Create Account API, extract the AccountId from its response, run the test, and finally delete the new account, so one test case has three steps. The other is to read the data from the database; this works in the dev and test environments, but is harder in production, where we cannot freely update data or leave much test data behind. I personally prefer the first method: it adds some work when developing cases, but solves the problem once and for all.)
Tests can run in different environments. (A normal setup has at least dev/test/staging/online environments, so the domain name, token/API key, and similar settings should be placed in variables; switching environments then only requires changing the variable values.)
Test data is decoupled from the business logic (test data includes the interface parameter data and the system data required for test execution).
Unify common test environment variables as much as possible.
After the test is complete, delete the unnecessary test data.
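The first approach in the repeatability principle above (create your own data, use it, then delete it) can be sketched as follows. The AccountClient here is an in-memory stand-in invented to show the three-step shape, not a real API client:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Three-step self-contained test case: create -> exercise -> clean up.
// AccountClient is a fake in-memory stand-in for real Create/Delete APIs.
public class SelfContainedCase {
    static class AccountClient {
        final Map<String, String> store = new HashMap<>();
        String createAccount(String name) {                        // step 1: set up
            String id = UUID.randomUUID().toString();
            store.put(id, name);
            return id;
        }
        String getAccountName(String id) { return store.get(id); } // interface under test
        void deleteAccount(String id) { store.remove(id); }        // step 3: tear down
    }

    public static void main(String[] args) {
        AccountClient api = new AccountClient();
        String accountId = api.createAccount("tester"); // create the data you need
        try {
            // step 2: call the interface under test with the fresh accountId
            System.out.println("tester".equals(api.getAccountName(accountId))); // true
        } finally {
            api.deleteAccount(accountId);               // always clean up
        }
        System.out.println(api.store.isEmpty()); // true: no leftover test data
    }
}
```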
JMeter use
In actual work I mainly use JMeter for functional testing of interfaces, so the following introduces how I use JMeter.
Basic introduction
Below is one of my test scripts; a script usually contains the following components. I use Simple Controllers and the Debug Sampler to organize the different interfaces, and implement the verification points mainly by writing BeanShell scripts. For complex operations, if ready-made resources such as jar or class files can be found on the web, I reference them directly in a BeanShell PreProcessor/PostProcessor. In addition, BeanShell in JMeter is not easy to debug, so I recommend writing complex features directly in Eclipse and building them into a jar package. BeanShell usage is introduced below.
Using BeanShell in JMeter
BeanShell is a scripting language that fully conforms to the Java syntax specification, with some syntax and methods of its own ([official website](http://www.beanshell.org/)).
Almost all of my scripts do their validation through BeanShell scripts; only a few use a Response Assertion.
Common BeanShell built-in variables (see the JMeter API documentation: http://jmeter.apache.org/api/org/apache/jmeter/threads/)
Here are some practical examples
- Get the response of the previous sampler (prev)
- Write information to the jmeter.log file:
log.info("Log Information");
- props is similar to vars; its values are JMeter properties, defined in the file jmeter.properties
- Reference external files (jar/class/java)
In addition, if you reference an external jar package, you can also configure it on the Test Plan: click the Test Plan node in JMeter and add the path of the jar package directly in the panel shown below.
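The snippets above run inside JMeter, where prev, vars, and log are injected into the BeanShell sampler automatically. To show the calls outside JMeter, the sketch below mocks prev (really a SampleResult) and vars (JMeterVariables) with the same method names; in a real script you would use the injected objects directly without any of this scaffolding:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-ins for JMeter's injected BeanShell objects, for illustration only.
public class BeanShellObjects {
    // Mock of SampleResult (what `prev` is in JMeter)
    static class Prev {
        String getResponseDataAsString() { return "{\"code\":200}"; }
        String getResponseCode() { return "200"; }
    }
    // Mock of JMeterVariables (`vars`)
    static class Vars {
        final Map<String, String> m = new HashMap<>();
        void put(String k, String v) { m.put(k, v); }
        String get(String k) { return m.get(k); }
    }

    public static void main(String[] args) {
        Prev prev = new Prev();
        Vars vars = new Vars();
        // Typical BeanShell assertion logic: inspect the previous response...
        String body = prev.getResponseDataAsString();
        String code = prev.getResponseCode();
        // ...and stash values for later samplers via vars.
        vars.put("lastCode", code);
        System.out.println(body.contains("200") && "200".equals(vars.get("lastCode"))); // true
    }
}
```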
Using the CSV Data Set Config component
When sending multiple sets of the same request with different parameters, you can add a CSV Data Set Config component.
The variables it defines can then be used in samplers together with a Loop Controller.
Connecting to a database
During testing we sometimes need test data from the database, and then we need to connect to the database in JMeter.
The following is an example of a MySQL database connection
Download the MySQL JDBC driver from http://dev.mysql.com/downloads/connector/j/5.1.html
Copy the file into the "lib" folder under the JMeter installation path
Create a "JDBC Connection Configuration" element
For additional database connections please refer to:
Because interface testing mostly reads data, we basically use "Select Statement". If you want to insert or update data, select "Update Statement" under "Query Type".
The following points are noted in the use process:
Do not end SQL statements with a semicolon.
If a query condition is a variable, use "?" in the statement and define its value in the "Parameter values" field below; if there are multiple parameters, separate the values with commas. Of course, you can also use ${variable name} (defined in a User Defined Variables component).
In "Variable names" you can define a variable for each column of the result set, using a comma as a placeholder for columns you do not need.
For more specific use details please refer to (http://jmeter.apache.org/usermanual/component_reference.html#JDBC_Request)
Get query result data in BeanShell
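JMeter's JDBC Request stores the full result set under the name given in "Result Variable Name", as a List of row Maps (column name to value), which BeanShell retrieves with vars.getObject("resultVar"). The sketch below mocks that structure so the extraction logic can run standalone; the column names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Extracting values from a JDBC Request result set, with the result
// mocked as the List-of-row-Maps structure JMeter produces.
public class JdbcResultDemo {
    public static void main(String[] args) {
        List<Map<String, Object>> resultVar = new ArrayList<>();
        Map<String, Object> row = new HashMap<>();
        row.put("account_id", "A-1001");
        row.put("name", "tester");
        resultVar.add(row);

        // In BeanShell: ArrayList rows = (ArrayList) vars.getObject("resultVar");
        Map<String, Object> first = resultVar.get(0);
        String accountId = (String) first.get("account_id");
        System.out.println(accountId); // A-1001
    }
}
```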
Add Listener
The Aggregate Report is a commonly used listener in JMeter. Each of its columns is explained below.
Label: each JMeter element (for example, an HTTP Request) has a Name attribute; this column shows that Name.
#Samples: the number of requests issued in this test; if you simulate 10 users and each user iterates 10 times, this shows 100.
Average: average response time; by default the average response time of a single request, or of a whole transaction when a Transaction Controller is used.
Median: the median, i.e. the response time that 50% of users fall within.
90% Line: the response time that 90% of users fall within.
Min: Minimum response time
Max: Maximum response time
Error %: the number of failed requests in this test divided by the total number of requests.
Throughput: by default, the number of requests completed per second (requests per second); when a Transaction Controller is used it can also represent a LoadRunner-style transactions-per-second figure.
KB/sec: the amount of data received from the server per second, equivalent to throughput per second in LoadRunner.
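A quick sketch of how these numbers are derived from raw sample times: the Median is the 50th percentile, the 90% Line is the 90th percentile, and Error % is failed requests over total requests. The percentile rule below is a common nearest-rank approximation and may differ in detail from JMeter's exact implementation; the sample times are made up:

```java
import java.util.Arrays;

// Deriving aggregate-report style metrics from raw response times (ms).
public class AggregateMath {
    // Nearest-rank percentile over an ascending-sorted array.
    static long percentile(long[] sortedTimes, double p) {
        int idx = (int) Math.ceil(p * sortedTimes.length) - 1;
        return sortedTimes[Math.max(idx, 0)];
    }

    public static void main(String[] args) {
        long[] times = {120, 80, 95, 300, 110, 90, 105, 250, 100, 130}; // ms
        Arrays.sort(times);
        System.out.println(percentile(times, 0.50)); // Median
        System.out.println(percentile(times, 0.90)); // 90% Line
        int total = 10, errors = 1;
        System.out.println(100.0 * errors / total + "%"); // Error %
    }
}
```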
JMeter and Jenkins Integration
Here I will only briefly cover how to execute JMeter on the command line.
First configure the JMETER_HOME environment variable; its value is the path where you extracted JMeter.
Run jmeter -v on the command line; if it correctly reports the current version, the environment is OK.
Run jmeter -n -t script.jmx -l log.jtl
As for the rest, the user lose has recommended a link (https://testerhome.com/topics/2580) that I think explains the topic fully, so I will not go into detail here.
(End of this article)
Translated from https://testerhome.com/topics/4059; thanks to the author for generously sharing.