Using JMeter for Functional and Performance Testing of an HTTP Interface

When testing a mobile app there are many interfaces to cover; here I use JMeter to test the functionality and performance of one HTTP interface. First, we get the interface details from the developers.

I. Test Requirements

1. The interface under test is an HTTP server interface.

2. Interface: order-list query interface

3. Interface description: a user queries his or her list of orders

URL: http://192.168.8.197/biz/api/v1/mobile/doctor/subscribe/orderlist

Request method: GET

Port number: 9090

Request header parameter: Token = ffb74003075c4094853c98bfcfd081b7

Request parameters: StateType = all, Beginline = 1

Response data: details of all of the user's orders
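
Before building the JMeter plan, the interface can be smoke-tested with a short script. Below is a minimal sketch using Python's third-party requests library; the host, port, path, token, and parameters are combined from the description above, and the exact shape of the response body is not specified here.

```python
import requests

# Interface details from the test requirements above
URL = "http://192.168.8.197:9090/biz/api/v1/mobile/doctor/subscribe/orderlist"
HEADERS = {"Token": "ffb74003075c4094853c98bfcfd081b7"}
PARAMS = {"StateType": "all", "Beginline": 1}

def check_order_list():
    # Send the same GET request that the JMeter HTTP Request sampler will send
    resp = requests.get(URL, headers=HEADERS, params=PARAMS, timeout=10)
    print("Status code:", resp.status_code)
    print("Elapsed:", resp.elapsed.total_seconds(), "s")
    # The response should contain the user's order list; the exact fields
    # depend on the interface documentation provided by the developers
    print(resp.text[:500])

if __name__ == "__main__":
    check_order_list()
```

If this single request returns the expected order data, the same values can be copied into the JMeter elements set up in section II.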

II. Deploying the Environment in JMeter

1. First, open JMeter and add a Thread Group under the Test Plan.

2. Add an HTTP Header Manager under the Thread Group (if the interface has no request header parameters, this step can be skipped).

3. Add an HTTP Request sampler under the Thread Group.

4. Add a View Results Tree listener under the HTTP Request.

5. Add an Aggregate Report listener under the HTTP Request.

III. After the setup is complete, fill in the test data

1. First, fill in the HTTP Header Manager: click Add and enter the request header parameter Token = ffb74003075c4094853c98bfcfd081b7.

2. Fill in the HTTP Request sampler: click Add and enter the request parameters StateType = all and Beginline = 1, together with the following fields:

Protocol: HTTP

IP: 192.168.8.197

Port number: 9090

Request method: GET

Path: /biz/api/v1/mobile/doctor/subscribe/orderlist

3. Configure the Thread Group according to the test requirements. Here we set 10 virtual users, looping 1 time.

The Thread Group settings are explained below (a small sketch of the equivalent load appears after this list):

(1) Number of Threads: the number of virtual users. Each virtual user occupies one thread, so set the number of threads to the number of users you want to simulate.

(2) Ramp-Up Period (in seconds): the time over which all virtual users are started. For example, with 100 threads and a ramp-up of 10 seconds, it takes 10 seconds to start the 100 threads, i.e. 10 threads per second. We set it to 1 second here, meaning all 10 users start within 1 second.

(3) Loop Count: the number of times each thread sends the request. For example, with 10 threads and a loop count of 10, each thread sends 10 requests, so the total number of requests is 10 × 10 = 100. If "Forever" is checked, all threads keep sending requests until you stop the script.

(4) Delay Thread creation until needed: threads are created only when they are due to start, instead of all at the beginning of the test.

(5) Scheduler: lets you set the duration, or the start and end time, of the thread group.

When you use the Scheduler, set the loop count to Forever, so that the test does not end early because the loop count is exhausted.

Duration (seconds): how long the test runs; for example, enter 60 for 1 minute. This overrides the end time.

Start time: when the test plan starts. The startup delay overrides it, and if the start time has already passed when the script is run manually, the current time is used instead (although the displayed start time does not change).

End time: when the test plan ends. The duration overrides it.

Startup delay (seconds): how long the start of the test is delayed; this overrides the start time.
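
To make the Thread Group arithmetic concrete, here is a small illustrative sketch (not part of JMeter itself) that mimics our settings of 10 threads, a 1-second ramp-up, and 1 loop, reusing the request details from section I. The requests library and helper names are assumptions made for this example.

```python
import threading
import time
import requests

URL = "http://192.168.8.197:9090/biz/api/v1/mobile/doctor/subscribe/orderlist"
HEADERS = {"Token": "ffb74003075c4094853c98bfcfd081b7"}
PARAMS = {"StateType": "all", "Beginline": 1}

NUM_THREADS = 10   # Number of Threads (virtual users)
RAMP_UP = 1.0      # Ramp-Up Period in seconds
LOOP_COUNT = 1     # Loop Count per thread

results = []       # (elapsed_seconds, status_code) for each sample
lock = threading.Lock()

def virtual_user():
    # Each thread sends LOOP_COUNT requests, like one JMeter virtual user
    for _ in range(LOOP_COUNT):
        start = time.time()
        resp = requests.get(URL, headers=HEADERS, params=PARAMS, timeout=10)
        with lock:
            results.append((time.time() - start, resp.status_code))

threads = []
for _ in range(NUM_THREADS):
    t = threading.Thread(target=virtual_user)
    threads.append(t)
    t.start()
    # Spread thread start-up over the ramp-up period:
    # NUM_THREADS / RAMP_UP threads are started per second
    time.sleep(RAMP_UP / NUM_THREADS)

for t in threads:
    t.join()

# Total samples = threads x loops (10 x 1 = 10 here; 10 x 10 = 100 in the example in (3))
print("Total samples:", len(results))
```

With 100 threads and a 10-second ramp-up, the same sleep works out to one new thread every 0.1 seconds, i.e. 10 threads started per second, which matches the explanation in (2).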

4. Once everything is set up, click the green arrow to run, or choose Start from the Run menu. After the run completes, look at the results: click Response data in the View Results Tree to check that the returned data is consistent with what the developers expect.

5. If the request succeeds, look at the Aggregate Report; this data is our test result.

6. The Aggregate Report fields are explained below; we mainly look at the Average and Throughput values (a small sketch that computes these metrics follows the list).

(1) Label: the Name of the corresponding request.

(2) # Samples: the number of requests sent in this test. We simulate 10 users, each looping once, so this shows 10.

(3) Average: the average response time of the requests.

(4) Median: the median response time; 50% of requests completed within this time.

(5) 90% Line: the response time within which 90% of requests completed.

(6) 95% Line: the response time within which 95% of requests completed.

(7) 99% Line: the response time within which 99% of requests completed.

(8) Min: the minimum response time.

(9) Max: the maximum response time.

(10) Error %: the percentage of failed requests in this test, i.e. the number of failed requests divided by the total number of requests.

(11) Throughput: the number of requests completed per second.

(12) Received KB/sec: the amount of data received from the server per second.

(13) Sent KB/sec: the amount of data sent to the server per second.
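
The Aggregate Report columns can be reproduced from raw per-request data. The sketch below is purely illustrative: the sample times, success flags, and test duration are made-up values standing in for a real results file, and the percentile helper is a simple nearest-rank approximation.

```python
import statistics

# Hypothetical per-request samples: (response_time_ms, success_flag).
# In a real test these would come from the JMeter results (.jtl) file.
samples = [(230, True), (180, True), (350, True), (290, True), (410, False),
           (200, True), (260, True), (310, True), (240, True), (275, True)]

times = sorted(t for t, _ in samples)
errors = sum(1 for _, ok in samples if not ok)
test_duration_s = 1.2  # hypothetical wall-clock duration of the whole test

def percentile(sorted_values, pct):
    # Simple nearest-rank percentile, similar to the "% Line" columns
    index = max(0, round(pct / 100 * len(sorted_values)) - 1)
    return sorted_values[index]

print("# Samples :", len(samples))
print("Average   :", statistics.mean(times), "ms")
print("Median    :", statistics.median(times), "ms")
print("90% Line  :", percentile(times, 90), "ms")
print("95% Line  :", percentile(times, 95), "ms")
print("99% Line  :", percentile(times, 99), "ms")
print("Min / Max :", min(times), "/", max(times), "ms")
print("Error %   :", 100.0 * errors / len(samples), "%")
print("Throughput:", len(samples) / test_duration_s, "requests/sec")
```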
