Website Performance Stress Testing Tool: Apache ab Usage in Detail

Source: Internet
Author: User

ab is a stress testing tool that ships with Apache. It is very practical: it can run load tests not only against sites served by Apache, but also against other types of servers, such as Nginx, Tomcat, and IIS.

Let's start by introducing the use of the ab command:
1. The principle of ab
2. Installing ab
3. ab parameter description
4. ab performance metrics
5. Using ab in practice

1. The principle of ab
ab is short for the ApacheBench command.

How ab works: the ab command creates multiple concurrent requests, simulating many visitors accessing a URL at the same time. Because its test target is simply a URL, it can be used to measure the load capacity not only of Apache but also of other web servers such as Nginx, lighttpd, Tomcat, and IIS.

The ab command puts very little load on the machine it runs from; it neither occupies much CPU nor consumes much memory. It can, however, put an enormous load on the target server, working much like a CC attack. Be careful with your own tests as well: generating too much load at once may exhaust the target server's resources, or in severe cases even crash it.

2. Installing ab
$ yum install httpd-tools
Once the command completes, you can run ab directly.
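
As a quick check that the installation worked (a minimal sketch; the exact output will vary by system), you can ask ab for its version and usage information:

$ ab -V    # prints the ApacheBench version number and exits
$ ab -h    # prints the usage/help text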

3. ab parameter description

The main parameters are explained below (a combined usage example follows the list):
-n: The number of requests to perform in the test session. By default, only one request is performed.

-c: The number of requests to perform at a time, i.e. the concurrency. The default is one at a time.

-t: The maximum number of seconds to spend on the test. Internally this implies -n 50000, which lets you benchmark the server within a fixed total time. By default there is no time limit.

-p: The file containing the data to POST.

-P: Supplies Basic Authentication credentials to a proxy en route. The username and password are separated by a single colon and sent base64-encoded. The string is sent regardless of whether the proxy asks for it (i.e. whether it has sent a 407 Proxy Authentication Required code).

-T: The Content-Type header to use for the POST data.

-v: Sets the verbosity of the output. A value of 4 or greater prints header information, 3 or greater prints response codes (404, 200, etc.), and 2 or greater prints warnings and other information.

-V: Displays the version number and exits.

-w: Prints the results as HTML tables. The default is a two-column table on a white background.

-i: Performs HEAD requests instead of GET.

-x: The string to use as attributes for <table>.

-X: Use a proxy server for the request.

-y: The string to use as attributes for <tr>.

-z: The string to use as attributes for <td>.

-C: Adds a Cookie: line to the request. The typical form is a name=value pair; this option can be repeated.

-H: Appends extra headers to the request. The argument is typically a valid header line, containing a colon-separated field-value pair (for example, "Accept-Encoding: zip/zop;8bit").

-A: Supplies Basic Authentication credentials to the server. The username and password are separated by a single colon and sent base64-encoded. The string is sent regardless of whether the server asks for it (i.e. whether it has sent a 401 Authentication Required code).

-h: Displays usage information.

-d: Does not display the "percentage served within XX [ms]" table. (Legacy support.)

-e: Produces a comma-separated (CSV) file that records, for each percentage (from 1% to 100%), the time in milliseconds it took to serve that percentage of the requests. This format is usually more useful than the 'gnuplot' format because the results are already binned.

-g: Writes all test results to a 'gnuplot' or TSV (tab-separated) file. This file can easily be imported into gnuplot, IDL, Mathematica, Igor, or even Excel. The first row of the file contains the column headings.

-k: Enables the HTTP KeepAlive feature, performing multiple requests within one HTTP session. By default, KeepAlive is not enabled.

-q: If more than 150 requests are processed, ab outputs a progress count on stderr roughly every 10% or every 100 requests. The -q flag suppresses this output.
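
As a combined illustration of the flags above (a sketch only: the host, path, POST file, header, and cookie values are made up for this example and are not taken from the article), a keepalive POST test with a custom header and a cookie might look like this:

$ ab -n 1000 -c 10 -k \
     -p post_data.json -T 'application/json' \
     -H 'Accept-Encoding: gzip' \
     -C 'session=abc123' \
     http://localhost:8080/api/test
# -n 1000 / -c 10: 1000 requests in total, 10 at a time
# -k: reuse connections (HTTP KeepAlive)
# -p / -T: POST the contents of post_data.json with the given Content-Type
# -H: append an extra request header
# -C: add a Cookie: session=abc123 line to each request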

4. ab performance metrics
Several metrics are important in the performance testing process:

1. Throughput (Requests per second)
A quantitative measure of the server's concurrent processing capability, in reqs/s: the number of requests processed per unit of time at a given number of concurrent users. The maximum number of requests that can be processed per unit of time at a given concurrency is called the maximum throughput.

Remember: the throughput is always tied to the number of concurrent users. This statement has two implications:
a. the throughput is measured at a particular number of concurrent users;
b. at different numbers of concurrent users, the throughput is generally different.

Calculation formula: the total number of requests divided by the time taken to complete them, i.e.
Requests per second = Complete requests / Time taken for tests

Note that this value reflects the overall performance of the current machine: the larger it is, the better. (A worked example combining these formulas follows at the end of this section.)

2. Number of concurrent connections (Concurrent connections)
The number of concurrent connections is the number of connections the server is handling at a given moment; simply put, each one is a session.

3. Number of concurrent users (Concurrency level)
Take care to distinguish this concept from the number of concurrent connections: one user may open several sessions, i.e. connections, at the same time. Under HTTP/1.1, IE7 supports 2 concurrent connections, IE8 supports 6, and Firefox 3 supports 4, so the number of concurrent users has to be divided by this factor accordingly.

4. Average user request wait time (Time per request)

Calculation formula: the time taken to process all requests divided by (total number of requests / number of concurrent users), i.e.:
Time per request = Time taken for tests / (Complete requests / Concurrency level)

5. Average server request wait time (Time per request: across all concurrent requests)

Calculation formula: the time taken to complete all requests divided by the total number of requests, i.e.:
Time per request (across all concurrent requests) = Time taken for tests / Complete requests
As you can see, it is the reciprocal of the throughput.

It also equals the average user request wait time divided by the number of concurrent users, i.e.
Time per request / Concurrency level
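
As a worked example of the formulas above, take some made-up numbers: 1,000 completed requests, a concurrency level of 10, and a total test time of 2.5 seconds (bc is used here only to do the arithmetic):

$ echo "scale=4; 1000 / 2.5" | bc          # Requests per second = 400.0000
$ echo "scale=4; 2.5 / (1000 / 10)" | bc   # Time per request (user view) = .0250 s = 25 ms
$ echo "scale=4; 2.5 / 1000" | bc          # Time per request, across all concurrent requests = .0025 s = 2.5 ms

The last value is exactly 1/400, the reciprocal of the throughput, and also 25 ms divided by the concurrency of 10, which matches the relationships stated above.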

5. Using ab in practice
ab has quite a few parameters; in practice the most commonly used are -c and -n.

Let's test Apache's performance now. Use the following command:

$ ab -n 100 -c 10 http://13.209.21.196:8080/trade-server/test/order/testQueue

Here -n 100 indicates that the total number of requests is 100, -c 10 indicates that the number of concurrent users is 10, and http://13.209.21.196:8080/trade-server/test/order/testQueue is the target URL of the request.

This line means: process 100 requests in total, running 10 requests at a time.

The test results make things clear at a glance; the throughput measured by ab is: Requests per second: 5655.47 [#/sec] (mean).

In addition, some of the other fields in the output need a little explanation:
Server Software represents the name of the Web server software being tested.

The Server Hostname represents the URL host name of the request.

Server Port represents the listening port of the Web server software being tested.

Document Path represents the absolute path portion of the requested URL; from the file suffix we can generally tell the type of the request.

Document length represents the body size of the HTTP response data.

Concurrency level represents the number of concurrent users, which is one of the parameters we set.

Time taken for tests represents the total amount of time taken to process all of these requests.

Complete requests represents the total number of requests, which is one of the parameters we set.

Failed requests represents the number of failed requests. Here 'failure' means the request ran into a problem while connecting to the server, sending data, or receiving data, or got no response before timing out. If the headers of a received HTTP response contain a status code other than 2xx, a separate statistic named "Non-2xx responses" appears in the results; such requests are counted there and are not counted as failed requests.

Total transferred represents the sum of the response sizes of all requests, including the headers and body of every HTTP response. Note that it does not include the length of the HTTP request data; it is only the total amount of application-layer data flowing from the web server to the user's machine.

HTML transferred represents the sum of the response bodies of all requests, i.e. Total transferred minus the total length of the HTTP response headers.

Requests per second is the throughput, also called QPS. Calculation formula: Complete requests / Time taken for tests.

Time per request is the average user request wait time, i.e. how long it takes, from the user's point of view, to complete one request. Calculation formula: Time taken for tests / (Complete requests / Concurrency level).

Time per request (across all concurrent requests) is the average time the server takes to complete one request. Calculation formula: Time taken for tests / Complete requests; it is exactly the reciprocal of the throughput.
It can also be computed as: Time per request / Concurrency level.

Transfer rate represents the network transfer rate. Calculation formula: Total transferred / Time taken for tests. This metric is a good indication of the outbound bandwidth the server will need when its processing capacity reaches its limit.

For tests that request large files, this value can easily become the system bottleneck. To determine whether it really is the bottleneck, you need to understand the network between the client and the server under test, including the network bandwidth, NIC speed, and so on.

Percentage of the requests served within a certain time (ms)
This part of the output describes the distribution of request processing times. In the test above, for example, 80% of the requests were processed within 2 ms. The processing time here is the Time per request discussed above, i.e. for a single user, the average processing time per request.

The first row of this table says that 50% of the requests completed within 2 ms; you can see that this value is close to the system's average response time, and the other rows can be read the same way.
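
If you want this percentile distribution in machine-readable form, the -e option described earlier writes it to a CSV file (a sketch; percentiles.csv is just an example filename, and the URL is the one used in the test above):

$ ab -n 100 -c 10 -e percentiles.csv http://13.209.21.196:8080/trade-server/test/order/testQueue
# percentiles.csv will contain one line per percentage (1% to 100%)
# with the time it took to serve that share of the requests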

Connection Times (ms)

              min  mean[+/-sd] median   max
Connect:        0    1   0.1      1       1
Processing:     1    1   0.2      1       2
Waiting:        1    1   0.2      1       2
Total:          1    2   0.2      2       2

The table formed by these rows gives a finer breakdown of the response time, i.e. the Time per request discussed above. The response time of a request can be divided into three parts: network connection (Connect), server processing (Processing), and waiting (Waiting). In the table, min is the minimum, mean is the average, and [+/-sd] is the standard deviation (the mean square error from high-school math), which indicates how spread out the data is: the larger the value, the more scattered the data and the less stable the system's response time. Median is the median value, and max is of course the maximum.

Note that Total in the table is not simply the sum of the three rows above, because those rows are not measured on the same request: one request may have the shortest network delay but the longest processing time. Total is therefore calculated from the point of view of how long each whole request takes. Here you can see that the slowest request took 2 ms (the "100% 2 (longest request)" line).
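
To capture the raw per-request timings behind this table, the -g option described earlier writes them to a tab-separated file (again a sketch; timings.tsv is just an example filename):

$ ab -n 100 -c 10 -g timings.tsv http://13.209.21.196:8080/trade-server/test/order/testQueue
# the first row of timings.tsv is the column headings; each following row holds
# one request's connect, processing, wait, and total times, ready for gnuplot or Excel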
