ab is Apache's own stress-testing tool; it simulates load by issuing batches of test requests directly against the server. Next we will use ab for a stress test. First, enter the following at the command line:
ab -V
which prints the version information of the ab tool:
~ zfs$ ab -V
This is ApacheBench, Version 2.3 <$Revision: 1554214 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to the Apache Software Foundation, http://www.apache.org/
All right, everything is ready. Let's start the stress test and look at the command-line output below:
~ zfs$ ab -n 1000 -c 10 http://localhost/test/demo.html
This is ApacheBench, Version 2.3 <$Revision: 1554214 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to the Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software:        Apache/2.4.10
Server Hostname:        localhost
Server Port:            80
Document Path:          /test/demo.html
Document Length:        174 bytes
Concurrency Level:      10
Time taken for tests:   0.151 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      405000 bytes
HTML transferred:       174000 bytes
Requests per second:    6631.70 [#/sec] (mean)
Time per request:       1.508 [ms] (mean)
Time per request:       0.151 [ms] (mean, across all concurrent requests)
Transfer rate:          2622.89 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       1
Processing:     0    1   0.5      1       4
Waiting:        0    1   0.4      1       4
Total:          1    1   0.5      1       5
Percentage of the requests served within a certain time (ms)
50% 1
66% 2
75% 2
80% 2
90% 2
95% 2
98% 3
99% 3
100% 5 (longest request)
Note that when we launch ab we pass in three command-line arguments, which correspond to the preconditions mentioned earlier:
-n 1000 specifies a total of 1000 requests;
-c 10 specifies 10 concurrent users;
http://localhost/test/demo.html is the target URL of these requests.
The test results are clear at a glance: the throughput rate is 6631.70 reqs/s. The results also contain several other items worth our attention, mainly the following.
Server Software is the name of the Web server software under test, here Apache/2.4.10. It comes from the header of the HTTP response, so if we run our own Web server software, or modify the source code of an open-source Web server, we can change this name to anything we like, much as we used to change item attributes with a game trainer.
Server Hostname is the host part of the requested URL, taken from the header of the HTTP request. Since the URL we requested is http://localhost/test/demo.html, the hostname is localhost, which tells us the request was issued from the Web server machine itself.
Server Port is the port on which the tested Web server software listens. For convenience, the different Web servers we test later will each listen on a different port.
Document Path is the absolute path part of the request URL, also taken from the HTTP request data; from its file extension we can usually tell the type of the request.
Document Length is the length of the body of the HTTP response data.
Concurrency Level is the number of concurrent users, which is the parameter we set.
Time taken for tests is the total time spent processing all of these requests. Incidentally, the ab shipped with some Apache versions, such as 2.2.4, has a calculation bug: when the total number of requests is small, the reported total time never drops below 0.1 s.
Complete requests is the total number of requests, corresponding to the parameter we set.
Failed requests is the number of failed requests, where failure means an exception while connecting to the server, sending data, or receiving data, or getting no response before the timeout. The timeout can be configured with ab's -t parameter.
If the header of a received HTTP response contains a status code other than 2xx, an additional statistic named "Non-2xx responses" appears in the test results to count such requests; they are not counted as failed requests.
Total transferred is the sum of the lengths of the response data for all requests, including both the header and body of each HTTP response. Note that it does not include the length of the HTTP request data, so Total transferred represents the total application-layer data flowing from the Web server to the user's machine. You can inspect the detailed HTTP headers with ab's -v parameter.
HTML transferred is the sum of the body data in all the responses, i.e. Total transferred minus the length of the header data in the HTTP responses.
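Using the figures copied from the report above, the relationship between Total transferred, HTML transferred, and the header overhead can be checked with a little arithmetic (a quick sketch, nothing more):

```python
# Figures taken from the ab report above
total_transferred = 405000   # headers + bodies of all responses, in bytes
html_transferred  = 174000   # response bodies only
complete_requests = 1000

# Header overhead is simply the difference
header_bytes = total_transferred - html_transferred   # 231000 bytes

# Per-response breakdown
per_response_total  = total_transferred // complete_requests   # 405 bytes
per_response_body   = html_transferred // complete_requests    # 174 bytes (the Document Length)
per_response_header = header_bytes // complete_requests        # 231 bytes of headers
```

The per-response body length of 174 bytes matches the Document Length line, which is a handy consistency check on the report.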
Requests per second is the throughput we care most about; it equals Complete requests / Time taken for tests.
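The throughput can be recomputed from the report's rounded figures (a sketch; ab divides by the unrounded elapsed time, so it prints 6631.70 while the rounded 0.151 s gives roughly 6622):

```python
# Figures taken from the ab report above
complete_requests = 1000
time_taken = 0.151   # seconds, as rounded in the report

# Throughput = Complete requests / Time taken for tests
requests_per_second = complete_requests / time_taken   # ~6622 with rounded input
```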
Time per request is the average user-perceived wait time per request mentioned earlier; it equals Time taken for tests / (Complete requests / Concurrency Level).
Time per request (across all concurrent requests) is the average server processing time per request mentioned earlier; it equals Time taken for tests / Complete requests, i.e. the reciprocal of the throughput. It also equals Time per request / Concurrency Level.
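Both Time per request figures, and the relationship between them, follow directly from the report's numbers (a sketch using the rounded 0.151 s, so the results differ slightly from ab's printed 1.508 ms):

```python
# Figures taken from the ab report above
complete_requests = 1000
concurrency = 10
time_taken = 0.151   # seconds, as rounded in the report

# Average user wait time: Time taken / (Complete requests / Concurrency Level)
time_per_request = time_taken / (complete_requests / concurrency) * 1000   # ms, ~1.51

# Average server processing time: Time taken / Complete requests
time_per_request_across = time_taken / complete_requests * 1000            # ms, 0.151
```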
Transfer rate is the amount of data these requests receive from the server per unit of time; it equals Total transferred / Time taken for tests. When the server's processing power reaches its limit, this statistic is a good indication of the server's outbound bandwidth requirement. With the bandwidth knowledge introduced earlier, it is not hard to work out the numbers.
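The transfer rate likewise follows from Total transferred and the elapsed time (a sketch; ab's printed 2622.89 uses the unrounded time, so the rounded inputs land slightly lower, around 2619):

```python
# Figures taken from the ab report above
total_transferred = 405000   # bytes
time_taken = 0.151           # seconds, as rounded in the report

# Transfer rate = Total transferred / Time taken for tests, in Kbytes/sec
transfer_rate = total_transferred / time_taken / 1024   # ~2619 with rounded input
```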
Percentage of the requests served within a certain time (ms) describes the distribution of processing times across requests. For example, in the results above, 80% of requests were processed within 2 ms, and 99% within 3 ms. Note that "processing time" here refers to the Time per request above, i.e. the average processing time of each request from a single user's point of view.
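To make the percentile table concrete, here is a small sketch of how such a table can be computed from per-request timings. The timing distribution below is hypothetical, chosen only so that the output resembles the table above; it is not the real measured data:

```python
import math

# Hypothetical per-request timings in ms for 1000 requests
# (550 requests took 1 ms, 420 took 2 ms, 25 took 3 ms, 5 took 5 ms)
timings = sorted([1] * 550 + [2] * 420 + [3] * 25 + [5] * 5)

def served_within(p):
    # The time within which p% of all requests completed:
    # take the p-th percentile position in the sorted timing list
    idx = math.ceil(p / 100 * len(timings)) - 1
    return timings[idx]

for p in (50, 66, 75, 80, 90, 95, 98, 99, 100):
    print(f"{p:3d}% {served_within(p)}")
```

Sorting the timings and indexing at each percentile position reproduces a table of the same shape: 50% of requests finish within 1 ms, 98% within 3 ms, and the slowest request takes 5 ms.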