Common Linux Network Tools: HTTP Stress Testing with ab
The full name of ab is ApacheBench, an HTTP benchmarking tool that ships with Apache. Compared with LoadRunner (LR) and JMeter, ab is the simplest and most common HTTP stress testing tool I know.
The ab command places very low demands on the machine that generates the load, consuming little CPU and memory, yet it can still put a heavy load on the target server and perform basic stress testing.
During stress testing, it is best to connect the load generator to the target server directly through a switch so that the maximum network throughput can be reached.
Installing ab is very simple: it is installed automatically along with Apache. If you want to install ab separately, you can install it with yum:
yum -y install httpd-tools
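If you want to confirm that ab is available after installation, printing its version information is a quick check:
ab -V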
AB Command Options
The basic parameters of the ab command are -n and -c:
-n  the number of requests to perform
-c  the number of concurrent requests
Other parameters:
-t  the maximum number of seconds to spend on the test
-p  the file containing the POST data
-T  the Content-type header to use for the POST data
-k  enable the HTTP KeepAlive feature, i.e. perform multiple requests within one HTTP session; KeepAlive is disabled by default (a POST example combining these options is shown after the basic example below)
Command example:
ab -n 1000 -c 100 http://www.baidu.com/
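For reference, a hypothetical POST test that combines the options above might look like the following (the URL, the post.json data file, and the content type are placeholder values, not part of the original example):
ab -n 1000 -c 100 -k -p post.json -T 'application/json' http://www.example.com/api/
This tells ab to keep connections alive and to POST the contents of post.json with the given Content-type header for each of the 1000 requests.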
AB performance indicators
A test result produced by the ab command looks like the following; the explanations are marked with ###:
Document Path:          /                      ### requested resource
Document Length:        50679 bytes            ### length of the returned document, excluding response headers
Concurrency Level:      3000                   ### number of concurrent requests
Time taken for tests:   30.449 seconds         ### total time of the test
Complete requests:      3000                   ### total number of requests
Failed requests:        0                      ### number of failed requests
Write errors:           0
Total transferred:      152745000 bytes
HTML transferred:       152037000 bytes
Requests per second:    98.52 [#/sec] (mean)   ### average number of requests per second
Time per request:       30449.217 [ms] (mean)  ### average time consumed per request
Time per request:       10.150 [ms] (mean, across all concurrent requests)  ### the preceding value divided by the concurrency level
Transfer rate:          4898.81 [Kbytes/sec] received   ### transfer rate

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        2    54   27.1     55     98
Processing:    51  8452 5196.8   7748  30361
Waiting:       50  6539 5432.8   6451  30064
Total:         54  8506 5210.5   7778  30436

Percentage of the requests served within a certain time (ms)
  50%   7778   ### 50% of requests completed within 7778 ms
  66%  11059
  75%  11888
  80%  12207
  90%  13806
  95%  18520
  98%  24232
  99%  24559
 100%  30436 (longest request)
When analyzing the results of a stress test, we mainly focus on two indicators: throughput (Requests per second) and average user request wait time (Time per request):
1. Throughput (Requests per second):
Throughput is a quantitative measure of the server's concurrent processing capability, expressed in reqs/s. It is the number of requests processed per unit of time under a given number of concurrent users; the maximum number of requests that can be processed per unit of time under that concurrency is called the maximum throughput.
Remember: throughput is always tied to the number of concurrent users. This has two implications:
A. the throughput is related to the number of concurrent users;
B. with different numbers of concurrent users, the throughput is generally different.
Calculation formula: the total number of completed requests divided by the time taken to process them, that is:
Requests per second = Complete requests / Time taken for tests
Note that this value reflects the overall performance of the machine under the current concurrency level; the larger it is, the better.
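Using the figures from the sample output above as a check:
Requests per second = 3000 / 30.449 ≈ 98.52 [#/sec]
which matches the Requests per second line reported by ab.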
2. Average user request wait time (Time per request):
Calculation formula: the time spent processing all requests divided by (total number of requests / number of concurrent users), that is:
Time per request = Time taken for tests / (Complete requests / Concurrency Level)
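With the sample output above, where the concurrency level equals the total number of requests:
Time per request = 30.449 / (3000 / 3000) = 30.449 seconds ≈ 30449.217 ms
which matches the first Time per request line (the small difference comes from rounding of the reported total time).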
3. Average server request wait time (Time per request: across all concurrent requests):
Calculation formula: the time spent processing all requests divided by the total number of requests, that is:
Time taken for tests / Complete requests
We can see that it is the reciprocal of the throughput.
It is also equal to the average user request wait time divided by the number of concurrent users, that is:
Time per request / Concurrency Level
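Again using the sample output:
Time per request (across all concurrent requests) = 30.449 / 3000 ≈ 0.01015 seconds ≈ 10.150 ms
which matches the second Time per request line and is indeed the reciprocal of the throughput (1 / 98.52 ≈ 0.01015).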
Keeping this record, for a better version of myself!