The string for the property.

-C  attaches a cookie to the request. The argument is typically a name=value pair, and the option can be repeated.
-H  attaches an extra header to the request. The argument must be a valid header line, i.e. a colon-separated field/value pair (for example, "Accept-Encoding: zip/zop;8bit").
-A  supplies Basic authentication credentials to the server. The user name and password are separated by a colon and sent Base64-encoded. The string is sent whether or not the server asks for it (that is, whether or not a 401 status has been returned).
-h  displays usage information.
-d  suppresses the "percentage served within XX [ms]" table (kept for backward compatibility).
-e  writes a comma-separated (CSV) file containing, for each percentage from 1% to 100%, the time (in milliseconds) needed to serve that percentage of the requests. This is usually more useful than the gnuplot file because the data is already binned.
-g  writes all measured values to a gnuplot/TSV (tab-separated) file, which can easily be imported into gnuplot, IDL, Mathematica, Igor or even Excel. The first line of the file holds the column labels.
-i  performs HEAD requests instead of GET.
-k  enables the HTTP KeepAlive feature, i.e. performs multiple requests within one HTTP session. KeepAlive is off by default.
-q  when more than about 150 requests are processed, ab writes a progress count to stderr roughly every 10% or every 100 requests; the -q flag suppresses these messages.
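To make the flags concrete, here is a hedged example that combines several of them in a single run; the URL, cookie value, header and credentials are placeholders, not values taken from this article:

    # send 1000 keep-alive requests at concurrency 10, with an extra header,
    # a cookie and Basic auth, exporting percentile (CSV) and gnuplot data
    ab -n 1000 -c 10 -k \
       -H "Accept-Encoding: gzip" \
       -C "sessionid=abc123" \
       -A testuser:testpass \
       -e percentiles.csv \
       -g results.tsv \
       http://example.com/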
Using ab in practice

ab has many command-line parameters, but the ones used most often are -n and -c. For example:

    ab -n 100 -c 10 http://www.baidu.com/
    ### -n: total number of requests to issue   -c: concurrency level

    This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Copyright 2006 The Apache Software Foundation, http://www.apache.org/

    Benchmarking www.baidu.com (be patient)...done

    Server Software:        BWS/1.0            ### server information
    Server Hostname:        www.baidu.com      ### domain name
    Server Port:            80                 ### port connected to

    Document Path:          /                  ### requested URI
    Document Length:        10530 bytes        ### size of the first document returned; if the size changes during the test, the response is counted as an error

    Concurrency Level:      10                 ### concurrency
    Time taken for tests:   29.32944 seconds   ### elapsed time from start to end
    Complete requests:      100                ### number of successful requests
    Failed requests:        42                 ### number of failed requests
       (Connect: 0, Length: 42, Exceptions: 0)      ### breakdown: connect failures, length mismatches, read failures
    Write errors:           0                  ### number of failures while sending
    Total transferred:      1131908 bytes      ### bytes received from the server, i.e. the bytes actually seen on the network
    HTML transferred:       1084140 bytes      ### amount of HTML content transferred
    Requests per second:    3.44 [#/sec] (mean)        ### requests per second
    Time per request:       2903.294 [ms] (mean)       ### time per batch of concurrent requests
    Time per request:       290.329 [ms] (mean, across all concurrent requests)   ### (the author's reading: the time of each individual request within a concurrent batch)
    Transfer rate:          38.06 [Kbytes/sec] received    ### network traffic per second

    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:       37  1003  809.6    898   4056   ### time from issuing the socket request to the connection being established
    Processing:   253  1713  861.2   1800   5643   ### time from the connection being established until the whole HTTP response is received
    Waiting:       42   759  711.5    715   4886   ### time after the HTTP request is sent until the first byte of the response arrives
    Total:        336  2717 1248.4   2739   6655   ### Connect + Processing

Key ab performance metrics

Several of ab's metrics matter most during performance testing:

1. Throughput rate (Requests per second). The server's concurrent processing capability, in reqs/s: the number of requests processed per unit of time under a given number of concurrent users. The largest number of requests that can be processed per unit of time at a given concurrency is the maximum throughput rate. Remember that the throughput rate is defined relative to the number of concurrent users; this carries two implications: (a) the throughput rate and the number of concurrent users are related, and (b) different numbers of concurrent users generally give different throughput rates. Formula: the total number of requests divided by the time spent processing them, i.e. Requests per second = Complete requests / Time taken for tests. This value reflects the overall performance of the machine under test; the larger, the better.

2. Number of concurrent connections. The number of connections the server is handling at a given moment; put simply, a session.

3. Number of concurrent users (Concurrency Level). Distinguish this from the number of concurrent connections: one user may hold several sessions, i.e. connections, at the same time. Under HTTP/1.1, IE7 opens 2 concurrent connections, IE8 opens 6 and Firefox 3 opens 4, so the number of concurrent users has to be divided by the corresponding base.

4. Average user request latency (Time per request). Formula: the time spent processing all requests divided by (total requests / number of concurrent users), i.e. Time per request = Time taken for tests / (Complete requests / Concurrency Level).

5. Average server request latency (Time per request: across all concurrent requests). Formula: the time spent processing all requests divided by the total number of requests, i.e. Time per request (across all concurrent requests) = Time taken for tests / Complete requests. As you can see, this is the reciprocal of the throughput rate. It also equals the average user request latency divided by the number of concurrent users, i.e. Time per request / Concurrency Level.
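As a quick sanity check, the formulas above can be applied to the sample run; this is only a sketch, using the figures reported above (the reported per-request times imply an elapsed time of roughly 29.03 s):

    # recompute the reported metrics from the sample run
    # n = Complete requests, c = Concurrency Level, t = elapsed time in seconds
    awk 'BEGIN {
      n = 100; c = 10; t = 29.032944;
      printf "Requests per second:           %.2f\n",    n / t;              # 3.44
      printf "Time per request (mean):       %.3f ms\n", t * 1000 / (n / c); # 2903.294
      printf "Time per request (across all): %.3f ms\n", t * 1000 / n;       # 290.329
    }'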
A concrete case study is the Tengine & Nginx performance test, http://tengine.taobao.org/document_cn/benchmark_cn.html, summarized below.

Background

We implemented support for SO_REUSEPORT [1] in Tengine. To see how well it works, we ran a simple test. Four identically configured servers were deployed on the same LAN. One of them ran Tengine and Nginx at the same time, each listening on a different port; the other three ran ab. The three ab instances applied load simultaneously, with the overall concurrency raised gradually from 100 to 1000, against Tengine and Nginx in turn, requesting an empty GIF image.

Three load-test scenarios were compared:
- Tengine with SO_REUSEPORT enabled (reuse_port on).
- Nginx with its default configuration.
- Nginx with an optimized configuration, i.e. the accept mutex disabled (accept_mutex off).

ab command (a scripted version of this concurrency sweep is sketched at the end of this section):

    ab -r -n 10000000 -c <concurrency> http://ip:81/empty.gif

Test environment

    Intel(R) Xeon(R) [email protected], 32 core
    Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection
    Red Hat Enterprise Linux Server release 5.7 (Tikanga)
    Linux 3.17.2 x86_64
    128GB Memory

Software

    nginx/1.6.2
    tengine/2.1.0 (http://tengine.taobao.org)
    apachebench/2.3

System configuration

    net.ipv4.tcp_mem = 3097431 4129911 6194862
    net.ipv4.tcp_rmem = 4096 87380 6291456
    net.ipv4.tcp_wmem = 4096 65536 4194304
    net.ipv4.tcp_max_tw_buckets = 262144
    net.ipv4.tcp_tw_recycle = 0
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_fin_timeout = 15
    net.ipv4.ip_local_port_range = 1024 65535
    net.ipv4.tcp_max_syn_backlog = 65535
    net.core.somaxconn = 65535
    net.core.netdev_max_backlog = 200000

    Limit            Soft limit   Hard limit   Units
    Max open files   65535        65535        files

Server configuration

nginx/1.6.2 configuration file:

    worker_processes auto;
    worker_cpu_affinity 00000000000000000000000000000001
                        00000000000000000000000000000010
                        00000000000000000000000000000100
                        00000000000000000000000000001000
                        00000000000000000000000000010000
                        00000000000000000000000000100000
                        00000000000000000000000001000000
                        00000000000000000000000010000000
                        00000000000000000000000100000000
                        00000000000000000000001000000000
                        00000000000000000000010000000000
                        00000000000000000000100000000000
                        00000000000000000001000000000000
                        00000000000000000010000000000000
                        00000000000000000100000000000000
                        00000000000000001000000000000000
                        00000000000000010000000000000000
                        00000000000000100000000000000000
                        00000000000001000000000000000000
                        00000000000010000000000000000000
                        00000000000100000000000000000000
                        00000000001000000000000000000000
                        00000000010000000000000000000000
                        00000000100000000000000000000000
                        00000001000000000000000000000000
                        00000010000000000000000000000000
                        00000100000000000000000000000000
                        00001000000000000000000000000000
                        00010000000000000000000000000000
                        00100000000000000000000000000000
                        01000000000000000000000000000000
                        10000000000000000000000000000000;
    worker_rlimit_nofile 65535;

    events {
        worker_connections 65535;
        accept_mutex off;
    }

    http {
        include mime.types;
        default_type application/octet-stream;
        access_log logs/access.log;
        keepalive_timeout 0;

        server {
            listen 80 backlog=65535;
            server_name localhost;

            location = /empty.gif {
                empty_gif;
            }
        }
    }

tengine/2.1.0 configuration file:

    worker_processes auto;
    worker_cpu_affinity auto;
    worker_rlimit_nofile 65535;

    events {
        worker_connections 65535;
        reuse_port on;
    }

    http {
        include mime.types;
        default_type application/octet-stream;
        access_log logs/access.log;
        keepalive_timeout 0;

        server {
            listen 81 backlog=65535;
            server_name localhost;

            location = /empty.gif {
                empty_gif;
            }
        }
    }

The Tengine and Nginx configurations differ only in reuse_port and accept_mutex; Tengine's worker_cpu_affinity auto is equivalent to the explicit bitmask list in the Nginx configuration.

Results

With SO_REUSEPORT enabled, Tengine handles 200% more load than Nginx with its default configuration, and 60% more than Nginx with the optimized configuration.
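For completeness, here is a hedged sketch of how one of the three ab clients could drive the concurrency sweep described above; the step size, the even split of the overall concurrency across the three clients, and the output file names are assumptions rather than details of the original test harness:

    # hypothetical driver for one ab client; the overall concurrency goes 100..1000
    for total in $(seq 100 100 1000); do
        c=$(( total / 3 ))      # this client's share of the overall concurrency (assumed even split)
        ab -r -n 10000000 -c "$c" http://ip:81/empty.gif > "result_c${total}.txt"
    done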