Transferred from: http://blog.chinaunix.net/xmlrpc.php?r=blog/article&uid=190176&id=4234854
1. Preface
Nginx uses CPU and RAM very efficiently when managing a large number of connections, especially when only a few of them are active at any moment. In today's world of many always-online terminals, it is worth letting Nginx play to this strength. This article runs a simple test: Nginx on an ordinary PC virtual machine holds 100,000 long-lived HTTP connections, and we then look at how much CPU and memory Nginx and the system consume.
2. Test environment
1. Server
Hardware: dual-core 2.3 GHz CPU, 2 GB RAM. Software: CentOS 6.5, kernel 2.6.32, gcc 4.4.7, Nginx 1.4.7. IP: 10.211.55.8
Kernel parameter tuning:
    $ /sbin/sysctl -w net.netfilter.nf_conntrack_max=102400   # raise the system-wide connection-tracking limit
    $ /sbin/sysctl net.netfilter.nf_conntrack_max             # verify the change took effect
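Note that `sysctl -w` changes only the running kernel, so the setting is lost on reboot. To make it persistent on CentOS, the same key can be added to /etc/sysctl.conf and reloaded (standard sysctl behavior; the original test did not require this step):

```shell
# /etc/sysctl.conf -- applied at boot; reload immediately with: sysctl -p
net.netfilter.nf_conntrack_max = 102400
```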
Nginx is compiled from source with --with-http_stub_status_module; only the settings that differ from the defaults are listed below:
    worker_rlimit_nofile 102400;

    events {
        worker_connections 102400;
    }

    http {
        # Use a large timeout so that clients can keep the connection
        # alive by sending an occasional HEAD request
        keepalive_timeout 3600;

        server {
            # Connection-count monitoring, accessible from localhost only
            location /nginx_status {
                stub_status on;
                access_log off;
                allow 127.0.0.1;
                deny all;
            }
        }
    }
2. Client 1
Hardware: dual-core 2.3 GHz CPU, 2 GB RAM. Software: CentOS 6.5, kernel 2.6.32, gcc 4.4.7, Python 3.3.5. IP: 10.211.55.9
Kernel parameter tuning:

    $ /sbin/sysctl -w net.ipv4.ip_local_port_range="1024 61024"   # only 50,000 of these ports are actually used
    $ /sbin/sysctl net.ipv4.ip_local_port_range                   # verify the change took effect
    $ vi /etc/security/limits.conf   # raise the current user's max open file count (nofile); hard >= soft > 50000
    $ ulimit -n                      # verify; a new login shell may be required
Python 3.3.5 is compiled from source and set up as follows:

    $ pyvenv ~/pyvenv                # create a virtual environment, convenient for testing
    $ . ~/pyvenv/bin/activate        # activate the virtual environment
    (pyvenv) $ python get-pip.py     # get-pip.py is downloaded from the pip website
    (pyvenv) $ pip install asyncio   # install the asynchronous I/O module
Because Apache ab can only fire requests in bulk and cannot hold connections open, I wrote an HTTP long-connection test tool, asyncli.py; see http://blog.chinaunix.net/uid-190176-id-4223282.html for the implementation details. Basic usage:

    (pyvenv) $ python asyncli.py --help
    usage: asyncli.py [-h] [-c CONNECTIONS] [-k KEEPALIVE] url
    asyncli

    positional arguments:
      url                   page address

    optional arguments:
      -h, --help            show this help message and exit
      -c CONNECTIONS, --connections CONNECTIONS
                            number of simultaneous connections
      -k KEEPALIVE, --keepalive KEEPALIVE
                            HTTP keepalive timeout
Working mechanism: create 10 new connections every 10 milliseconds (roughly 1,000 new connections per second) until the total reaches CONNECTIONS. Each connection then sleeps for a random number of seconds in [1, KEEPALIVE/2], sends a HEAD request to the server URL to keep the HTTP keepalive active, and repeats the sleep-and-request cycle.
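The mechanism above can be sketched with asyncio roughly as follows. This is a simplified illustration, not the actual asyncli.py: all names are mine, and it uses the modern async/await syntax rather than the `yield from` style of Python 3.3.

```python
import asyncio
import random

async def keepalive_client(host, port, path="/", keepalive=3600, cycles=1):
    """One long connection: sleep, send a HEAD request, read the reply, repeat."""
    reader, writer = await asyncio.open_connection(host, port)
    request = ("HEAD {} HTTP/1.1\r\n"
               "Host: {}\r\n"
               "Connection: keep-alive\r\n\r\n").format(path, host).encode()
    ok = 0
    for _ in range(cycles):
        # Random sleep in [1, keepalive/2] seconds, as described above
        await asyncio.sleep(random.uniform(1, keepalive / 2))
        writer.write(request)
        await writer.drain()
        status = await reader.readline()          # e.g. b"HTTP/1.1 200 OK\r\n"
        while await reader.readline() != b"\r\n":
            pass                                  # skip remaining response headers
        if b"200" in status:
            ok += 1
    writer.close()
    return ok

async def ramp_up(host, port, connections, keepalive):
    """Start 10 clients every 10 ms (~1,000 new connections per second)."""
    tasks = []
    while len(tasks) < connections:
        batch = min(10, connections - len(tasks))
        tasks += [asyncio.ensure_future(
                      keepalive_client(host, port, keepalive=keepalive))
                  for _ in range(batch)]
        await asyncio.sleep(0.01)
    return await asyncio.gather(*tasks)
```

The HEAD request is deliberately cheap: the server answers with headers only, so each keepalive probe costs a few hundred bytes in each direction.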
3. Client 2
Identical to client 1, except that its IP is 10.211.55.10.
3. Run and output
1. Server-side system idle:

    # vmstat
    procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
     r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
     0  0      0 1723336  11624  76124    0    0    62     1   26   28  0  0 100  0  0
2. Start Nginx on the server; no external web requests yet:

    # nginx
    # vmstat
    procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
     r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
     0  0      0 1681552  11868  76840    0    0    50     1   24   25  0  0 100  0  0
3. Start clients 1 and 2. Each client opens 50,000 long connections and holds them until the server closes them or they time out:

    (pyvenv) $ python asyncli.py -c 50000 -k 3600 http://10.211.55.8/ &
4. About 2 hours later, check the server side:

    # curl http://127.0.0.1/nginx_status
    Active connections: 100001
    server accepts handled requests
     165539 165539 1095055
    Reading: 0 Writing: 1 Waiting: 100000
    # ps -p 1899 -o pid,%cpu,%mem,rss,comm
      PID %CPU %MEM   RSS COMMAND
     1899  2.0  4.9 94600 nginx

    # vmstat 3
    procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
     r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
     0  0      0 654248  62920 158924    0    0     6     6  361  108  0  1 98  0  0
     0  0      0 654232  62920 158952    0    0     0    85  804  218  0  1 98  0  0
     0  0      0 654108  62928 158976    0    0     0     9  813  214  0  1 98  0  0
     0  0      0 654108  62928 159004    0    0     0     0  803  220  0  1 99  0  0
    ^C
    # free
                 total       used       free     shared    buffers     cached
    Mem:       1918576    1264576     654000          0      62952     159112
    -/+ buffers/cache:    1042512     876064
    Swap:      4128760          0    4128760

4. Summary
1. Nginx's memory consumption per connection is tiny: judging from the RSS reported by ps, each connection costs about 1 KB of physical memory in the Nginx process itself. Most of the memory is consumed by the kernel's TCP buffers.
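A quick back-of-the-envelope check of the ~1 KB figure, using the RSS and connection count reported above:

```python
# Numbers taken from the ps and stub_status output above
rss_kb = 94600        # resident memory of the nginx process, in KB
connections = 100001  # active connections reported by stub_status

per_conn = rss_kb / connections
print("%.2f KB per connection" % per_conn)  # roughly 0.95 KB
```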
2. Maintaining a huge number of mostly idle connections costs Nginx very little CPU. In this test the connections active in any given second average about 0.1% of the total, and CPU usage stays at only about 2%.
3. The best optimization is no optimization. Apart from raising the hard limits on open files and connection counts, the whole test ran without any parameter tuning. Yet a careful calculation shows that each connection occupies less than 10 KB of memory on average, far below the default kernel buffer limits (net.ipv4.tcp_rmem = 4096 87380 4194304 and net.ipv4.tcp_wmem = 4096 16384 4194304).
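One way to estimate the per-connection figure from the vmstat numbers above: take the drop in free memory between "Nginx just started" and "100k connections held", and subtract the growth of buffers and page cache, which is unrelated to the connections. This is a rough estimate under the assumption that no other process grew in the meantime:

```python
# free/buff/cache figures (KB) from the vmstat output above
free_before, buff_before, cache_before = 1681552, 11868, 76840   # nginx just started
free_after,  buff_after,  cache_after  = 654108,  62928, 159004  # 100k connections held

consumed = ((free_before - free_after)
            - (buff_after - buff_before)
            - (cache_after - cache_before))
per_conn_kb = consumed / 100001
print("%.1f KB per connection" % per_conn_kb)  # under 10 KB, matching the claim
```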
4. The main bottleneck in maintaining this kind of connection is available memory. My 2 GB virtual machine could in fact sustain about 150,000 long connections, but my physical machine has no memory left to clone more client VMs :-(
5. Although kernel parameters impose further limits, a server with enough memory should have no problem supporting 1,000,000 connections: at under 10 KB per connection, that is on the order of 10 GB of RAM.
Nginx easily handles 100,000 long connections (CentOS 6.5 x86-64, 2 GB RAM)