Nginx easily manages 100,000 long connections (CentOS 6.5 x86-64, 2 GB of memory)
http://blog.chinaunix.net/xmlrpc.php?r=blog/article&uid=190176&id=4234854
I. Preface

When managing a large number of connections, especially when only a small fraction of them are active, Nginx makes very good use of CPU and memory. In today's multi-device, always-online era this advantage matters even more. This article runs a simple test: have Nginx hold 100,000 HTTP long (keepalive) connections on an ordinary PC virtual machine, then look at how much CPU and memory Nginx and the system consume.

II. Test environment
1. Server
Hardware: dual-core 2.3 GHz, 2 GB RAM
Software: CentOS 6.5, kernel 2.6.32, gcc 4.4.7, nginx 1.4.7
IP: 10.211.55.8
Kernel parameter tuning:
$ /sbin/sysctl -w net.netfilter.nf_conntrack_max=102400   # raise the system-wide connection-tracking limit
$ /sbin/sysctl net.netfilter.nf_conntrack_max              # verify that the setting took effect
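As an aside (not part of the original procedure, which uses /sbin/sysctl directly), the same verification can be scripted by reading /proc/sys, which may be convenient since the test tooling below is Python anyway. A minimal sketch, assuming a reasonably recent Python 3 rather than the 3.3.5 used later in this article:

    import sys
    from pathlib import Path

    def read_sysctl(key: str) -> str:
        """Read a sysctl value by mapping the dotted key onto /proc/sys."""
        return Path("/proc/sys", *key.split(".")).read_text().strip()

    # Keys used in this test; nf_conntrack_max only appears once the
    # nf_conntrack module is loaded.
    for key in ("net.netfilter.nf_conntrack_max", "net.ipv4.ip_local_port_range"):
        try:
            print(key, "=", read_sysctl(key))
        except FileNotFoundError:
            print(key, ": not available on this host", file=sys.stderr)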
nginx is compiled from source with --with-http_stub_status_module. Only the parts that differ from the default configuration are listed:

    worker_rlimit_nofile 102400;

    events {
        worker_connections 102400;
    }

    http {
        # A fairly long timeout so that clients can keep the HTTP keepalive
        # session going by sending an occasional HEAD request.
        keepalive_timeout 3600;

        server {    # the default server block
            # Connection-count monitoring, accessible from the local machine only.
            location /nginx_status {
                stub_status on;
                access_log off;
                allow 127.0.0.1;
                deny all;
            }
        }
    }
2. Client 1
Hardware: dual-core 2.3 GHz, 2 GB RAM
Software: CentOS 6.5, kernel 2.6.32, gcc 4.4.7, Python 3.3.5
IP: 10.211.55.9
Kernel parameter tuning:

$ /sbin/sysctl -w net.ipv4.ip_local_port_range="1024 61024"   # only 50,000 ports are actually used
$ /sbin/sysctl net.ipv4.ip_local_port_range                    # verify that the setting took effect
$ vi /etc/security/limits.conf   # raise the current user's max open files, nofile (hard >= soft > 50000)
$ ulimit -n                      # verify; restarting the shell may be necessary

Python 3.3.5 is compiled from source and set up as follows:

$ pyvenv ~/pyvenv                 # create a virtual environment to keep the test isolated
$ . ~/pyvenv/bin/activate         # activate the virtual environment
(pyvenv) $ python get-pip.py      # get-pip.py downloaded from the pip website
(pyvenv) $ pip install asyncio    # install the asynchronous I/O module

Because Apache ab can only fire off batches of requests and cannot hold connections open, I wrote an HTTP long-connection test tool, asyncli.py; see http://blog.chinaunix.net/uid-190176-id-4223282.html for the implementation details. Basic usage:

(pyvenv) $ python asyncli.py --help
usage: asyncli.py [-h] [-c CONNECTIONS] [-k KEEPALIVE] url

asyncli

positional arguments:
  url                   page address

optional arguments:
  -h, --help            show this help message and exit
  -c CONNECTIONS, --connections CONNECTIONS
                        number of simultaneous connections
  -k KEEPALIVE, --keepalive KEEPALIVE
                        HTTP keepalive timeout

Working mechanism: create 10 connections every 10 milliseconds (roughly 1,000 new connections per second) until the total reaches CONNECTIONS; each connection then sleeps a random number of seconds in [1, KEEPALIVE/2], sends a HEAD request to the server URL to keep the HTTP keepalive session alive, and repeats the sleep/request cycle. A minimal sketch of this mechanism follows.
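For illustration only, here is a minimal asyncio sketch of that mechanism. It is not the real asyncli.py (see the link above for that), and it assumes a modern Python 3.7+ with async/await and asyncio.run, whereas the original test used Python 3.3.5 with the asyncio package from PyPI. The URL and counts mirror the test setup.

    import asyncio
    import random
    from urllib.parse import urlsplit

    URL = "http://10.211.55.8/"   # server under test (from this article)
    CONNECTIONS = 50000           # -c / --connections
    KEEPALIVE = 3600              # -k / --keepalive, matches nginx keepalive_timeout

    async def hold_connection(host, port, path):
        """Open one connection and keep it alive with periodic HEAD requests."""
        reader, writer = await asyncio.open_connection(host, port)
        try:
            while True:
                # Sleep a random 1 .. KEEPALIVE/2 seconds, then send a HEAD
                # request on the same connection to keep the keepalive session up.
                await asyncio.sleep(random.uniform(1, KEEPALIVE / 2))
                writer.write(("HEAD {} HTTP/1.1\r\n"
                              "Host: {}\r\n"
                              "Connection: keep-alive\r\n\r\n").format(path, host).encode())
                await writer.drain()
                await reader.readuntil(b"\r\n\r\n")   # discard the response headers
        finally:
            writer.close()

    async def main():
        parts = urlsplit(URL)
        host, port, path = parts.hostname, parts.port or 80, parts.path or "/"
        tasks = []
        for i in range(CONNECTIONS):
            tasks.append(asyncio.create_task(hold_connection(host, port, path)))
            if i % 10 == 9:                # ramp up 10 connections every 10 ms,
                await asyncio.sleep(0.01)  # i.e. roughly 1,000 new connections/second
        await asyncio.gather(*tasks)

    asyncio.run(main())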
3. Client 2
Exactly the same as Client 1, except its IP is 10.211.55.10.

III. Test run and output
1. Server, system idle

# vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 1723336  11624  76124    0    0    62     1   26   28  0  0 100 0  0
2. Server after nginx is started, no external web requests yet

# nginx
# vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 1681552  11868  76840    0    0    50     1   24   25  0  0 100 0  0
3. Start clients 1 and 2; each opens 50,000 long connections and keeps them until the server closes them or they time out.

(pyvenv) $ python asyncli.py -c 50000 -k 3600 http://10.211.55.8/ &
4. About two hours later... check on the server:

# curl http://127.0.0.1/nginx_status
Active connections: 100001
server accepts handled requests
 165539 165539 1095055
Reading: 0 Writing: 1 Waiting: 100000

# ps -p 1899 -o pid,%cpu,%mem,rss,comm
  PID %CPU %MEM   RSS COMMAND
 1899  2.0  4.9 94600 nginx

# vmstat 3
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 654248  62920 158924    0    0     6     6  361  108  0  1 98  0  0
 0  0      0 654232  62920 158952    0    0     0    85  804  218  0  1 98  0  0
 0  0      0 654108  62928 158976    0    0     0     9  813  214  0  1 98  0  0
 0  0      0 654108  62928 159004    0    0     0     0  803  220  0  1 99  0  0
^C

# free
             total       used       free     shared    buffers     cached
Mem:       1918576    1264576     654000          0      62952     159112
-/+ buffers/cache:    1042512     876064
Swap:      4128760          0    4128760
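As a small convenience (again an assumption, not part of the article's tooling), the manual curl check can be scripted; a minimal sketch that polls the stub_status page configured above and prints the connection counts:

    import re
    import time
    import urllib.request

    STATUS_URL = "http://127.0.0.1/nginx_status"   # the location defined in nginx.conf above

    def poll(interval=3.0):
        """Print active/waiting connection counts from stub_status every few seconds."""
        while True:
            text = urllib.request.urlopen(STATUS_URL).read().decode()
            active = re.search(r"Active connections:\s*(\d+)", text).group(1)
            waiting = re.search(r"Waiting:\s*(\d+)", text).group(1)
            print("active={} waiting={}".format(active, waiting))
            time.sleep(interval)

    poll()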
IV. Summary

1. Nginx's own per-connection memory footprint is very small. Judging from the RSS reported by ps, each connection costs roughly 1 KB of nginx's physical memory (94,600 KB / 100,001 connections ≈ 0.95 KB); most of the memory is consumed by the kernel's TCP buffers.
2. Maintaining a large number of mostly idle connections costs nginx very little CPU. In this test the number of connections active in any given second averages about one per thousand of the total, and nginx's CPU usage above is only 2%.
3. The best optimization is no optimization. Apart from raising the hard limits on open files and connection counts, the whole test uses no parameter tuning; yet a careful calculation shows the average memory per connection is under 10 KB, far below the default kernel buffer sizes (net.ipv4.tcp_rmem = 4096 87380 4194304 and net.ipv4.tcp_wmem = 4096 16384 4194304). A back-of-the-envelope recomputation follows after this list.
4. The main bottleneck for holding this kind of connection is available memory. My 2 GB virtual machine could in fact support about 150,000 long connections, but my physical machine did not have enough memory left to clone more client virtual machines :-(
5. Although more kernel parameters would need adjusting, a server with plenty of memory should have no trouble supporting one million connections; at roughly 10 KB per connection that is on the order of 10 GB of memory.
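For reference, here is a back-of-the-envelope recomputation of the per-connection figures quoted in points 1 and 3, using the numbers reported in the ps/vmstat/free output above (all values in KB). The split is approximate, since the rest of the system was not completely idle:

    connections = 100001

    nginx_rss_kb = 94600                  # nginx worker RSS from ps
    # free + buffers + cache right after nginx started (vmstat, section III.2)
    available_before_kb = 1681552 + 11868 + 76840
    # "-/+ buffers/cache" free while holding 100k connections (free, section III.4)
    available_under_load_kb = 876064

    print("nginx user-space memory per connection: %.2f KB"
          % (nginx_rss_kb / connections))                                      # ~0.95 KB
    print("total memory per connection (mostly kernel TCP state): %.2f KB"
          % ((available_before_kb - available_under_load_kb) / connections))   # ~8.9 KB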