Some Nginx optimizations (exceeding 100,000 concurrency)

Source: Internet
Author: User

Nginx directive optimization (configuration file)
worker_processes 8; the number of nginx worker processes. It is recommended to set this to the number of CPU cores, or a multiple of it.
 
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000; binds each worker process to a CPU. The example above pins eight processes to eight CPUs. You can also bind several processes to one CPU, or one process to several CPUs.
 
worker_rlimit_nofile 102400; the maximum number of file descriptors an nginx worker process may open. In theory this should be the system's maximum number of open files (ulimit -n) divided by the number of worker processes, but nginx does not distribute requests that evenly, so it is best to keep it equal to the ulimit -n value.
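Since worker_rlimit_nofile should track the shell's open-file limit, a quick sanity check is worthwhile before editing nginx.conf. A minimal sketch (the 102400 figure is the value used in this article):

```shell
# Compare the current open-file limit with the worker_rlimit_nofile
# value planned for nginx.conf (102400, as in this article).
configured=102400
current=$(ulimit -n)
if [ "$current" != "unlimited" ] && [ "$current" -lt "$configured" ]; then
  echo "ulimit -n is $current; raise it to at least $configured"
else
  echo "ulimit -n is $current; worker_rlimit_nofile $configured is covered"
fi
```

On most Linux systems the limit is raised persistently via /etc/security/limits.conf rather than per shell.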
 
use epoll; use the epoll I/O event model.
 
worker_connections 102400; the maximum number of connections allowed per worker process. In theory, the maximum number of connections an nginx server can handle is worker_processes * worker_connections.
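The capacity formula above is simple multiplication; a quick sketch with this article's values:

```shell
# Theoretical connection ceiling = worker_processes * worker_connections.
worker_processes=8
worker_connections=102400
max_clients=$((worker_processes * worker_connections))
echo "$max_clients"   # 819200
```

For a reverse proxy, each client connection also consumes an upstream connection, so the practical ceiling is roughly half of this figure.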
 
keepalive_timeout 60; the keepalive timeout, in seconds.
 
client_header_buffer_size 4k; the buffer size for the client request header, which can be set according to your system page size. The header of a typical request is under 1k, but since the system page size is generally larger than 1k, the buffer is set to the page size here. The page size can be obtained with the command getconf PAGESIZE.
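The page size mentioned above can be checked directly; a minimal sketch:

```shell
# Print the system memory page size in bytes; client_header_buffer_size
# is chosen to match it (4096 bytes = 4k on most x86-64 Linux systems).
getconf PAGESIZE
```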
 
open_file_cache max=102400 inactive=20s; enables the open-file cache, which is off by default. max sets the number of cached entries; it is recommended to match the maximum number of open files. inactive is how long a file may go unrequested before its cache entry is removed.
 
open_file_cache_valid 30s; how often to check the cached entries for validity.
 
open_file_cache_min_uses 1; the minimum number of times a file must be used within the inactive window of the open_file_cache directive for its descriptor to stay open in the cache. In the example above, if a file is not used even once within the inactive time, it is removed.
 
Kernel parameter optimization
net.ipv4.tcp_max_tw_buckets = 6000 sets the maximum number of TIME-WAIT sockets the system keeps; the default is 180000.
 
net.ipv4.ip_local_port_range = 1024 65000 sets the range of local ports the system is allowed to use.
 
net.ipv4.tcp_tw_recycle = 1 enables fast recycling of TIME-WAIT sockets. (Note that this option is known to break connections from clients behind NAT, and it was removed entirely in Linux 4.12.)
 
net.ipv4.tcp_tw_reuse = 1 enables reuse, allowing TIME-WAIT sockets to be reused for new TCP connections.
 
net.ipv4.tcp_syncookies = 1 enables SYN cookies: when the SYN queue overflows, cookies are used to handle the connection attempts.
 
net.core.somaxconn = 262144 raises the cap on the backlog of the listen() call in a web application. The kernel parameter net.core.somaxconn limits it to 128 by default, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised.
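To see the limit currently in force before raising it, read it from /proc (a sketch; the path is standard on Linux):

```shell
# The effective listen() backlog cap; many older distributions default
# to 128, newer kernels to 4096.
somaxconn=$(cat /proc/sys/net/core/somaxconn 2>/dev/null || echo unknown)
echo "$somaxconn"
```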
 
net.core.netdev_max_backlog = 262144 sets the maximum number of packets queued on a network interface when it receives packets faster than the kernel can process them.
 
net.ipv4.tcp_max_orphans = 262144 sets the maximum number of TCP sockets in the system that are not attached to any user file handle. Beyond this number, orphaned connections are reset immediately and a warning is printed. This limit exists only to prevent simple DoS attacks; do not rely on it too much or artificially lower the value, though you may raise it (if you add memory).
 
net.ipv4.tcp_max_syn_backlog = 262144 sets the maximum number of recorded connection requests that have not yet been acknowledged by the client. The default is 1024 for systems with 128 MB of memory or more, and 128 for low-memory systems.
 
net.ipv4.tcp_timestamps = 0 disables timestamps. Timestamps guard against sequence number wraparound: on a 1 Gbit/s link you will certainly encounter sequence numbers that have been used before, and the timestamp lets the kernel accept such "abnormal" packets. Here it is turned off.
 
net.ipv4.tcp_synack_retries = 1 sets how many SYN+ACK packets the kernel sends before giving up on a connection. To open the peer's side of a connection, the kernel sends a SYN carrying an ACK for the earlier SYN, i.e. the second step of the three-way handshake.
 
net.ipv4.tcp_syn_retries = 1 sets how many SYN packets the kernel sends before giving up on establishing a connection.
 
net.ipv4.tcp_fin_timeout = 1 determines how long a socket closed on our side remains in the FIN-WAIT-2 state. The peer may misbehave and never close its side, or even crash unexpectedly. The default is 60 seconds; the usual value on 2.2 kernels was 180 seconds. You can keep that setting, but remember that even on a lightly loaded web server there is a risk of memory exhaustion from a large number of dead sockets. FIN-WAIT-2 is less dangerous than FIN-WAIT-1, because each such socket consumes at most a small amount of memory, but these sockets live longer.
 
net.ipv4.tcp_keepalive_time = 30 sets how often TCP sends keepalive probes when keepalive is enabled. The default is 2 hours.
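These settings are typically collected into a sysctl configuration file and loaded with sysctl -p. A sketch that writes a few of the values above to a temporary file (applying them for real requires root, so the sysctl -p line is left commented out):

```shell
# Persisting kernel parameters: write them to a conf file, then load
# them with `sysctl -p <file>` (the actual load requires root).
conf=$(mktemp)
cat > "$conf" <<'EOF'
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_syncookies = 1
net.core.somaxconn = 262144
net.core.netdev_max_backlog = 262144
EOF
lines=$(grep -c '=' "$conf")
echo "$lines entries written to $conf"
# sudo sysctl -p "$conf"   # load the values (root only)
rm -f "$conf"
```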
 
A complete kernel optimization configuration
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000

A simple nginx optimization configuration file
user www;
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
error_log /www/log/nginx_error.log crit;
pid /usr/local/nginx.pid;
worker_rlimit_nofile 204800;

events
{
use epoll;
worker_connections 204800;
}
 
http
{
include mime.types;
default_type application/octet-stream;

charset utf-8;

server_names_hash_bucket_size 128;
client_header_buffer_size 2k;
large_client_header_buffers 4 4k;
client_max_body_size 8m;

sendfile on;
tcp_nopush on;

keepalive_timeout 60;

fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=
keys_zone=TEST:10m
inactive=5m;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_buffer_size 16k;
fastcgi_buffers 16 16k;
fastcgi_busy_buffers_size 16k;
fastcgi_temp_file_write_size 16k;
fastcgi_cache TEST;
fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;
fastcgi_cache_min_uses 1;
fastcgi_cache_use_stale error timeout invalid_header http_500;

open_file_cache max=204800 inactive=20s;
open_file_cache_min_uses 1;
open_file_cache_valid 30s;

tcp_nodelay on;

gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;

server
{
listen 8080;
server_name ad.test.com;
index index.php index.htm;
root /www/html/;

location /status
{
stub_status on;
}

location ~ .*\.(php|php5)?$
{
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fcgi.conf;
}

location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|js|css)$
{
expires 30d;
}

log_format access '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" $http_x_forwarded_for';
access_log /www/log/access.log access;
}
}

Several FastCGI directives
fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels= keys_zone=TEST:10m inactive=5m; this directive sets a path for the FastCGI cache, the directory hierarchy levels, the name and memory size of the key zone, and the inactivity time after which entries are removed.
 
fastcgi_connect_timeout 300; the timeout for connecting to the backend FastCGI server.
 
fastcgi_send_timeout 300; the timeout for sending a request to the FastCGI server, counted after the handshake has completed.
 
fastcgi_read_timeout 300; the timeout for receiving the FastCGI response, counted after the handshake has completed.
 
fastcgi_buffer_size 16k; the buffer size for reading the first part of the FastCGI response. This can be set to the buffer size specified by the fastcgi_buffers directive. The directive above tells nginx to use one 16k buffer to read the first part of the response, i.e. the response header. In practice the response header is usually very small (no more than 1k), but if you specify a buffer size in fastcgi_buffers, a buffer of that size will be allocated to cache it anyway.
 
fastcgi_buffers 16 16k; the number and size of the local buffers used to buffer FastCGI responses. As configured above, if a PHP script produces a 256k page, 16 buffers of 16k are allocated for it; if the page is larger than 256k, the part above 256k is cached to the path specified by fastcgi_temp. This is of course unwise for server load, since data is processed faster in memory than on disk. In general, set this value to the median page size produced by the PHP scripts on your site: if most pages are around 256k, you could use 16 16k, 4 64k, or 64 4k, but the last two are clearly worse. If a generated page is only 32k, with 4 64k one whole 64k buffer is allocated for it, and with 64 4k eight 4k buffers are allocated, whereas with 16 16k the page is cached with just two 16k buffers, which seems more reasonable.
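The buffer arithmetic in the paragraph above can be checked directly; a sketch using the article's figures:

```shell
# How many fastcgi_buffers of a given size a page occupies
# (integer ceiling division: page size / buffer size, rounded up).
buffers_needed() {
  page_kb=$1; buf_kb=$2
  echo $(( (page_kb + buf_kb - 1) / buf_kb ))
}
buffers_needed 256 16   # a 256k page fills all 16 of the 16k buffers
buffers_needed 32 16    # a 32k page uses only 2 of them
buffers_needed 32 64    # with 4 64k buffers, one whole buffer is taken
```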
 
fastcgi_busy_buffers_size 32k; I do not know exactly what this directive does; I only know that the default is twice fastcgi_buffers.
 
fastcgi_temp_file_write_size 32k; the size of the data blocks used when writing to fastcgi_temp_path. The default is twice fastcgi_buffers.
 
fastcgi_cache TEST; enables the FastCGI cache and gives it a name. I personally find enabling the cache very useful: it can effectively reduce CPU load and prevent 502 errors. But it can also cause many problems, because it caches dynamic pages, so whether to use it depends on your own needs.
 
fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m; sets the cache time for each response code. In the example above, 200 and 302 responses are cached for one hour, 301 responses for one day, and everything else for one minute.
 
fastcgi_cache_min_uses 1; the minimum number of times an entry must be used within the inactive window of the fastcgi_cache_path directive. In the example above, if a file is not used once within five minutes, it is removed.
 
fastcgi_cache_use_stale error timeout invalid_header http_500; I am not sure of this parameter's exact function; I believe it tells nginx in which cases a stale cached response may be served. Those are the FastCGI-related parameters in nginx. In addition, FastCGI itself has some settings worth optimizing: if you use php-fpm to manage FastCGI, you can modify the following values in its configuration file:
 
60: the number of concurrent requests to handle simultaneously, i.e. php-fpm will start at most 60 child processes to handle concurrent connections.

102400: the maximum number of open files.

204800: the maximum number of requests a process may serve before it is respawned.
 
Several test results

For the static page (squid reached 4W, i.e. 40,000, concurrency for me), I took the test file mentioned in that article and ran webbench -c 30000 -t 600 http://ad.test.com:8080/index.html on six machines at the same time:

Number of connections filtered by netstat:

Result of the PHP page in status (the PHP page calls phpinfo):

Number of PHP page connections after netstat filtering:

Server load before the FastCGI cache is used:

The PHP page is hard to open; you need to refresh several times before it loads. The load on CPU 0 is low because during the test the NIC interrupt requests were all assigned to CPU 0, while seven nginx processes were bound to CPUs 1-7.
After the FastCGI cache is enabled:

Now the PHP page opens easily.

This test was not connected to any database, so it has little reference value. I do not know whether the setup above reached its limit; judging by memory and CPU usage it did not seem to, but I had no spare machines left to run webbench from.



References

http://www.2cto.com/os/201202/117974.html