Detailed Nginx optimization to cope with high concurrency

Source: Internet
Author: User
This article introduces, in detail, how to optimize Nginx to cope with high concurrency.

Nginx directive optimization (configuration file)

worker_processes 8;

The number of nginx worker processes. It is recommended to set this to the number of CPU cores, or a multiple of it.
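
On Linux you can look up the core count directly — a minimal sketch, assuming coreutils' nproc is available:

```shell
# Number of CPU cores available to this process.
cores=$(nproc)
# One worker per core is the usual starting point.
echo "worker_processes $cores;"
```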

worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;

Binds each worker process to a CPU. The example above binds eight processes to eight CPUs one-to-one; you can also bind several processes to one CPU, or one process to several CPUs.
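
The bitmasks above can be generated mechanically. A small sketch (assuming bash and coreutils' seq; the core count of 4 is an example value) that builds one one-hot mask per worker:

```shell
# Build one-hot CPU affinity masks: worker i gets only bit i set.
cpus=4                                     # example core count (assumption)
masks=""
for i in $(seq 0 $((cpus - 1))); do
  mask=""
  for j in $(seq $((cpus - 1)) -1 0); do   # most significant bit first
    if [ "$j" -eq "$i" ]; then mask="${mask}1"; else mask="${mask}0"; fi
  done
  masks="$masks $mask"
done
echo "worker_cpu_affinity${masks};"
```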

worker_rlimit_nofile 102400;

This directive sets the maximum number of file descriptors a single nginx process may open. In theory it should be the system's maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep it equal to the ulimit -n value.
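
The arithmetic behind that recommendation, as a small shell sketch (the numbers are this article's example values):

```shell
# Example values from this article.
ulimit_n=102400   # system open-file limit per process (ulimit -n)
workers=8
# An even split would give each worker this many descriptors...
even_share=$((ulimit_n / workers))
echo "even per-worker share: $even_share"
# ...but because requests are not spread evenly across workers, the article
# recommends setting worker_rlimit_nofile to the full ulimit -n value.
echo "worker_rlimit_nofile $ulimit_n;"
```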

use epoll;

Use the epoll I/O event model.

worker_connections 102400;

The maximum number of connections allowed per worker process. In theory, the maximum number of concurrent connections an nginx server can hold is worker_processes * worker_connections.
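
The ceiling above can be computed directly; the reverse-proxy halving shown below is a common rule of thumb (each proxied client also costs an upstream connection), not something stated in the original text:

```shell
worker_processes=8
worker_connections=102400
# Plain web server: every client costs one connection.
max_clients=$((worker_processes * worker_connections))
echo "max clients (static serving): $max_clients"
# Reverse proxy (rule of thumb): each client also needs an upstream
# connection, so halve the figure.
proxy_clients=$((max_clients / 2))
echo "max clients (proxying): $proxy_clients"
```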

keepalive_timeout 60;

Keepalive timeout, in seconds.

client_header_buffer_size 4k;

The buffer size for the client request header, which can be set according to your system's page size. The header of a request usually does not exceed 1 k, but since system memory is paged in page-size units, it makes sense to set this to the page size. You can obtain the page size with the command getconf PAGESIZE.
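
You can query the page size and derive a matching buffer setting like this (assumes Linux with glibc's getconf; 4 k is typical on x86):

```shell
# Ask the system for its memory page size.
page=$(getconf PAGESIZE)
echo "page size: $page bytes"
# A client_header_buffer_size matching one page, expressed in k.
echo "client_header_buffer_size $((page / 1024))k;"
```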

open_file_cache max=102400 inactive=20s;

Enables caching of open file descriptors; the cache is disabled by default. max specifies the maximum number of cached entries, and inactive specifies how long a file may go unrequested before its cache entry is deleted.

open_file_cache_valid 30s;

Specifies how often the cached entries are checked for validity.

open_file_cache_min_uses 1;

Sets the minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to stay open in the cache. In the example above, a file that is not used at least once within the inactive time is removed from the cache.

Kernel parameter optimization

net.ipv4.tcp_max_tw_buckets = 6000

The maximum number of TIME-WAIT sockets held simultaneously; the default is 180000.

net.ipv4.ip_local_port_range = 1024    65000

The range of local ports the system may use for outgoing connections.
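
This range bounds how many simultaneous outgoing connections one local IP can hold toward a single destination — worth computing when nginx proxies to backends:

```shell
# The port range set above.
low=1024
high=65000
ports=$((high - low + 1))
echo "usable ephemeral ports: $ports"
```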

net.ipv4.tcp_tw_recycle = 1

Enable fast recycling of TIME-WAIT sockets. (Note: tcp_tw_recycle depends on TCP timestamps, which this article disables below; it is also unsafe for clients behind NAT, and it was removed entirely in Linux 4.12.)

net.ipv4.tcp_tw_reuse = 1

Enable reuse. Allow TIME-WAIT sockets to be re-used for a new TCP connection.

net.ipv4.tcp_syncookies = 1

Enable SYN cookies: when the SYN wait queue overflows, handle new connections with cookies.

net.core.somaxconn = 262144

By default the kernel parameter net.core.somaxconn caps the backlog passed to listen() in web applications at 128. Since nginx defines NGX_LISTEN_BACKLOG as 511 by default, this kernel value needs to be raised.
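
A tiny sketch of the clamping behaviour described above — the kernel takes the minimum of the application's requested backlog and somaxconn (values are the defaults named in the text):

```shell
nginx_backlog=511   # nginx's NGX_LISTEN_BACKLOG default
somaxconn=128       # kernel default for net.core.somaxconn
# The kernel silently caps listen() backlogs at somaxconn.
effective=$(( nginx_backlog < somaxconn ? nginx_backlog : somaxconn ))
echo "effective listen backlog: $effective"
```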

net.core.netdev_max_backlog = 262144

The maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.

net.ipv4.tcp_max_orphans = 262144

The maximum number of TCP sockets in the system that are not attached to any user file handle. If this number is exceeded, orphaned connections are reset immediately and a warning is printed. This limit exists only to prevent simple DoS attacks; do not rely on it or lower it artificially. If anything, increase this value (after adding memory).

net.ipv4.tcp_max_syn_backlog = 262144

The maximum number of remembered connection requests that have not yet received an acknowledgment from the client. The default is 1024 for systems with more than 128 MB of memory, and 128 for low-memory systems.

net.ipv4.tcp_timestamps = 0

TCP timestamps help avoid sequence-number wraparound: on a 1 Gbit/s link, previously used sequence numbers can reappear quickly, and the timestamp lets the kernel accept such "abnormal" packets. They are disabled here.

net.ipv4.tcp_synack_retries = 1

To accept a connection from a peer, the kernel must send a SYN+ACK in response to the peer's SYN — the second step of the three-way handshake. This setting determines how many times the kernel retransmits the SYN+ACK before giving up on the connection.

net.ipv4.tcp_syn_retries = 1

The number of SYN retransmissions before the kernel gives up establishing an outgoing connection.

net.ipv4.tcp_fin_timeout = 1

If the socket was closed by the local end, this parameter determines how long it remains in the FIN-WAIT-2 state. The peer may misbehave and never close its side of the connection, or may even crash unexpectedly. The default is 60 seconds; 2.2-series kernels usually used 180 seconds. You can keep that setting, but remember that even a lightly loaded web server risks memory exhaustion from a flood of dead sockets. FIN-WAIT-2 is less dangerous than FIN-WAIT-1, because each such socket consumes very little memory — but these sockets can live longer.

net.ipv4.tcp_keepalive_time = 30

How often TCP sends keepalive probes when keepalive is enabled. The default is 2 hours; here it is lowered to 30 seconds.

A complete kernel optimization configuration

net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000

A simple nginx optimization configuration file

user  www www;
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
error_log  /www/log/nginx_error.log  crit;
pid        /usr/local/nginx/nginx.pid;
worker_rlimit_nofile 204800;

events
{
  use epoll;
  worker_connections 204800;
}

http
{
  include       mime.types;
  default_type  application/octet-stream;
  charset  utf-8;

  server_names_hash_bucket_size 128;
  client_header_buffer_size 2k;
  large_client_header_buffers 4 4k;
  client_max_body_size 8m;

  sendfile on;
  tcp_nopush     on;

  keepalive_timeout 60;

  fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=TEST:10m inactive=5m;
  fastcgi_connect_timeout 300;
  fastcgi_send_timeout 300;
  fastcgi_read_timeout 300;
  fastcgi_buffer_size 16k;
  fastcgi_buffers 16 16k;
  fastcgi_busy_buffers_size 16k;
  fastcgi_temp_file_write_size 16k;
  fastcgi_cache TEST;
  fastcgi_cache_valid 200 302 1h;
  fastcgi_cache_valid 301 1d;
  fastcgi_cache_valid any 1m;
  fastcgi_cache_min_uses 1;
  fastcgi_cache_use_stale error timeout invalid_header http_500;

  open_file_cache max=204800 inactive=20s;
  open_file_cache_min_uses 1;
  open_file_cache_valid 30s;

  tcp_nodelay on;

  gzip on;
  gzip_min_length  1k;
  gzip_buffers     4 16k;
  gzip_http_version 1.0;
  gzip_comp_level 2;
  gzip_types       text/plain application/x-javascript text/css application/xml;
  gzip_vary on;

  server
  {
    listen       8080;
    server_name  ad.test.com;
    index index.php index.htm;
    root  /www/html/;

    location /status
    {
      stub_status on;
    }

    location ~ .*\.(php|php5)?$
    {
      fastcgi_pass 127.0.0.1:9000;
      fastcgi_index index.php;
      include fcgi.conf;
    }

    location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|js|css)$
    {
      expires      30d;
    }

    log_format  access  '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" $http_x_forwarded_for';
    access_log  /www/log/access.log  access;
  }
}

Several FastCGI-related directives

fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=TEST:10m inactive=5m;

This directive sets the path for the FastCGI cache, the directory hierarchy levels, the name and size of the shared memory zone that stores cache keys (keys_zone), and the inactivity time after which unused entries are deleted.
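
To see how levels=1:2 shapes the on-disk layout: nginx names each cache file after the MD5 of its cache key, takes the last hex character of that hash as the first-level directory, and the two characters before it as the second level. A sketch (the key string is made up for illustration, not nginx's exact key format; requires md5sum from coreutils):

```shell
# Illustrative cache key (assumption: not nginx's exact key format).
key="httpGETad.test.com/index.php"
# nginx names the cache file after the MD5 of the key.
hash=$(printf '%s' "$key" | md5sum | awk '{print $1}')
# levels=1:2 -> last 1 hex char, then the 2 chars before it.
last1=$(printf '%s' "$hash" | cut -c32)
mid2=$(printf '%s' "$hash" | cut -c30-31)
echo "/usr/local/nginx/fastcgi_cache/$last1/$mid2/$hash"
```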

fastcgi_connect_timeout 300;

The timeout for connecting to the backend FastCGI server.

fastcgi_send_timeout 300;

The timeout for sending a request to the FastCGI server — that is, the time limit for transmitting the request after the connection has been established.

fastcgi_read_timeout 300;

The timeout for receiving a response from the FastCGI server — the time limit for reading the response after the connection has been established.

fastcgi_buffer_size 16k;

Specifies the buffer size for reading the first part of the FastCGI response — the response header. As configured above, a 16 k buffer is used to read it. In practice the response header is usually very small (no more than 1 k), but a buffer of the size given here is allocated for it regardless; it is common to set it equal to one buffer of the fastcgi_buffers directive.

fastcgi_buffers 16 16k;

Specifies how many local buffers of what size are used to buffer FastCGI responses. As configured above, a 256 kB page generated by a PHP script will be held in sixteen 16 k buffers; anything beyond 256 k is written to the path specified by fastcgi_temp_path. That is, of course, bad for server load, because data is processed much faster in memory than on disk, so this value should generally be chosen around the typical page size your PHP scripts generate. For example, if most pages on your site are around 256 k, you could set 16 16k, 4 64k, or 64 4k — but the last two are clearly poor choices: for a page of only 32 k, 4 64k would allocate one whole 64 k buffer, and 64 4k would allocate eight 4 k buffers, whereas 16 16k allocates two 16 k buffers, which seems more reasonable.
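
A quick check of the buffer arithmetic in the paragraph above (the values come from this article's example configuration):

```shell
buf_kb=16
buf_count=16                     # fastcgi_buffers 16 16k
response_kb=256
# Buffers needed for a 256 kB page (rounded up).
needed=$(( (response_kb + buf_kb - 1) / buf_kb ))
echo "buffers used for ${response_kb}k page: $needed of $buf_count"
# A larger page overflows the buffers and spills to fastcgi_temp_path on disk.
big_kb=300
spill_kb=$(( big_kb - buf_kb * buf_count ))
echo "disk spill for ${big_kb}k page: ${spill_kb}k"
```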

fastcgi_busy_buffers_size 32k;

This limits the total size of buffers that may be busy sending data to the client while the response has not yet been read in full; the default is twice the fastcgi_buffer_size value.

fastcgi_temp_file_write_size 32k;

The size of the data blocks used when writing to fastcgi_temp_path; by default it is twice the fastcgi_buffers buffer size.

fastcgi_cache TEST;

Enables the FastCGI cache and names the zone to use. In my experience enabling the cache is very useful: it can effectively reduce CPU load and prevent 502 errors. However, it can also cause many problems because it caches dynamic pages, so whether to use it depends on your own needs.

fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;

Sets the cache time for particular response codes. In the example above, 200 and 302 responses are cached for one hour, 301 responses for one day, and everything else for one minute.

fastcgi_cache_min_uses 1;

Sets how many times a response must be requested before it is cached. Combined with the inactive parameter of fastcgi_cache_path above, a cached entry that is not accessed at least once within five minutes is removed.

fastcgi_cache_use_stale error timeout invalid_header http_500;

This directive tells nginx in which cases a stale cached response may be served — here, when the FastCGI server returns an error, times out, sends an invalid header, or responds with a 500. Those are the FastCGI-related parameters in nginx. FastCGI itself also has settings worth optimizing; if you use php-fpm to manage FastCGI, you can modify the following values in its configuration file:

 
max_children = 60
 

The number of requests handled concurrently — php-fpm will start at most 60 child processes to handle concurrent connections.

 
rlimit_files = 102400
 

The maximum number of open file descriptors per child process.

 
max_requests = 204800
 

The maximum number of requests a child process serves before it is restarted.

Several test results

The static page is the test file mentioned in my squid configuration article — the one that reached 4 W (40,000) concurrent connections. I ran the command webbench -c 30000 -t 600 http://ad.test.com:8080/index.html on six machines at the same time:

Number of connections filtered by netstat:

Result on the /status page for the php page (the php page calls phpinfo):

Number of php page connections after netstat filtering:

Server load before FastCGI cache is used:

It was difficult to open the php page; you had to refresh several times before it loaded. The load on CPU 0 is low because the NIC interrupt requests were all assigned to CPU 0 during the test, while nginx's seven other worker processes ran on CPUs 1-7.

Server load after the FastCGI cache is enabled:

In this case, you can easily open the php page.

This test was not connected to any database, so its reference value is limited. I do not know whether the test above reached the machine's limit — judging by memory and CPU usage it did not — but I had no spare machines available to run more webbench instances.
