Nginx optimization in detail: handling high concurrency

Optimizations in the nginx configuration file
worker_processes 8;

The number of nginx worker processes. It is recommended to set this according to the number of CPUs, generally equal to it or a multiple of it.
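To follow the suggestion above, the worker count can be derived from the CPU count on Linux. A small sketch (nproc is standard coreutils; matching workers one-to-one with CPUs is just the article's baseline recommendation):

```shell
# Count online CPUs and emit a matching worker_processes directive.
cpus=$(nproc)
echo "worker_processes $cpus;"
```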

worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;

Binds each worker process to a CPU. In the example above, 8 processes are bound to 8 CPUs. The masks can also be combined, for example binding one process to several CPUs.
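As a hedged illustration of combining masks (this variant is not from the article): 4 workers on 8 CPUs, each worker allowed to run on a pair of cores:

```nginx
# 4 workers, each pinned to two adjacent CPUs (bit i in the mask = CPU i).
worker_processes 4;
worker_cpu_affinity 00000011 00001100 00110000 11000000;
```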

worker_rlimit_nofile 102400;

This directive sets the maximum number of file descriptors a single nginx process may open. In theory it should be the system limit on open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep it consistent with the value of ulimit -n.
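To check the value the article says this should match:

```shell
# Show the per-process open-file-descriptor limit for the current shell.
ulimit -n
```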

use epoll;

Use the epoll I/O event model; on Linux this goes without saying.

worker_connections 102400;

The maximum number of connections allowed per worker process. In theory, the maximum number of connections an nginx server can handle is worker_processes * worker_connections.
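With the values shown in the article, the theoretical ceiling works out as:

```shell
# Theoretical max clients = worker_processes * worker_connections
workers=8
connections=102400
echo $((workers * connections))   # → 819200
```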

keepalive_timeout 60;

Keepalive timeout, in seconds.

client_header_buffer_size 4k;

The buffer size for the client request header. This can be set based on your system's page size: a request header generally does not exceed 1k, but since the system page size is usually larger than 1k, it is set to the page size here. The page size can be obtained with the command getconf PAGESIZE.
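The command mentioned above reports the page size in bytes:

```shell
# Print the system memory page size in bytes (commonly 4096 on x86).
getconf PAGESIZE
```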

open_file_cache max=102400 inactive=20s;

Enables caching for open files, which is off by default. max specifies the number of cache entries; it is recommended to match the number of files. inactive specifies how long a file may go unrequested before its cache entry is removed.

open_file_cache_valid 30s;

How often to check the cached entries for validity.

open_file_cache_min_uses 1;

The minimum number of times a file must be used within the inactive period of the open_file_cache directive. Once this count is reached, the file descriptor stays open in the cache; as in the example above, if a file is not used even once within the inactive time, it is removed.

Optimization of kernel parameters

net.ipv4.tcp_max_tw_buckets = 6000

The maximum number of TIME_WAIT sockets; the default is 180000.

net.ipv4.ip_local_port_range = 1024 65000

The range of local ports the system may use for outgoing connections.

net.ipv4.tcp_tw_recycle = 1

Enables fast recycling of TIME_WAIT sockets. (Note that this option is known to break clients behind NAT and was removed in later kernels.)

net.ipv4.tcp_tw_reuse = 1

Enables reuse, allowing TIME_WAIT sockets to be reused for new TCP connections.

net.ipv4.tcp_syncookies = 1

Enables SYN cookies: when the SYN backlog queue overflows, cookies are used to handle the excess.

net.core.somaxconn = 262144

The backlog argument of the listen() call in a web application is capped by the kernel parameter net.core.somaxconn, which defaults to 128; nginx defines NGX_LISTEN_BACKLOG with a default of 511, so this value needs to be raised.
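On the nginx side, the accept backlog can be raised to match by setting it explicitly on the listen directive. A sketch, reusing the port from the sample configuration and mirroring the somaxconn value above:

```nginx
# Request a larger accept queue than nginx's 511 default
# (still capped by net.core.somaxconn).
listen 8080 backlog=262144;
```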

net.core.netdev_max_backlog = 262144

The maximum number of packets allowed to queue on a network interface when packets arrive faster than the kernel can process them.

net.ipv4.tcp_max_orphans = 262144

The maximum number of TCP sockets in the system not attached to any user file handle. If this number is exceeded, orphaned connections are reset immediately and a warning is printed. This limit exists only to prevent simple DoS attacks; do not rely on it too heavily or lower it artificially. If anything, increase it (after adding memory).

net.ipv4.tcp_max_syn_backlog = 262144

The maximum number of recorded connection requests that have not yet received an acknowledgment from the client. The default is 1024 for systems with 128M of memory, and 128 for low-memory systems.

net.ipv4.tcp_timestamps = 0

Timestamps guard against sequence-number wraparound: a 1Gbps link is certain to encounter previously used sequence numbers, and the timestamp lets the kernel accept such "abnormal" packets. Here we turn it off.

net.ipv4.tcp_synack_retries = 1

To open a connection toward the peer, the kernel sends a SYN with an ACK acknowledging the earlier SYN, the second step of the so-called three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.

net.ipv4.tcp_syn_retries = 1

The number of SYN packets sent before the kernel gives up on establishing the connection.

net.ipv4.tcp_fin_timeout = 1

This parameter determines how long a socket stays in FIN-WAIT-2 when closed by the local side. The peer may misbehave and never close its side, or even crash unexpectedly. The default is 60 seconds; the 2.2-series kernels normally used 180 seconds. You can keep that setting, but bear in mind that even on a lightly loaded web server there is a risk of memory exhaustion from large numbers of dead sockets. FIN-WAIT-2 is less dangerous than FIN-WAIT-1, because each such socket eats at most 1.5K of memory, but they live longer.

net.ipv4.tcp_keepalive_time = 30

How often TCP sends keepalive messages when keepalive is enabled. The default is 2 hours.

A complete kernel-optimized configuration

net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000
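These values only take effect once loaded. A sketch of one way to apply them, assuming a conventional /etc/sysctl.d layout (the drop-in filename is hypothetical, and the privileged commands are left commented out):

```shell
# Write a couple of the settings to a drop-in file, then load with sysctl -p.
cat > /tmp/99-nginx-tuning.conf <<'EOF'
net.core.somaxconn = 262144
net.ipv4.tcp_max_syn_backlog = 262144
EOF
# sudo cp /tmp/99-nginx-tuning.conf /etc/sysctl.d/99-nginx-tuning.conf
# sudo sysctl -p /etc/sysctl.d/99-nginx-tuning.conf
wc -l < /tmp/99-nginx-tuning.conf
```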
A simple nginx optimization configuration file
user www www;
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000;
error_log /www/log/nginx_error.log crit;
pid /usr/local/nginx/nginx.pid;
worker_rlimit_nofile 204800;

events {
    use epoll;
    worker_connections 204800;
}

http {
    include mime.types;
    default_type application/octet-stream;
    charset utf-8;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 2k;
    large_client_header_buffers 4 4k;
    client_max_body_size 8m;
    sendfile on;
    tcp_nopush on;
    keepalive_timeout 60;

    fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=test:10m inactive=5m;
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 16k;
    fastcgi_buffers 16 16k;
    fastcgi_busy_buffers_size 16k;
    fastcgi_temp_file_write_size 16k;
    fastcgi_cache test;
    fastcgi_cache_valid 200 302 1h;
    fastcgi_cache_valid 301 1d;
    fastcgi_cache_valid any 1m;
    fastcgi_cache_min_uses 1;
    fastcgi_cache_use_stale error timeout invalid_header http_500;

    open_file_cache max=204800 inactive=20s;
    open_file_cache_min_uses 1;
    open_file_cache_valid 30s;

    tcp_nodelay on;

    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" $http_x_forwarded_for';

    server {
        listen 8080;
        server_name ad.test.com;
        index index.php index.htm;
        root /www/html/;

        location /status {
            stub_status on;
        }

        location ~ .*\.(php|php5)$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fcgi.conf;
        }

        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|js|css)$ {
            expires 30d;
        }

        access_log /www/log/access.log access;
    }
}
A few notes on the fastcgi directives
fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=test:10m inactive=5m;

This directive specifies a path for the FastCGI cache, the hierarchy levels of the directory structure, the name and size of the shared key zone, and the inactive time after which unused entries are deleted.
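For illustration, with levels=1:2 nginx names a cache file after the MD5 of its cache key and nests it one character, then two characters, deep; the key hash below is a made-up example:

```
/usr/local/nginx/fastcgi_cache/c/29/b7f54b2df7773722d382f4809d65029c
```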

fastcgi_connect_timeout 300;

Specifies the timeout for connecting to the backend FastCGI server.

fastcgi_send_timeout 300;

The timeout for transmitting a request to the FastCGI server, that is, the timeout for sending the request after the connection has been established.

fastcgi_read_timeout 300;

The timeout for receiving the FastCGI response, that is, the timeout for reading the response after the connection has been established.

fastcgi_buffer_size 16k;

Specifies the buffer size used to read the first part of the FastCGI response, and can be set to the buffer size given in the fastcgi_buffers directive. The directive above uses one 16k buffer to read the first part of the response, i.e. the response header. In practice the response header is usually very small (no more than 1k); but even so, if you give a size in the fastcgi_buffers directive, nginx will allocate one buffer of that fastcgi_buffers size to cache it.

fastcgi_buffers 16 16k;

Specifies how many buffers of what size to use locally for buffering FastCGI responses. As above, if a PHP script produces a page of 256k, 16 buffers of 16k are allocated to cache it; anything over 256k is written to the path specified by fastcgi_temp_path. That is of course unwise for server load, since data is processed faster in memory than on disk. Usually this value should be chosen around the middle of the page sizes your site's PHP scripts generate: if most pages stay under 256k, you could set 16 16k buffers, 4 64k buffers, or 64 4k buffers. Clearly the latter two are not good choices: if a page is only 32k, 4 64k allocates one 64k buffer and 64 4k allocates eight 4k buffers, whereas 16 16k allocates two 16k buffers to cache the page, which seems more reasonable.
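To pick a middle value as suggested, you can estimate a typical response size from the access log ($body_bytes_sent is the relevant field; the numbers below are made up to keep the example self-contained):

```shell
# Average a (fake) extract of the response-size column of an access log.
printf '10240\n20480\n30720\n' > /tmp/sizes.txt
awk '{ sum += $1; n++ } END { print int(sum / n) }' /tmp/sizes.txt   # → 20480
```

On a live server you would feed the real $body_bytes_sent column into the same awk command.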

fastcgi_busy_buffers_size 32k;

I am not sure what this directive does; I only know its default value is twice fastcgi_buffers.

fastcgi_temp_file_write_size 32k;

The block size used when writing to fastcgi_temp_path; the default is twice fastcgi_buffers.

fastcgi_cache test;

Turns on the FastCGI cache and names the zone to use. Personally I find enabling the cache very useful: it effectively reduces CPU load and prevents 502 errors. But it can also cause plenty of problems, because it caches dynamic pages, so weigh it against your own needs.

fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;

Specifies the cache time for each response code: in the example above, 200 and 302 responses are cached for 1 hour, 301 responses for 1 day, and everything else for 1 minute.

fastcgi_cache_min_uses 1;

The minimum number of times an entry must be used within the inactive period of the fastcgi_cache_path directive. As in the example above, if an entry is not used at least once within 5 minutes, it is removed.

fastcgi_cache_use_stale error timeout invalid_header http_500;

I am not sure of this parameter's exact effect; my guess is that it lets nginx serve a stale cached copy when these error conditions occur. Those are the FastCGI-related parameters in nginx. FastCGI itself also has some configuration worth optimizing: if you use PHP-FPM to manage FastCGI, you can modify the following values in its configuration file:

<value name="max_children">60</value>

The number of requests handled concurrently; that is, it will start at most 60 child processes to handle concurrent connections.

<value name="rlimit_files">102400</value>

The maximum number of open files.

<value name="max_requests">204800</value>

The maximum number of requests each process serves before it is respawned.

Several test results

The static page used is the test file mentioned in my article on configuring Squid for 40,000 concurrent connections. The following figure shows the results after running the command webbench -c 30000 -t http://ad.test.com:8080/index.html on 6 machines:

The number of connections, filtered with netstat:
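The exact filter used for the screenshots is not shown; a common awk one-liner counts connections by TCP state, demonstrated here on canned netstat-style input:

```shell
# Count connections by state from netstat-like lines (sample input inlined).
printf 'tcp 0 0 10.0.0.1:8080 10.0.0.2:51000 ESTABLISHED\ntcp 0 0 10.0.0.1:8080 10.0.0.3:51001 TIME_WAIT\ntcp 0 0 10.0.0.1:8080 10.0.0.4:51002 ESTABLISHED\n' |
  awk '{ ++state[$NF] } END { for (s in state) print s, state[s] }' | sort
```

On a live server you would pipe `netstat -ant` into the same awk command.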

The webbench results for the PHP page (the page simply calls phpinfo()):

The number of connections for the PHP page, after filtering with netstat:

Server load before the FastCGI cache was enabled:

At this point opening the PHP page was somewhat difficult; it took several refreshes. The low load on cpu0 is because the network card's interrupt requests were assigned to cpu0 during the test, while nginx's 7 worker processes were bound to cpu1-cpu7.

After enabling the FastCGI cache:

At this point the PHP page opened easily.

This test was not connected to any database, so it has limited reference value. I do not know whether the test reached the limit; judging by memory and CPU usage it seems not, but I had no spare machines left to run more webbench instances.


