Here we mainly discuss Nginx optimization methods; the php-fpm configuration should be tuned as well. Kernel parameters are set in /etc/sysctl.conf.
In general, the following settings in the nginx configuration file affect performance:
worker_processes 8;
The number of nginx worker processes. It is recommended to set this according to the number of CPUs, generally a multiple of it; twice the CPU count is common.
For how to view the number of CPUs, see: http://blog.haohtml.com/archives/11123 and http://blog.haohtml.com/archives/9236
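Independent of the links above, a quick local check of the logical CPU count on Linux can be done like this:

```shell
# Count logical CPUs on Linux (two equivalent ways):
grep -c ^processor /proc/cpuinfo
nproc
```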
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
Binds each worker process to a CPU. The example above binds eight processes to eight CPUs; you can also bind several processes to one CPU, or one process to several CPUs.
worker_rlimit_nofile 102400;
The maximum number of file descriptors an nginx process may open. In theory this should be the system's maximum number of open files (ulimit -n) divided by the number of nginx processes, but since nginx does not distribute requests perfectly evenly, it is best to keep this consistent with the value of ulimit -n. For ulimit usage, see: http://blog.haohtml.com/archives/9883
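A minimal sketch of inspecting the descriptor limit that worker_rlimit_nofile should match (the 102400 figure is this article's example value, not a recommendation of mine):

```shell
# Show the current shell's open-file-descriptor limits:
ulimit -n        # soft limit
ulimit -Hn       # hard limit
# Raising the limit for the current shell (increases beyond the hard
# limit require root, typically via /etc/security/limits.conf):
#   ulimit -n 102400
```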
use epoll;
Use the epoll I/O event model.
worker_connections 102400;
The maximum number of connections allowed per worker process. In theory, the maximum number of connections an nginx server can handle is worker_processes * worker_connections.
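With the example values above, that theoretical ceiling works out as:

```shell
# worker_processes * worker_connections = theoretical max connections
echo $((8 * 102400))   # prints 819200
```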
keepalive_timeout 60;
The keepalive timeout, in seconds.
client_header_buffer_size 4k;
The buffer size for the client request header, which can be set based on your system's page size. A request header generally does not exceed 1k, but since the system page size is usually larger than 1k, the page size is used here. The page size can be obtained with the command getconf PAGESIZE.
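To confirm the page size on your own system before choosing this buffer value:

```shell
# Print the system memory page size in bytes (commonly 4096):
getconf PAGESIZE
```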
open_file_cache max=102400 inactive=20s;
Enables the open-file cache, which is disabled by default. max specifies the maximum number of cached entries; inactive specifies how long a file may go unrequested before its cache entry is removed.
open_file_cache_valid 30s;
How often to check the validity of the cached open-file information.
open_file_cache_min_uses 1;
The minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to remain cached. If that count is reached, the file descriptor stays open in the cache; for example, if a file is not used even once within the inactive time, it is removed.
Kernel parameter optimization:
net.ipv4.tcp_max_tw_buckets = 6000
The maximum number of TIME-WAIT sockets held simultaneously. The default is 180000.
net.ipv4.ip_local_port_range = 1024 65000
The range of local ports the system may allocate.
net.ipv4.tcp_tw_recycle = 1
Enables fast recycling of TIME-WAIT sockets.
net.ipv4.tcp_tw_reuse = 1
Enables reuse: allows TIME-WAIT sockets to be reused for new TCP connections.
net.ipv4.tcp_syncookies = 1
Enables SYN cookies: when the SYN wait queue overflows, connections are handled with cookies.
net.core.somaxconn = 262144
The backlog passed to the listen() function in web applications is capped by the kernel parameter net.core.somaxconn, which defaults to 128, while nginx defines NGX_LISTEN_BACKLOG as 511 by default, so this value needs to be raised.
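A read-only sketch of inspecting the current backlog ceiling before raising it (writing the value requires root, so that step is shown commented out):

```shell
# Current value of net.core.somaxconn (the default is often 128):
cat /proc/sys/net/core/somaxconn
# As root, it could be raised for the running kernel with:
#   sysctl -w net.core.somaxconn=262144
```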
net.core.netdev_max_backlog = 262144
The maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.
net.ipv4.tcp_max_orphans = 262144
The maximum number of TCP sockets in the system that are not attached to any user file handle. If this number is exceeded, orphaned connections are immediately reset and a warning is printed. This limit exists only to prevent simple DoS attacks; do not rely on it too heavily or lower it artificially. If anything, increase this value (when memory is added).
net.ipv4.tcp_max_syn_backlog = 262144
The maximum number of queued connection requests that have not yet been acknowledged by the client. For systems with 128 MB of memory the default is 1024; for small-memory systems it is 128.
net.ipv4.tcp_timestamps = 0
Timestamps help avoid sequence-number wraparound: on a 1 Gbit/s link, previously used sequence numbers can reappear, and timestamps let the kernel accept such "abnormal" packets. Disabled here.
net.ipv4.tcp_synack_retries = 1
To open a connection from the peer, the kernel sends a SYN together with an ACK acknowledging the earlier SYN, i.e. the second packet of the three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.
net.ipv4.tcp_syn_retries = 1
The number of SYN packets the kernel sends before giving up on establishing the connection.
net.ipv4.tcp_fin_timeout = 1
If a socket is closed by the local end, this parameter determines how long it stays in the FIN-WAIT-2 state. The peer may misbehave and never close its side, or even crash unexpectedly. The default is 60 seconds; 2.2-series kernels typically used 180 seconds. You can keep that setting, but remember that even on a lightly loaded web server a large number of dead sockets risks exhausting memory. FIN-WAIT-2 is less dangerous than FIN-WAIT-1 because each such socket consumes only a small amount of memory, but these sockets live longer.
net.ipv4.tcp_keepalive_time = 30
How often TCP sends keepalive messages when keepalive is enabled. The default is 2 hours.
The following is a complete set of kernel optimization settings (/etc/sysctl.conf), for reference:
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000
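A sketch of how such settings are usually persisted and applied. The /tmp path here is illustrative only, and the actual apply step needs root, so it is shown commented out:

```shell
# Write a couple of the parameters above to a standalone sysctl file:
cat > /tmp/nginx-tuning.conf <<'EOF'
net.ipv4.tcp_max_tw_buckets = 6000
net.core.somaxconn = 262144
net.ipv4.tcp_syncookies = 1
EOF
# In practice these lines go into /etc/sysctl.conf; as root they would
# be loaded with:
#   sysctl -p /tmp/nginx-tuning.conf
# Reading current values needs no root:
cat /proc/sys/net/ipv4/tcp_syncookies
```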
The following is a simple nginx configuration file:
user www;
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000;
error_log /www/log/nginx_error.log crit;
pid /usr/local/nginx.pid;
worker_rlimit_nofile 204800;
events
{
use epoll;
worker_connections 204800;
}
http
{
include mime.types;
default_type application/octet-stream;
charset utf-8;
server_names_hash_bucket_size 128;
client_header_buffer_size 2k;
large_client_header_buffers 4 4k;
client_max_body_size 8m;
sendfile on;
tcp_nopush on;
keepalive_timeout 60;
fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels= keys_zone=TEST:10m inactive=5m;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_buffer_size 64k;
fastcgi_buffers 8 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 128k;
fastcgi_cache TEST;
fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;
fastcgi_cache_min_uses 1;
fastcgi_cache_use_stale error timeout invalid_header http_500;
open_file_cache max=204800 inactive=20s;
open_file_cache_min_uses 1;
open_file_cache_valid 30s;
tcp_nodelay on;
gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;
server
{
listen 8080;
server_name backup.aiju.com;
index index.php index.htm;
root /www/html/; # the placement of root here is very important; do not put it inside other directives. I debugged for a long time before finding this problem.
location /status
{
stub_status on;
}
location ~ .*\.(php|php5)?$
{
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fcgi.conf;
}
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|js|css)$
{
expires 30d;
}
log_format access '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" $http_x_forwarded_for';
access_log /www/log/access.log access;
}
}
Several directives related to FastCGI (http://wiki.nginx.org/NginxChsHttpFcgiModule):
fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels= keys_zone=TEST:10m inactive=5m;
This directive specifies the path for the FastCGI cache, the directory hierarchy levels, the name and memory size of the key zone, and the inactivity period after which unused entries are removed.
fastcgi_connect_timeout 300;
The timeout for connecting to the backend FastCGI server.
fastcgi_send_timeout 300;
The timeout for sending a request to the FastCGI server, measured after the two handshakes have completed.
fastcgi_read_timeout 300;
The timeout for receiving the FastCGI response, measured after the two handshakes have completed.
fastcgi_buffer_size 64k;
The buffer size used to read the first part of the FastCGI response. Generally the first part of the response does not exceed 1k, so since the page size is 4k, 4k would be enough; the example above uses 64k.
fastcgi_buffers 8 64k;
How many buffers, and of what size, to allocate locally for buffering FastCGI responses.
fastcgi_busy_buffers_size 128k;
I don't know exactly what this directive does; I only know that the default is twice fastcgi_buffers. (It appears to limit how much buffered data may be busy being sent to the client while the response has not yet been fully read.)
fastcgi_temp_file_write_size 128k;
The size of the data blocks used when writing to fastcgi_temp_path. The default is twice fastcgi_buffers.
fastcgi_cache TEST;
Enables FastCGI caching and assigns it a name. Personally, I find enabling the cache very useful: it effectively reduces CPU load and prevents 502 errors.
fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;
Specifies the cache time for each response code. In the example above, responses with code 200 or 302 are cached for one hour, 301 responses for one day, and everything else for one minute.
fastcgi_cache_min_uses 1;
The minimum number of times a response must be used within the inactive period of the fastcgi_cache_path directive for it to stay cached. In the example above, if a file is not used even once within five minutes, it is removed.
fastcgi_cache_use_stale error timeout invalid_header http_500;
I'm not entirely sure what this parameter does; my understanding is that it tells nginx in which cases (errors, timeouts, invalid headers, and so on) it may serve a stale cached response.
The above are the FastCGI-related parameters in nginx. FastCGI itself also has some configuration worth optimizing; if you use php-fpm to manage FastCGI, you can modify the following values in its configuration file:
60
The number of requests handled concurrently; that is, php-fpm will start at most 60 child processes to handle concurrent connections.
102400
The maximum number of open files.
204800
The maximum number of requests a process may handle before it is respawned.
The following figures show the test results.
For the static page, I used the test file mentioned in my Squid configuration article (the 4W, i.e. 40,000-concurrency, setup), running the following command on six machines at the same time: webbench -c 30000 -t 600 http://backup.aiju.com:8080/index.html
Number of connections filtered by netstat:
Status result for the php page (the php page calls phpinfo):
Number of php page connections after netstat filtering:
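The original screenshots are missing, but a typical way to produce the "connections filtered by netstat" numbers above is to tally TCP connections by state (my assumption of what was run, not taken from the original post):

```shell
# Tally TCP connections by state (ss is the modern replacement for netstat):
ss -ant | awk 'NR > 1 { ++state[$1] } END { for (s in state) print s, state[s] }'
# The classic netstat equivalent (state is in column 6 there):
#   netstat -ant | awk '/^tcp/ { ++state[$6] } END { for (s in state) print s, state[s] }'
```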
Server load before FastCGI cache is used:
The php page was difficult to open; it took multiple refreshes to load. The load on CPU 0 is low because the NIC interrupt requests were all assigned to cpu0 during the test, while the seven worker processes enabled in nginx were bound to cpu1-7.
After the FastCGI cache is enabled:
Now the php page opens easily.
Because this test did not involve any database, its results are of limited reference value. I also don't know whether the test above reached the limit; judging from memory and CPU usage it doesn't seem so, but I had no spare machines left to run more webbench instances.