Problem: after starting Nginx and php-fpm, netstat -tunap showed a large number of TIME_WAIT connections.
Not knowing the cause, and fearing the server was under attack, I immediately ran killall on nginx and php-fpm.
Could it have been an attack on port 80? I changed the Nginx listen port from 80 to 8081, but the result was still a large number of TIME_WAIT connections.
Unable to pin down the problem, I searched Baidu for answers.
Reference Link: http://www.heminjie.com/wordpress/3322.html
The problem was still not solved.
Back home in the evening I re-tested and dug further, and found where I had gone wrong:
I had not paid attention to the difference between netstat -tunap and netstat -tunlp before.
Finally, drawing on several blogs, I completed the optimization.
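The difference matters here: netstat -tunlp lists only listening sockets, so a TIME_WAIT flood never shows up in it, while netstat -tunap shows all sockets. A quick way to count connections by state is to pipe netstat -tunap through awk; the sample lines below stand in for live netstat output so the sketch is runnable anywhere:

```shell
#!/bin/sh
# On a live server you would run:
#   netstat -tunap | awk '/^tcp/ {++s[$6]} END {for (k in s) print k, s[k]}'
# The sixth whitespace-separated field of a netstat tcp line is the state.
sample='tcp 0 0 10.0.0.1:80 10.0.0.2:5001 TIME_WAIT
tcp 0 0 10.0.0.1:80 10.0.0.3:5002 TIME_WAIT
tcp 0 0 10.0.0.1:80 10.0.0.4:5003 ESTABLISHED'
printf '%s\n' "$sample" | awk '/^tcp/ {++s[$6]} END {for (k in s) print k, s[k]}'
```

With the sample input this prints a count per state (TIME_WAIT 2, ESTABLISHED 1); on a busy server the TIME_WAIT count is usually the largest number in the list.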
vim /etc/sysctl.conf
Add the following parameters at the end:
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_syn_retries = 5
net.ipv4.tcp_synack_retries = 5
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000
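After saving the file, the settings can be loaded without a reboot. A minimal sketch (sysctl -p needs root; reading values back does not):

```shell
#!/bin/sh
# Re-read /etc/sysctl.conf and apply the values (root only):
[ "$(id -u)" -eq 0 ] && sysctl -p

# Spot-check a value without root, via sysctl or /proc:
sysctl net.ipv4.tcp_fin_timeout
cat /proc/sys/net/ipv4/tcp_fin_timeout
```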
Detailed reference link: http://leven.blog.51cto.com/1675811/382097
1. Causes of TIME_WAIT
(1) Nginx's built-in load-balancing module for PHP FastCGI uses short-lived connections to the backend, so every request leaves a connection behind in the TIME_WAIT state.
(2) This is how the designers of TCP/IP intended it to work.
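The first cause can also be attacked at the source: since nginx 1.1.4 the FastCGI connection to php-fpm can be kept alive instead of being opened and closed per request. A hedged sketch (the upstream name php_backend and the 127.0.0.1:9000 address are assumptions; adjust to your php-fpm listen setting):

```nginx
upstream php_backend {
    server 127.0.0.1:9000;
    keepalive 16;               # pool of idle connections kept open to php-fpm
}

server {
    location ~ \.php$ {
        fastcgi_pass php_backend;
        fastcgi_keep_conn on;   # reuse the upstream connection instead of closing it
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```

With connections reused, far fewer sockets pass through TIME_WAIT in the first place, so the kernel tuning below has less work to do.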
There are two main reasons:
(1) To prevent packets left over from the previous connection, delayed in the network, from reappearing and corrupting a new connection on the same address pair. (After 2MSL, all duplicate packets from the previous connection will have disappeared.)
(2) To close the TCP connection reliably. The final ACK sent by the actively closing side may be lost; the passive side will then resend its FIN. If the active side were already in the CLOSED state at that point, it would respond with an RST instead of an ACK. The active side should therefore wait in TIME_WAIT rather than move straight to CLOSED.

2. The harm of excessive TIME_WAIT
TIME_WAIT does not consume a significant amount of resources unless the machine is under attack, as long as the memory occupied by TIME_WAIT sockets stays within a certain range. A common default cap is 35600 TIME_WAIT entries.

3. Solution
net.ipv4.tcp_syncookies = 1 enables SYN cookies: when the SYN backlog queue overflows, cookies are used so new connections can still be handled, which defends against small-scale SYN flood attacks. The default is 0 (disabled).
net.ipv4.tcp_tw_reuse = 1 enables reuse: TIME_WAIT sockets may be reused for new TCP connections. The default is 0 (disabled).
net.ipv4.tcp_tw_recycle = 1 enables fast recycling of TIME_WAIT sockets. The default is 0 (disabled).
net.ipv4.tcp_fin_timeout = 30 determines how long a socket stays in the FIN-WAIT-2 state when the close was requested by the local end.
net.ipv4.tcp_keepalive_time = 1200 sets how often TCP sends keepalive probes when keepalive is enabled. The default is 2 hours; here it is lowered to 20 minutes.
net.ipv4.ip_local_port_range = 1024 65000 sets the port range used for outgoing connections. The default range, 32768 to 61000, is small; here it is widened to 1024 through 65000.
net.ipv4.tcp_max_syn_backlog = 8192 sets the length of the SYN queue. The default is 1024; raising it to 8192 lets the system hold more half-open connections waiting to complete.
net.ipv4.tcp_max_tw_buckets = 5000 sets the maximum number of TIME_WAIT sockets the system keeps at the same time; beyond this number, TIME_WAIT sockets are cleared immediately and a warning is printed.
The default is 180000; here it is lowered to 5000. For servers such as Apache and Nginx, the last few parameters above are very effective at reducing the number of TIME_WAIT sockets, but for Squid they help little. This parameter caps the number of TIME_WAIT sockets and keeps a Squid server from being dragged down by a flood of them.
Note:
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
Setting both parameters means: reuse TIME_WAIT sockets for new TCP connections, and recycle TIME_WAIT sockets faster. (Be aware that tcp_tw_recycle is known to break clients behind NAT, and the option was removed entirely in Linux 4.12.)