Troubleshooting a large number of ESTABLISHED and TIME_WAIT connections seen in netstat
Problem Description:
netstat shows a large number of ESTABLISHED and TIME_WAIT connections, even though system load, CPU usage, memory usage, and so on are all normal.
# netstat -n | awk '/^tcp/ {++y[$NF]} END {for (w in y) print w, y[w]}'
CLOSE_WAIT 348
ESTABLISHED 1240
TIME_WAIT 5621
Monitor the connector port (8009) linking Apache and Tomcat:
# netstat -n | grep 8009 | wc -l
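The state-counting pipeline above can be wrapped in a small function and checked against fixed input. This is a sketch: the function name `count_states` and the sample connection lines are illustrative, not from the original post; in practice you would feed it live `netstat -n` output.

```shell
# Count TCP connections by state, as in the netstat | awk pipeline above.
# count_states is a hypothetical helper name; pipe `netstat -n` into it in practice.
count_states() {
  awk '/^tcp/ {++y[$NF]} END {for (w in y) print w, y[w]}'
}

# Demonstrated here on a fixed sample instead of live netstat output:
count_states <<'EOF' | sort
tcp        0      0 10.0.0.1:8080  10.0.0.2:51234  ESTABLISHED
tcp        0      0 10.0.0.1:8080  10.0.0.3:51235  TIME_WAIT
tcp        0      0 10.0.0.1:8080  10.0.0.4:51236  TIME_WAIT
EOF
# prints:
# ESTABLISHED 1
# TIME_WAIT 2
```

`$NF` is the last field of each `tcp` line, which is the connection state, so the array ends up keyed by state name.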
Question 1: How to deal with the large number of TIME_WAIT connections
Adjust the kernel parameters:
# vim /etc/sysctl.conf
Add the following lines:
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
Then run /sbin/sysctl -p to make the parameters take effect.
Configuration Description:
net.ipv4.tcp_syncookies = 1 enables SYN cookies: when the SYN wait queue overflows, cookies are used to protect against small-scale SYN flood attacks. The default is 0 (disabled);
net.ipv4.tcp_tw_reuse = 1 enables reuse, allowing TIME_WAIT sockets to be reused for new TCP connections. The default is 0 (disabled);
net.ipv4.tcp_tw_recycle = 1 enables fast recycling of TIME_WAIT sockets. The default is 0 (disabled). Note that this option is known to break connections from clients behind NAT and was removed in Linux 4.12, so use it with caution;
net.ipv4.tcp_fin_timeout = 30 shortens the system's default timeout for sockets in FIN-WAIT-2.
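Each of these keys maps to a file under /proc/sys, so the currently active value can be inspected without root. The `sysctl_path` helper below is a sketch of that mapping (the function name is mine, not from the post):

```shell
# Map a sysctl key to its /proc/sys path so a setting can be inspected
# with plain cat (sysctl_path is a hypothetical helper, not a standard tool).
sysctl_path() {
  echo "/proc/sys/$(echo "$1" | tr '.' '/')"
}

sysctl_path net.ipv4.tcp_fin_timeout
# prints: /proc/sys/net/ipv4/tcp_fin_timeout
```

On a live system, `cat "$(sysctl_path net.ipv4.tcp_fin_timeout)"` (or simply `sysctl -n net.ipv4.tcp_fin_timeout`) confirms whether `/sbin/sysctl -p` actually applied the new value.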
If performance is still not ideal after the tuning above, continue by modifying the configuration:
# vi /etc/sysctl.conf
net.ipv4.tcp_keepalive_time = 1200  # how often TCP sends keepalive probes when keepalive is enabled. The default is 2 hours; this changes it to 20 minutes.
net.ipv4.ip_local_port_range = 1024 65000  # the port range used for outbound connections. The default (32768 to 61000) is small; this widens it to 1024 to 65000.
net.ipv4.tcp_max_syn_backlog = 8192  # the length of the SYN queue. The default is 1024; the larger queue of 8192 can accommodate more connections waiting to be established.
net.ipv4.tcp_max_tw_buckets = 5000  # the maximum number of TIME_WAIT sockets the system keeps at the same time. Beyond this number, TIME_WAIT sockets are cleared immediately and a warning is printed. The default is 180000; this lowers it to 5000.
For servers such as Apache and Nginx, the last few parameters are a good way to reduce the number of TIME_WAIT sockets, although for Squid the effect is not large. tcp_max_tw_buckets caps the number of TIME_WAIT sockets and keeps a Squid server from being dragged down by a huge number of them.
With tuning complete, run the load test again to see the effect:
# netstat -n | awk '/^tcp/ {++y[$NF]} END {for (w in y) print w, y[w]}'
ESTABLISHED 968
Question 2: How to resolve the large number of ESTABLISHED connections that are not released after requests finish
The preliminary inference was that Tomcat was not reclaiming sessions properly, which is generally related to the server's timeout settings.
Check the Tomcat configuration file server.xml:
<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" URIEncoding="UTF-8" />
The configuration shows connectionTimeout="20000" (20000 ms) alongside acceptCount="100", which is clearly unreasonable: the backlog of pending connections is far too small.
So further optimization:
Shorten connectionTimeout="20000", and raise acceptCount="100" to acceptCount="5000".
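The acceptCount change can be scripted. Below is a minimal sed sketch run against a throwaway copy of the Connector element; in practice, point SERVER_XML at the real server.xml, and note the pattern assumes the attribute is written exactly as acceptCount="100":

```shell
# Work on a temporary copy; replace with the real server.xml path in practice.
SERVER_XML="$(mktemp)"
cat > "$SERVER_XML" <<'EOF'
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000" acceptCount="100"
           redirectPort="8443" URIEncoding="UTF-8" />
EOF

# Raise the pending-connection backlog from 100 to 5000 (a .bak backup is kept).
sed -i.bak 's/acceptCount="100"/acceptCount="5000"/' "$SERVER_XML"
grep -o 'acceptCount="[0-9]*"' "$SERVER_XML"
# prints: acceptCount="5000"
```

After editing the real file, restart Tomcat for the Connector change to take effect.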
Optimization complete; continue the load test...
System responsiveness climbed, and the LoadRunner error seen before no longer appears, even under overwhelming load:
Action.c(380): Error -26608: For "http://www.cnlogs.com/javame", HTTP status code=504 (Gateway Time-out)