Problem Description:
Leaving aside system load, CPU, memory and so on for now, netstat shows a large number of ESTABLISHED and TIME_WAIT connections.
# netstat -n | awk '/^tcp/ {++y[$NF]} END {for (w in y) print w, y[w]}'
CLOSE_WAIT 348
ESTABLISHED 1240
TIME_WAIT 5621
Monitor the connector port (8009) between Apache and Tomcat:
# netstat -n | grep 8009 | wc -l
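On newer systems where netstat is deprecated, ss from iproute2 gives roughly the same view; a minimal sketch, assuming the same 8009 connector port (note that ss prints states as ESTAB, TIME-WAIT, CLOSE-WAIT):
# ss -ant | awk 'NR>1 {++s[$1]} END {for (k in s) print k, s[k]}'
# ss -ant | grep -c 8009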
Question 1: How to deal with the large number of TIME_WAIT connections
By adjusting the kernel parameters:
vim /etc/sysctl.conf
# Edit the file and add the following lines:
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
# Then run /sbin/sysctl -p to make the parameters take effect.
Configuration Description:
net.ipv4.tcp_syncookies = 1 enables SYN cookies: when the SYN wait queue overflows, cookies are used to handle the excess, which protects against small-scale SYN flood attacks. The default is 0 (disabled).
net.ipv4.tcp_tw_reuse = 1 enables reuse, allowing TIME_WAIT sockets to be reused for new TCP connections. The default is 0 (disabled).
net.ipv4.tcp_tw_recycle = 1 enables fast recycling of TIME_WAIT sockets. The default is 0 (disabled).
net.ipv4.tcp_fin_timeout = 30 lowers the system's default FIN-WAIT-2 timeout to 30 seconds.
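After running /sbin/sysctl -p, the new values can be confirmed directly with sysctl; a quick check (note that net.ipv4.tcp_tw_recycle was removed in kernel 4.12 and later, so that line is rejected there):
# sysctl net.ipv4.tcp_syncookies net.ipv4.tcp_tw_reuse net.ipv4.tcp_tw_recycle net.ipv4.tcp_fin_timeout
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30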
If performance is still not ideal after the tuning above, continue modifying the configuration:
vi /etc/sysctl.conf
net.ipv4.tcp_keepalive_time = 1200
# The interval at which TCP sends keepalive messages when keepalive is enabled. The default is 2 hours; change it to 20 minutes.
net.ipv4.ip_local_port_range = 1024 65000
# The port range used for outbound connections. The default range is small (32768 to 61000); widen it to 1024 to 65000.
net.ipv4.tcp_max_syn_backlog = 8192
# The length of the SYN queue. The default is 1024; raising it to 8192 lets the system hold more connections waiting to be accepted.
net.ipv4.tcp_max_tw_buckets = 5000
# The maximum number of TIME_WAIT sockets the system keeps at the same time. Beyond this number, TIME_WAIT sockets are cleared immediately and a warning is printed. The default is 180000; lower it to 5000. For servers such as Apache and Nginx, the parameters in the preceding lines already reduce the number of TIME_WAIT sockets well, but for Squid the effect is limited. This parameter caps the number of TIME_WAIT sockets and keeps a Squid server from being dragged down by a flood of them.
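To see whether TIME_WAIT actually drops while the load test runs, the same state count can be repeated in a loop; a minimal sketch:
# while true; do date; netstat -n | awk '/^tcp/ {++y[$NF]} END {for (w in y) print w, y[w]}'; sleep 5; done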
With the tuning done, run the load test again and check the effect:
# netstat -n | awk '/^tcp/ {++y[$NF]} END {for (w in y) print w, y[w]}'
ESTABLISHED 968
Question 2: How to resolve the large number of ESTABLISHED connections that are not released after requests are closed
The initial inference is that the Tomcat server has a problem reclaiming sessions, which is usually tied to the server's timeout settings.
Check the Tomcat configuration file server.xml:
<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" URIEncoding="UTF-8" />
*****
Checking the configuration shows connectionTimeout="20000" (20000 milliseconds) with acceptCount="100", which is clearly unreasonable: the maximum number of queued connections is far too small.
So further optimization:
Adjust connectionTimeout (originally "20000") and change acceptCount="100" to acceptCount="5000", as in the sketch below.
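Put together, the tuned connector in server.xml looks roughly like this; the new connectionTimeout value is not shown in the article, so it is left as a placeholder:
<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="..." acceptCount="5000" redirectPort="8443" URIEncoding="UTF-8" />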
With the optimization done, continue the load test...
System responsiveness climbed, and the LoadRunner error seen earlier (shown below) no longer appeared even when the test was pushed to *** concurrent users.
Action.c(380): Error -26608: For "http://www.cnlogs.com/javame", HTTP Status-Code=504 (Gateway Time-out)