Reposted from: http://www.cnblogs.com/QLeelulu/p/3601499.html
Our DSP system currently handles a QPS above 100,000 during all but the early-morning hours. We use Golang to serve these HTTP requests, with Nginx in front as the web server doing load balancing; Nginx forwards requests to the Golang backends via proxy_pass.
Because the Nginx proxy talks to the backend over short-lived connections, the system accumulates a large number of TCP connections in the TIME_WAIT state:
```
shell> netstat -n | awk '/^tcp/ {++state[$NF]} END {for (key in state) print key, "\t", state[key]}'
TIME_WAIT    250263
CLOSE_WAIT   57
FIN_WAIT2    3
ESTABLISHED  2463
SYN_RECV     8
```
ss is faster than netstat, so you can also use the following command:
```
shell> ss -ant | awk 'NR > 1 {++s[$1]} END {for (k in s) print k, s[k]}'
```
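ss can also filter by TCP state directly, which is handy when you only care about TIME_WAIT. A small sketch (the backend port 8080 here is an assumption; adjust it to your deployment):

```
# Count sockets in TIME-WAIT headed for the backend port (8080 is assumed);
# tail skips the header line that ss prints
shell> ss -tan state time-wait '( dport = :8080 )' | tail -n +2 | wc -l
```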
These connections occupy far too many ports and waste system resources, so we had to find ways to reduce TIME_WAIT.
One concrete consequence of excessive TIME_WAIT connections is that they eat up ports we need. For example, we have a service listening on port 8012, and restarting it would often fail with a complaint that the port was already in use.
You can check /proc/sys/net/ipv4/ip_local_port_range to see the range from which the Linux kernel automatically allocates local ports:
```
shell> cat /proc/sys/net/ipv4/ip_local_port_range
1025    65535
```
With this setting, the kernel allocates local ports for connections at random from 1025-65535, and our service's port 8012 falls inside that range. So if the kernel happens to have handed 8012 to some connection, starting our service fails with the port-in-use error.
We therefore need to set /proc/sys/net/ipv4/ip_local_reserved_ports to tell the kernel which ports are reserved for us and must never be handed out automatically.
```
shell> vim /etc/sysctl.conf
net.ipv4.ip_local_reserved_ports = 8012,11211-11220
shell> sysctl -p
```
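To confirm the setting took effect, you can read the value back (a quick sketch; the port list matches the example above):

```
shell> cat /proc/sys/net/ipv4/ip_local_reserved_ports
8012,11211-11220
shell> sysctl net.ipv4.ip_local_reserved_ports
net.ipv4.ip_local_reserved_ports = 8012,11211-11220
```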
More details on reserved ports can be found in:
- Network Port Reservation
- Reserving ports to avoid occupation: ip_local_reserved_ports
That solves the port-occupation problem, but we still need to bring down the number of TIME_WAIT connections.
Nginx 1.1.4 and later support keepalive in the upstream block, so we can enable keep-alive between the Nginx proxy and the backend to reduce TCP connection churn:
```
upstream http_backend {
    server 127.0.0.1:8080;

    keepalive 16;
}

server {
    ...

    location /http/ {
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}
```
See the official Nginx documentation: http://nginx.org/cn/docs/http/ngx_http_upstream_module.html#keepalive
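Note that the keepalive value is the number of idle connections to the upstream cached per worker process, not a global cap. To check whether connections are actually being reused, one rough sketch (assuming the backend listens on 127.0.0.1:8080 as in the config above) is to break down socket states toward that port:

```
# With keep-alive working, ESTABLISHED connections to the upstream hover around
# the cached pool size, while TIME_WAIT toward it drops sharply
shell> ss -tan '( dport = :8080 )' | awk 'NR > 1 {++s[$1]} END {for (k in s) print k, s[k]}'
```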
After enabling keep-alive, TIME_WAIT dropped significantly:
```
shell> netstat -n | awk '/^tcp/ {++state[$NF]} END {for (key in state) print key, "\t", state[key]}'
TIME_WAIT    12612
CLOSE_WAIT   11
FIN_WAIT1    4
FIN_WAIT2    1
ESTABLISHED  7667
SYN_RECV     3
```
In addition, many articles suggest tuning the system's /etc/sysctl.conf to reduce TIME_WAIT connections:
```
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
```
See also: http://blog.s135.com/post/271/
However, enabling tcp_tw_recycle can cause hard-to-diagnose network problems: combined with TCP timestamps, it makes the kernel drop connections from clients behind NAT whose timestamps appear to go backwards. For details, see the posts below; a safer alternative is sketched after the list.
- A post-mortem of a TIME_WAIT network fault
- TIME_WAIT revisited
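Given those caveats, a more conservative sketch is to enable only tcp_tw_reuse at runtime and leave tcp_tw_recycle off:

```
# tcp_tw_reuse lets new outgoing connections reuse sockets in TIME_WAIT;
# it requires net.ipv4.tcp_timestamps = 1 (the default) to work
shell> sysctl -w net.ipv4.tcp_tw_reuse=1
# Verify both values; tcp_tw_recycle stays 0 to avoid breaking clients behind NAT
shell> sysctl net.ipv4.tcp_tw_reuse net.ipv4.tcp_tw_recycle
```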
For documentation of the related sysctl settings, see:
Https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
References:
- Notes on selected network kernel parameters
- Http://performancewiki.com/linux-tuning.html
- An analysis of problems caused by the TCP timestamp option
- http://www.lognormal.com/blog/2012/09/27/linux-tcpip-tuning/