Today I checked a basic server and found that the TIME_WAIT count was well above 3,000. TIME_WAIT itself does not consume many resources unless the machine is under attack, but this many is still abnormal and worth tuning away. The connection states looked like this:

TIME_WAIT 3699
CLOSE_WAIT 52
FIN_WAIT1 32
SYN_SENT 1
FIN_WAIT2 2
ESTABLISHED 17
SYN_RECV 45
CLOSING 6
According to the discussion of TCP connection establishment and termination in TCP/IP Illustrated, a TCP connection is terminated by a four-way handshake between the two ends. The side that initiates termination performs an active close; the other side performs a passive close:
1. The initiator closes its end of the connection, sends a TCP FIN segment, and moves to FIN_WAIT_1.
2. On receiving the FIN, the receiver returns an ACK acknowledging the FIN's sequence number, delivers an end-of-file to its application, and moves to CLOSE_WAIT; on receiving that ACK, the initiator moves to FIN_WAIT_2.
3. When the receiver's application closes the connection, the receiver sends its own FIN segment and moves to LAST_ACK.
4. On receiving this FIN, the initiator moves to TIME_WAIT and sends an ACK for it. Once that ACK is successfully delivered (after waiting 2 * MSL), both ends are in CLOSED.
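The state breakdown at the top of this post can be reproduced with a one-liner. The sketch below counts states from a hard-coded sample list so it runs anywhere; in practice you would pipe in live output from `netstat -ant` or `ss -tan` instead (the sample states are made up):

```shell
# Count connections per TCP state. Live usage would be something like:
#   netstat -ant | awk 'NR > 2 {print $6}' | sort | uniq -c | sort -rn
# Here a hard-coded sample stands in for the netstat output.
printf '%s\n' TIME_WAIT TIME_WAIT TIME_WAIT CLOSE_WAIT ESTABLISHED |
  sort | uniq -c | sort -rn
```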
Given the output above, the meaning is not hard to see. According to the TCP protocol, the side that initiates the close enters the TIME_WAIT state (the TCP implementation must terminate the connection reliably in both directions of the full duplex) and stays there for 2 * MSL (maximum segment lifetime), 240 seconds by default. Why does TIME_WAIT need to last as long as 2 * MSL?
The TIME_WAIT wait is 2 * MSL, i.e., twice the maximum lifetime of a segment. If TIME_WAIT did not last long enough (less than 2 * MSL), then after the first connection terminated normally, a second connection with the same five-tuple could appear, and delayed duplicate segments from the first connection could arrive and interfere with it. (The closing side may also need to retransmit the final ACK, which is another reason the state must last two MSL periods.) TCP must prevent a connection's duplicate segments from surfacing after the connection has ended, so TIME_WAIT is held long enough (2 * MSL) that every segment belonging to the old connection, in either direction, has either been answered or has expired and been discarded, and a second connection cannot be confused with the first.
Note: MSL (maximum segment lifetime) is the longest a TCP segment can survive on the network. Every TCP implementation must choose a specific value for MSL; RFC 1122 recommends 2 minutes, while the traditional BSD implementation uses 30 seconds. The TIME_WAIT state therefore lasts 2 * MSL, i.e., between 1 and 4 minutes.
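To get a feel for why thousands of entries pile up: a server tearing down many short-lived connections can exhaust its ephemeral ports while they sit in TIME_WAIT. A back-of-the-envelope sketch (the port range below is the stock Linux default, an assumption; 240 seconds is the default 2 * MSL quoted above):

```shell
# Ephemeral port range 32768-60999 (stock Linux default). Each port stays
# unusable for 240 s (2*MSL) after close, so the sustainable rate of
# short-lived connections to one destination address:port is roughly:
ports=$((60999 - 32768 + 1))
echo "$((ports / 240)) new connections per second"
```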
Tuning Apache
HTTP/1.1 specifies keep-alive as the default behavior: multiple request/response exchanges are carried over a single TCP connection. After turning KeepAlive on in Apache, the TIME_WAIT count dropped immediately, to only about 300.
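At the wire level, keep-alive simply means the client may send its next request on the same connection; in HTTP/1.1 this is the default, and a peer signals the final request with a `Connection: close` header. A hypothetical exchange (host name made up):

```http
GET /index.html HTTP/1.1
Host: www.example.com

GET /style.css HTTP/1.1
Host: www.example.com
Connection: close
```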
To sum up, I think there are two reasons.
1. KeepAlive was not enabled, so every request needed a new TCP connection that was closed as soon as the request completed, leaving a TIME_WAIT entry behind each time. KeepAlive gives a persistent connection, similar to MySQL's. With KeepAlive on, subsequent requests from the same client reuse the existing connection instead of opening a new one per request, reducing the load on the server.
2. The system-level TCP keepalive default is high: an idle connection is only dropped after 7200 seconds (2 hours) without activity. We enabled keep-alive in Apache:

KeepAlive On
MaxKeepAliveRequests 120
KeepAliveTimeout 15

With this, each connection can serve up to 120 requests, with a 15-second timeout (if the gap between one request and the next exceeds KeepAliveTimeout, the first connection is closed and a new connection is created for the next request).
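As an httpd.conf fragment (directive names as in Apache 2.x; the values are the ones chosen above, not universal recommendations):

```apache
# Allow multiple requests per TCP connection.
KeepAlive On
# Close the connection after this many requests on it.
MaxKeepAliveRequests 120
# Seconds to wait for the next request before closing an idle connection.
KeepAliveTimeout 15
```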
Kernel-level tuning of keepalive and TIME_WAIT: edit the configuration (vi /etc/sysctl.conf) and add:

net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_keepalive_time = 1800
net.ipv4.tcp_fin_timeout = 30
net.core.netdev_max_backlog = 8096

After saving, run sysctl -p to make the changes take effect. The parameters are annotated below.
/proc/sys/net/ipv4/tcp_tw_reuse
This file controls whether a socket in TIME_WAIT state may be reused for a new TCP connection.
/proc/sys/net/ipv4/tcp_tw_recycle
Enables fast recycling of TIME_WAIT sockets. (Note: this option is known to misbehave for clients behind NAT, and it was removed entirely in Linux 4.12.)
Changing tcp_tw_reuse and tcp_tw_recycle may produce warnings such as "warning, got duplicate tcp line" or "warning, got bogus tcp line". These warnings mean two completely identical TCP connections were observed, which happens when a connection is torn down and re-established quickly with the same addresses and ports. Under normal conditions this basically never happens, but enabling the settings above increases the chance of it occurring. The messages do no harm and do not reduce system performance; the system keeps working.
/proc/sys/net/ipv4/tcp_keepalive_time
How long a connection must be idle before TCP starts sending keepalive probes, when keepalive is enabled on the socket. The default is 2 hours (7200 seconds).
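How quickly a dead peer is actually noticed also depends on two further knobs not mentioned above, tcp_keepalive_intvl and tcp_keepalive_probes. With stock Linux defaults the worst case is roughly:

```shell
# Stock Linux defaults: first probe after 7200 s of idle time, then up to
# 9 unanswered probes spaced 75 s apart before the connection is dropped.
idle=7200 interval=75 probes=9
echo "$((idle + interval * probes)) seconds to detect a dead peer"
```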
/proc/sys/net/ipv4/tcp_fin_timeout
How long a locally closed socket stays in FIN_WAIT_2 before the kernel gives up on it. (FIN_WAIT_1 is the state after the closing side has actively sent its FIN and is waiting for the peer's ACK; once the ACK arrives it moves to FIN_WAIT_2 and waits for the peer's FIN.) The timeout matters because the peer may never finish the close: it may have disconnected, crashed, or simply never sent its FIN. The suggested value is 30, as in BSD.
/proc/sys/net/core/netdev_max_backlog
The maximum number of packets allowed to queue when an interface receives packets faster than the kernel can process them.