TIME_WAIT status:
After a client establishes a TCP/IP connection with a server and then closes the socket, the port that was connected to the server enters the TIME_WAIT state.
Is it true that every socket that performs an active close enters the TIME_WAIT state?
Is there any situation in which the actively closed socket goes straight to the CLOSED state instead?
After sending the last ACK, the actively closing side enters the TIME_WAIT state and stays there for 2MSL (twice the maximum segment lifetime).
This is built into TCP/IP and cannot simply be "solved" away; the protocol designers intended it to work this way.
There are two main reasons:
1. It prevents delayed packets from the previous connection from reappearing after getting lost in the network and corrupting a new connection that reuses the same address and port pair.
(After 2MSL, every duplicate packet from the previous connection is guaranteed to have disappeared.)
2. It allows the TCP connection to be closed reliably.
The final ACK sent by the actively closing side may be lost, in which case the passive side will retransmit its FIN.
If the active side were already in the CLOSED state, it would answer that retransmitted FIN with an RST instead of an ACK.
So the active side must remain in TIME_WAIT rather than moving straight to CLOSED.
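Because the TIME_WAIT socket lives on the side that closed first, you can see which end performed the active close directly in netstat: on that host, the TIME_WAIT entries show its own address in the Local Address column. A quick way to peek at a few of them (just a sketch using grep on the state name):
# netstat -n | grep TIME_WAIT | head -n 3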
A TIME_WAIT socket does not consume a large amount of resources unless the server is under attack.
On the squid server, enter the following command:
# netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'
LAST_ACK 14
SYN_RECV 348
ESTABLISHED 70
FIN_WAIT1 229
FIN_WAIT2 30
CLOSING 33
TIME_WAIT 18122
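If only the TIME_WAIT total is of interest, a shorter variant (a simple sketch that greps for the state name instead of using awk) prints just that number; on the snapshot above it would report 18122:
# netstat -n | grep -c TIME_WAIT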
State: Description
CLOSED: no connection is active or pending
LISTEN: the server is waiting for an incoming call
SYN_RECV: a connection request has arrived; waiting for confirmation
SYN_SENT: the application has started to open a connection
ESTABLISHED: the normal data-transfer state
FIN_WAIT1: the application has said it is finished
FIN_WAIT2: the other side has agreed to release the connection
TIME_WAIT: waiting for all packets from the old connection to die off (2MSL)
CLOSING: both sides tried to close at the same time
CLOSE_WAIT: the other side has initiated a release
LAST_ACK: waiting for the final ACK of the FIN that was sent
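The same last-field trick explained below can also pull out the connections sitting in any one of these states; for example (a sketch, with SYN_RECV chosen arbitrarily):
# netstat -n | awk '/^tcp/ && $NF == "SYN_RECV"'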
In other words, this command classifies and counts the current system's network connections by TCP state.
The following explains why the command is written this way:
a simple pipe connects the netstat and awk commands.
----------------------
Let's take a look at netstat:
netstat -n
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 123.123.123.123:80      234.234.234.234:12345   TIME_WAIT
When you actually run this command, you may get thousands of records like the one above, but a single record is enough for our purposes.
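As a quick sanity check, you can feed that sample record back through awk to confirm that it has 6 whitespace-separated fields and that the state is the last one:
# echo "tcp 0 0 123.123.123.123:80 234.234.234.234:12345 TIME_WAIT" | awk '{print NF, $NF}'
6 TIME_WAIT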
----------------------
Let's take a look at awk (the command above uses the short names S and a; the explanation below uses the more descriptive names state and key, but the logic is identical):
/^tcp/
Filters the records that start with tcp, screening out unrelated records such as udp and unix sockets.
state[]
Is equivalent to defining an array named state.
NF
Is the number of fields in the record; for the record shown above, NF equals 6.
$NF
Is the value of the last field; for the record shown above, $NF is $6, the value of the 6th field, namely TIME_WAIT.
state[$NF]
Is the value of that array element, i.e. the number of connections counted for the state[TIME_WAIT] entry.
++state[$NF]
Increments that count by one; for the record shown above, it adds one to the number of connections in the TIME_WAIT state.
END
Marks the commands to be executed in the final stage, after all input has been read.
for(key in state)
Traverses the array.
print key,"\t",state[key]
Prints each key of the array and its value, separated by a \t tab character to tidy up the output.
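Putting all of the pieces above together, the spelled-out version of the one-liner (equivalent to the shorter S/a form shown earlier) is:
# netstat -n | awk '/^tcp/ {++state[$NF]} END {for(key in state) print key,"\t",state[key]}'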
If the system has a large number of connections in the TIME_WAIT state, this can be addressed by adjusting kernel parameters.
vim /etc/sysctl.conf
Edit the file and add the following content:
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
Then run /sbin/sysctl -p to make the parameters take effect.
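To confirm that the new values are active, they can be queried back with sysctl (a quick check; sysctl accepts one or more variable names):
# sysctl net.ipv4.tcp_tw_reuse net.ipv4.tcp_tw_recycle net.ipv4.tcp_fin_timeout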
On a high-concurrency squid server, the number of TCP TIME_WAIT sockets often reaches twenty or thirty thousand, and the server can easily be dragged down by them. By modifying Linux kernel parameters, you can reduce the number of TIME_WAIT sockets on the squid server.
vi /etc/sysctl.conf
Add the following lines:
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.ip_local_port_range = 1024 65000
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_max_tw_buckets = 5000
Note:
net.ipv4.tcp_syncookies = 1 enables SYN cookies: when the SYN wait queue overflows, cookies are used to handle the excess, which protects against small-scale SYN attacks. The default is 0 (disabled).
net.ipv4.tcp_tw_reuse = 1 enables reuse, allowing TIME_WAIT sockets to be reused for new TCP connections. The default is 0 (disabled).
net.ipv4.tcp_tw_recycle = 1 enables fast recycling of TIME_WAIT sockets. The default is 0 (disabled).
net.ipv4.tcp_fin_timeout = 30 determines how long a socket remains in the FIN_WAIT_2 state after the local end has closed it.
net.ipv4.tcp_keepalive_time = 1200 sets how often TCP sends keepalive messages when keepalive is enabled. The default is 2 hours; here it is reduced to 20 minutes.
net.ipv4.ip_local_port_range = 1024 65000 sets the local port range used for outgoing connections. The default range, 32768 to 61000, is rather small; here it is widened to 1024 to 65000.
net.ipv4.tcp_max_syn_backlog = 8192 sets the length of the SYN queue. The default is 1024; enlarging it to 8192 allows more connections waiting to be established to be queued.
net.ipv4.tcp_max_tw_buckets = 5000 sets the maximum number of TIME_WAIT sockets the system keeps at the same time. If this number is exceeded, the excess TIME_WAIT sockets are cleared immediately and a warning is printed. The default is 180000; here it is lowered to 5000. For servers such as Apache and nginx, the parameters in the preceding lines already reduce the number of TIME_WAIT sockets considerably, but for squid the effect is limited. This parameter caps the number of TIME_WAIT sockets and keeps the squid server from being dragged down by a large number of them.
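The value currently in effect can be read back from /proc at any time, so you can confirm the 180000 default mentioned above before lowering it, and the new cap of 5000 afterwards:
# cat /proc/sys/net/ipv4/tcp_max_tw_buckets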
Run the following command to make the configuration take effect:
/sbin/sysctl -p
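After the new values are loaded, re-running the counting one-liner from the beginning of this article shows whether the TIME_WAIT total actually drops:
# netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'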