Closing a socket falls into two cases: active close and passive close. The former means the local host initiates the shutdown; the latter means the local host detects that the remote host has shut down its end and responds, completing the teardown of the whole connection.
Its state diagram is shown in the following illustration:
Initially, every socket is in the CLOSED state. When the client first initiates a connection, it sends a SYN packet to the server and enters the SYN_SENT state.
When the server receives the SYN, it replies with a SYN-ACK; the client returns an ACK upon receiving it and enters the ESTABLISHED state. If no SYN-ACK arrives for a long time, the client times out and returns to the CLOSED state.
When the server binds and listens on a port, its socket is in the LISTEN state. When a client attempts to establish a connection, the server receives the SYN and replies with a SYN-ACK; the server's state becomes SYN_RCVD, and when the client's ACK arrives, the server socket enters the ESTABLISHED state.
When a program is in the ESTABLISHED state, there are two paths to closing it: the first is active close, the second is passive close. To close actively, send a FIN packet. When your program calls closesocket() or shutdown(), it sends a FIN packet to the peer and your socket enters the FIN_WAIT_1 state. When the peer replies with an ACK, your socket moves to FIN_WAIT_2. If the peer is also shutting down the connection, it sends its own FIN to your machine; you reply with an ACK and enter the TIME_WAIT state.
TIME_WAIT is also called the 2MSL wait state. MSL (Maximum Segment Lifetime) is the longest time a segment can exist on the network before being discarded. Every IP packet carries a TTL (time-to-live) field; each router decrements it as it forwards the packet, and when it reaches 0 the packet is discarded. A socket in TIME_WAIT waits for 2×MSL, which allows TCP to resend the final ACK in case it was lost and the peer retransmits its FIN. Once the 2MSL wait completes, the socket enters the CLOSED state.
Passive close: when the program receives a FIN packet from the peer and replies with an ACK, its socket moves to the CLOSE_WAIT state. The peer has closed its end, so it will send no more data, but the program may still send. When the program is ready to close the connection and has sent its own FIN, its TCP socket enters the LAST_ACK state; once the peer's ACK arrives, the program's socket is CLOSED.
As root, execute lsof -n | grep httpd | grep TCP to see whether there are a large number of connections in CLOSE_WAIT.
Abstract: this article explains why socket connections get stuck in the CLOSE_WAIT state, and what measures can be taken to avoid it.
Not long ago, my socket client program ran into a very awkward error. It was supposed to keep sending data to the server over a long-lived socket connection, reconnecting automatically whenever the connection dropped.
One day I found the program kept trying to establish a connection and always failed. netstat showed that the program had thousands of socket connections in the CLOSE_WAIT state; it had hit the upper limit and could not establish any new socket connection.
Why is that so?
Why were they all in the CLOSE_WAIT state?
The cause of the CLOSE_WAIT state
First, we know that if our client program is in the CLOSE_WAIT state, it means the socket was closed passively.
Because if the server side actively disconnects the current connection, closing this TCP connection takes four packets between the two sides:
Server ---FIN---> Client
Server <---ACK--- Client
At this point the server is in the FIN_WAIT_2 state, and our program is in the CLOSE_WAIT state.
Server <---FIN--- Client
The client then sends its FIN to the server, and the client enters the LAST_ACK state.
Server ---ACK---> Client
The server replies with an ACK, and the client socket is finally placed in the CLOSED state.
Our program being in the CLOSE_WAIT state rather than LAST_ACK means the FIN has not yet been sent to the server: most likely there is still a lot of data to send, or something else to do, before closing the connection, so the FIN packet never goes out.
Then why is the FIN not sent? Can there really be that much to do before closing the connection?
elssann gives an example: when the peer calls closesocket(), my program may just be calling recv(). The peer's FIN arrives and the TCP stack automatically replies with an ACK, so my socket enters the CLOSE_WAIT state without my code noticing.
The suggestion here is therefore to check whether the recv() return value indicates an error or a peer close, and in that case to call closesocket() ourselves, so that our own FIN is sent and the socket does not linger after the peer's FIN arrives.
Since we set the recv() timeout to 30 seconds, a genuine timeout shows up here as the error WSAETIMEDOUT, in which case we can also actively close the connection.
There is also the question of why thousands of connections ended up in this state. Was the server really tearing down our connections the whole time?
In any case, we must prevent a similar situation from happening again.
First, we want to make sure that the original port can be reused, which can be done by setting the SO_REUSEADDR socket option:
Reusing local addresses and ports
Previously, every reconnect used a brand-new port, which is how thousands of ports ended up in the CLOSE_WAIT state. If this awkward situation ever happens again, I would like to confine it: only the current port should end up in CLOSE_WAIT.
After the call
sockconnected = socket(AF_INET, SOCK_STREAM, 0);
we set the reuse option on this socket:
Allow reuse of local addresses and ports:
The advantage is that even if the socket breaks, the subsequent socket() call will not occupy yet another port, but will keep using the same one.
This prevents the situation where the socket can never connect and, as under the old practice, keeps churning through ports.
int nreuseaddr = 1;
setsockopt(sockconnected,
           SOL_SOCKET,
           SO_REUSEADDR,
           (const char*)&nreuseaddr,
           sizeof(int));
As the textbook says: if the server shuts down or exits and leaves the local address and port in the TIME_WAIT state, SO_REUSEADDR is very useful.
Maybe we cannot guarantee that sockets frozen in the CLOSE_WAIT state never appear again, but at least this guarantees that the port will not be reported as occupied when the new connection is made.
Second, we want to set the SO_LINGER socket option:
Graceful or forced shutdown.
"Linger" means "to delay" or "dawdle".
By default (on Win2K), the SO_DONTLINGER socket option is 1; the SO_LINGER option's linger structure is {l_onoff: 0, l_linger: 0}.
If closesocket() is invoked while data is still being sent (send() has not completed and data remains unsent), the measure we used to take was a "graceful shutdown":
Because before exiting the service, or each time before re-establishing the socket, I would call:
/* close traffic in both directions first */
shutdown(sockconnected, SD_BOTH);
/* for safety, close the old connection each time a new socket connection is established */
closesocket(sockconnected);
This time we're going to do this:
Set SO_LINGER so that the l_onoff field of the linger structure is nonzero and l_linger is 0. Then closesocket() will not enter a "lingering" wait (waiting for completion), regardless of whether queued data remains unsent or unacknowledged. This is called a "forced shutdown", because the socket's virtual circuit is reset immediately and any data not yet sent is lost. A recv() call at the remote end will fail with a WSAECONNRESET error.
Set this option after Connect is successfully established:
linger m_slinger;
m_slinger.l_onoff = 1;   /* linger on closesocket() while unsent data remains ... */
m_slinger.l_linger = 0;  /* ... but for 0 seconds, i.e. reset immediately */
setsockopt(sockconnected,
           SOL_SOCKET,
           SO_LINGER,
           (const char*)&m_slinger,
           sizeof(linger));
Summary
We may not be able to prevent sockets from freezing in the CLOSE_WAIT state again, but we can minimize the impact, and hopefully the reuse option will let the next connection go through even while old sockets sit in CLOSE_WAIT.