TCP sliding window (send window and receive window)

As a reliable, stream-oriented transport protocol, TCP relies on the sliding window protocol for reliability and flow control, and implements congestion control through a congestion window combined with a series of control algorithms.
First, sliding window protocol
This is the part I understand best, so rather than follow a textbook description, I will introduce the essence of TCP, the sliding window protocol, in my own words.
As I understand it, the sliding window protocol comes down to two points: 1. the "window" is the contiguous range of byte sequence numbers that the sender is currently allowed to send; 2. "sliding" means that this allowed range moves forward as sending proceeds. Before walking through an example of the protocol, it helps to establish the following premises:
-1. Call the two ends of the TCP connection sender A and receiver B. Because TCP is full duplex, A and B each maintain their own send buffer and receive buffer; since the two directions are symmetric, we take the case of A sending and B receiving as our example;
-2. The send window is the part of the send buffer that the TCP protocol is currently allowed to transmit; all the data that the application layer wants to send is first placed into the sender's send buffer;
-3. Four kinds of data relate to the send window: data that has been sent and acknowledged (no longer kept in the window or the send buffer), data that has been sent but not yet acknowledged (inside the window), data that is allowed to be sent but not yet sent (inside the window), and data that is not yet allowed to be sent (in the send buffer, outside the window);
-4. Each time data is sent and acknowledged successfully, the send window slides forward over the send buffer, and new data enters the window, ready to be sent;
When the TCP connection is established, B tells A the size of its receive window, for example 20:
Bytes 31 through 50 form the send window.

A now sends 11 bytes; the position of the send window does not change, and B receives some of the packets out of order.

Only after the data A has sent is acknowledged by B does the sliding window move forward past it. B acknowledges only the contiguous run of data it has received; packets that arrive early and out of order are buffered, and are not retransmitted over the network.
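To make the bookkeeping concrete, here is a minimal Python sketch of the sender-side window described above; the class name and the numbers (a 20-byte window starting at byte 31) mirror the example and are illustrative only, not part of any real TCP stack.

```python
class SendWindow:
    """Minimal sender-side sliding-window bookkeeping (illustrative only)."""

    def __init__(self, base_seq, window_size):
        self.base = base_seq          # oldest unacknowledged byte
        self.next_seq = base_seq      # next byte allowed to be sent
        self.window_size = window_size

    def can_send(self):
        # Bytes still allowed: inside the window but not yet sent.
        return self.base + self.window_size - self.next_seq

    def send(self, nbytes):
        nbytes = min(nbytes, self.can_send())
        self.next_seq += nbytes       # "sent but not yet acknowledged" grows
        return nbytes

    def ack(self, ack_seq):
        # B acknowledges the contiguous prefix up to (not including) ack_seq,
        # so the window slides forward past the acknowledged bytes.
        if self.base < ack_seq <= self.next_seq:
            self.base = ack_seq


# Example mirroring the text: a 20-byte window covering bytes 31..50.
w = SendWindow(base_seq=31, window_size=20)
w.send(11)                  # A sends 11 bytes; the window position is unchanged
print(w.base, w.next_seq)   # 31 42
w.ack(42)                   # B acknowledges the contiguous bytes 31..41
print(w.base, w.next_seq)   # 42 42 -> the window has slid forward
```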

Second, flow control
There are two main points to grasp about flow control: first, TCP uses the sliding window to implement the flow control mechanism; second, how transmission efficiency is taken into account in flow control.
1. Flow control
Flow control means that the receiver passes information back to the sender so that the sender does not transmit data too fast; it is an end-to-end control. The primary mechanism is that the receiver returns ACKs carrying the size of its own receive window (rwnd), and the sender uses that size to limit how much data it sends.
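As a tiny illustration of that rule (not a real socket API), the sender may only have as many unacknowledged bytes outstanding as the advertised window allows:

```python
def usable_window(rwnd, bytes_in_flight):
    """Bytes the sender may still transmit under the advertised receive window."""
    return max(rwnd - bytes_in_flight, 0)

# Each ACK from B carries an updated rwnd; A recomputes how much it may send.
print(usable_window(rwnd=400, bytes_in_flight=100))  # 300 bytes may still be sent
print(usable_window(rwnd=0,   bytes_in_flight=0))    # 0: the sender must pause
```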

One situation deserves attention. Suppose B has told A that its buffer is full, so A stops sending data. After a while, B's buffer has space again, so B sends A a message announcing an rwnd of, say, 400, but this message is unfortunately lost. Now A is waiting for B's notification while B is waiting for A's data, which is a deadlock. To handle this, TCP introduces a persistence timer: when A receives a zero-window notification, it starts the timer, and when the timer expires it sends a 1-byte probe message. The other side responds with its current receive window size; if the window is still 0, A resets the persistence timer and continues to wait.
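A toy simulation of the persistence-timer behaviour just described; the function names and the way the probe's reply is modelled are purely illustrative.

```python
def zero_window_sender(probe_receiver, persist_timeout=3):
    """Illustrative loop for a sender that has received rwnd = 0.

    probe_receiver stands in for the 1-byte probe: it asks the receiver
    for its current window size and may well return 0 again.
    """
    rwnd = 0
    while rwnd == 0:
        # The window-update segment from B may have been lost, so A does not
        # wait forever: when the persistence timer fires, send a 1-byte probe.
        print(f"persistence timer ({persist_timeout}s) expired, sending probe")
        rwnd = probe_receiver()
        if rwnd == 0:
            continue  # still zero: reset the persistence timer and keep waiting
    print(f"receiver reopened its window, rwnd = {rwnd}; resume sending")


# Toy receiver: reports 0 twice, then advertises 400 bytes.
reports = iter([0, 0, 400])
zero_window_sender(lambda: next(reports))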
2. Transfer efficiency
One obvious problem: acknowledging and sending a single byte at a time, or notifying the sender as soon as the window has room for just one more byte, adds many unnecessary segments to the network (think of the roughly 40 bytes of headers attached to a single byte of data). So the principle is for the sender to accumulate and send as many bytes as possible per segment, and for the receiver to notify the sender only when the window has a reasonably large amount of free space. For the former, the Nagle algorithm is widely used (sketched after the list below):
* If the application process delivers data to the TCP send buffer byte by byte, the sender transmits the first byte immediately and buffers the bytes that follow;
* When the sender receives the acknowledgment for the first byte (which also tells it about network conditions and the receiver's window size), it packs the buffered bytes into a suitably sized segment and sends it;
* When the accumulated data reaches half the size of the send window, or reaches the maximum segment size (MSS), a segment is sent immediately;
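A minimal sketch of the sender-side Nagle behaviour described in the list above, simplified to the "one small segment in flight" and full-MSS rules; the class name, the `send_segment` callback, and the MSS value are assumptions for illustration, not a real kernel implementation.

```python
MSS = 1460  # assumed maximum segment size in bytes

class NagleSender:
    """Illustrative Nagle-style buffering: at most one small segment in flight."""

    def __init__(self, send_segment):
        self.send_segment = send_segment  # hypothetical callback that puts bytes on the wire
        self.buffer = b""
        self.unacked = False              # is a small segment still unacknowledged?

    def write(self, data):
        self.buffer += data
        self.flush()

    def flush(self):
        # Send immediately if we have a full MSS, or if nothing is in flight.
        while self.buffer and (len(self.buffer) >= MSS or not self.unacked):
            chunk, self.buffer = self.buffer[:MSS], self.buffer[MSS:]
            self.send_segment(chunk)
            self.unacked = True

    def on_ack(self):
        # The ACK for the outstanding segment arrived: coalesce whatever has
        # accumulated in the buffer into the next segment.
        self.unacked = False
        self.flush()


sender = NagleSender(lambda seg: print(f"segment of {len(seg)} bytes sent"))
for _ in range(5):
    sender.write(b"x")   # only the first lone byte goes out immediately
sender.on_ack()          # the remaining 4 bytes go out together as one segment
```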
For the latter, the usual approach is for the receiver to wait a while, until it has enough free space to hold a full segment or until half of its buffer is free, before notifying the sender to send more data.
Third, congestion control
Both the link capacity in the network and the buffers and processors in the switching nodes have working limits; congestion occurs when the demand placed on the network exceeds those limits. Congestion control prevents too much data from being injected into the network, so that routers and links are not overloaded. The usual methods are:
1. Slow start and congestion avoidance
2. Fast retransmit and fast recovery
The basis of everything is slow start, whose idea is as follows:
-1. The sender maintains a variable called the "congestion window" (cwnd); together with the receiver's window, it determines the sender's send window;
-2. When the host begins to send data, rather than injecting a large number of bytes into the network at once and thereby causing or worsening congestion, it first sends a small 1-byte probe message;
-3. When the acknowledgment for that first byte is received, a 2-byte message is sent;
-4. If the acknowledgment for those 2 bytes is received in turn, 4 bytes are sent, and so on, doubling each round (exponential growth in powers of 2);
-5. Eventually a preset "slow start threshold" (ssthresh) is reached, for example 24, at which point up to 24 segments are sent at a time, and the following rules apply:
* if cwnd < ssthresh, continue to use the slow start algorithm;
* if cwnd > ssthresh, stop using the slow start algorithm and switch to the congestion avoidance algorithm;
* if cwnd = ssthresh, either the slow start algorithm or the congestion avoidance algorithm may be used;
-6. The congestion avoidance algorithm works as follows: after each round-trip time (RTT), the sender's congestion window is increased by 1, so the congestion window grows slowly, following a linear law;
-7. When network congestion occurs, for example packet loss, ssthresh is set to half of the current window, cwnd is reset to 1, and the slow start algorithm is executed again (a lower starting point, growing exponentially); the sketch after this list traces how cwnd evolves under these rules;
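To make rules -1 through -7 concrete, here is a minimal Python sketch of how cwnd might evolve per RTT under them; the units (segments), the initial ssthresh of 24, and the way loss is signalled are illustrative assumptions, not part of any real TCP implementation.

```python
def next_cwnd(cwnd, ssthresh, loss=False):
    """One RTT of the slow start / congestion avoidance rules sketched above."""
    if loss:
        # Congestion detected (e.g. packet loss): halve the threshold,
        # fall back to cwnd = 1, and run slow start again.
        return 1, max(cwnd // 2, 2)
    if cwnd < ssthresh:
        cwnd = min(cwnd * 2, ssthresh)   # slow start: exponential growth per RTT
    else:
        cwnd += 1                        # congestion avoidance: linear growth per RTT
    return cwnd, ssthresh


# Walk a few RTTs with ssthresh = 24, injecting one loss event.
# (The amount actually sent would be min(cwnd, rwnd), per point -1 above.)
cwnd, ssthresh, trace = 1, 24, []
for rtt in range(12):
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, loss=(rtt == 8))
    trace.append(cwnd)
print(trace)   # [2, 4, 8, 16, 24, 25, 26, 27, 1, 2, 4, 8]
```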

The purpose of this approach is to gradually reduce the number of packets the host sends into the network, so that congested routers have enough time to work through the packets backlogged in their queues. Slow start and congestion avoidance are usually used together as a whole, while fast retransmit and fast recovery reduce the retransmission delay caused by congestion-induced packet loss and avoid sending useless data into the network. The fast retransmit mechanism is:
-1. The receiver is set up so that if a packet is lost, every subsequent packet it receives still triggers an acknowledgment that asks again for the missing packet (a duplicate ACK);
-2. Once the sender has received three such identical acknowledgments, it knows that the segment following the acknowledged data was lost, and it retransmits that segment immediately;
-3. At this point the sender begins to execute the "fast recovery" algorithm (sketched below):
* The slow start threshold is halved;
* cwnd is set to the halved slow start threshold;
* the congestion avoidance algorithm is then executed (a higher starting point, growing linearly);
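The duplicate-ACK counting and the fast recovery reaction above can be sketched as follows; the class name, the fixed threshold of three duplicates, and the starting cwnd/ssthresh values are hypothetical, purely for illustration.

```python
DUP_ACK_THRESHOLD = 3   # three identical duplicate ACKs trigger fast retransmit

class FastRetransmitSender:
    """Illustrative duplicate-ACK handling for fast retransmit / fast recovery."""

    def __init__(self, cwnd=16, ssthresh=24):
        self.cwnd = cwnd
        self.ssthresh = ssthresh
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_seq):
        if ack_seq == self.last_ack:
            self.dup_count += 1
            if self.dup_count == DUP_ACK_THRESHOLD:
                print(f"3 duplicate ACKs for {ack_seq}: retransmit segment {ack_seq}")
                # Fast recovery: halve ssthresh, set cwnd to the new threshold,
                # then continue in congestion avoidance (linear growth)
                # instead of falling all the way back to slow start.
                self.ssthresh = max(self.cwnd // 2, 2)
                self.cwnd = self.ssthresh
        else:
            self.last_ack = ack_seq
            self.dup_count = 0


s = FastRetransmitSender()
for ack in [100, 200, 200, 200, 200]:   # the segment at 200 never arrived at B
    s.on_ack(ack)
print(s.cwnd, s.ssthresh)               # 8 8 after fast recovery
```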
