TCP traffic control and congestion control

Flow control for TCP, the so-called traffic control, keeps the sender from transmitting faster than the receiver can accept. The sliding window mechanism makes it convenient to control the sender's rate on a TCP connection. The window unit of TCP is the byte, not the segment, and the sender's send window must not exceed the receive window value advertised by the receiver; in other words, a variable-size window is used for flow control.

As an example, suppose Host A sends data to Host B, the window value agreed by both sides is 400 bytes, each segment is 100 bytes long, and the initial sequence number is seq=1 (an uppercase ACK denotes the ACK flag in the header, while a lowercase ack denotes the value of the acknowledgment field). The receiver, Host B, performs flow control three times: the first time it sets the window to rwnd=300, the second time it reduces it to rwnd=100, and finally it reduces it to rwnd=0, which forbids the sender from sending any more data. This state, in which the sender must pause, lasts until Host B advertises a new window value.

Suppose B sends A a zero-window segment and, soon afterwards, B's receive cache frees up some space, so B sends A a segment with rwnd=400, but that segment is lost in transit. A keeps waiting for a non-zero window notification from B, while B keeps waiting for data from A: a deadlock. To break this deadlock, TCP maintains a persistence timer for each connection. Whenever a TCP connection receives a zero-window notification from the other side, it starts the persistence timer; when the timer expires, it sends a zero-window probe segment (carrying only 1 byte of data), and the other side reports its current window value when acknowledging the probe.

Choice of when to send a TCP segment. There are several options for deciding when TCP sends a segment:
1) TCP maintains a variable equal to the maximum segment size MSS; as soon as the data stored in the cache reaches MSS bytes, it is assembled into a TCP segment and sent.
2) The sending application indicates that the segment should be sent, i.e. the push operation supported by TCP.
3) A timer at the sender expires, and whatever data is currently in the cache is loaded into a segment and sent.
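Before moving on to congestion control, here is a minimal, purely illustrative sketch (not a real TCP stack) of the flow-control behaviour described above: the sender never keeps more unacknowledged bytes in flight than the last advertised rwnd, and a persistence timer sends a 1-byte probe when the window is zero. The class and method names are assumptions made only for this example.

```python
# Illustrative sketch of receiver-advertised-window flow control, not real TCP.

class FlowControlledSender:
    def __init__(self, mss=100):
        self.mss = mss
        self.rwnd = 400          # last window advertised by the receiver
        self.in_flight = 0       # bytes sent but not yet acknowledged

    def can_send(self, nbytes):
        # The send window must never exceed the advertised receive window.
        return self.in_flight + nbytes <= self.rwnd

    def on_ack(self, acked_bytes, new_rwnd):
        self.in_flight -= acked_bytes
        self.rwnd = new_rwnd     # receiver may shrink the window: 300 -> 100 -> 0

    def on_persistence_timer(self):
        # Zero-window probe: carries only 1 byte and forces the receiver to
        # re-advertise its current window, breaking the deadlock.
        if self.rwnd == 0:
            return "send 1-byte probe segment"
        return None

sender = FlowControlledSender()
for new_rwnd in (300, 100, 0):          # B's three flow-control actions
    if sender.can_send(sender.mss):
        sender.in_flight += sender.mss  # send one 100-byte segment
    sender.on_ack(acked_bytes=sender.in_flight, new_rwnd=new_rwnd)

print(sender.rwnd)                      # 0: sending is paused
print(sender.on_persistence_timer())   # the probe keeps the connection alive
```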
Congestion control of TCP

1. Principle of congestion control
If, at some point in time, the demand for a certain resource in the network exceeds the portion of that resource that is available, network performance deteriorates; this situation is called congestion. Network congestion is usually caused by many factors, and simply increasing the speed of a node's processor or expanding a node's cache does not solve the problem. For example, when a node's cache is expanded to a very large capacity, all packets arriving at that node can be queued without restriction; but since neither the capacity of the output link nor the speed of the processor has improved, the queuing time of most packets in the queue increases sharply, and the upper-layer software ends up retransmitting them. The congestion problem is therefore usually a mismatch among the various parts of the whole system; only when those parts are balanced is the problem resolved.

2. Differences between congestion control and flow control
Congestion control means preventing too much data from being injected into the network, so that the routers and links in the network are not overloaded. Congestion control has a premise: that the network can carry the existing network load. Congestion is a global issue, involving all hosts, all routers, and every factor related to degraded network transmission performance. Flow control, often called point-to-point traffic control, is an end-to-end issue: it throttles the rate at which the sending side transmits so that the receiving end has time to accept the data.

3. Design of congestion control
Congestion control is difficult to design because it is a dynamic problem; in many cases the congestion-control mechanism itself becomes the cause of degraded network performance or even deadlock. From the viewpoint of control theory, congestion control can be divided into two approaches: open-loop control and closed-loop control. Open-loop control takes all factors related to congestion into account when the network is designed and cannot be corrected once the system is running. Closed-loop control is based on the concept of a feedback loop and includes the following measures:
1) Monitor the network system to detect when and where congestion occurs.
2) Forward the information about congestion to the places where action can be taken.
3) Adjust the operation of the network system to solve the problem.

4. Congestion control methods
The Internet recommended standard RFC 2581 defines four congestion-control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. We assume that:
1) data is transmitted in one direction only, while the other direction carries only acknowledgments;
2) the receiver always has a large enough cache, so that the size of the send window is determined by the degree of congestion in the network.

Slow start and congestion avoidance

The sender maintains a state variable called the congestion window cwnd (congestion window). The size of the congestion window depends on how congested the network is and changes dynamically. The sender makes its send window equal to the congestion window; taking the receiver's receiving capacity into account, the send window may be smaller than the congestion window. The principle by which the sender controls the congestion window is: as long as the network is not congested, the congestion window is increased so that more packets can be sent; but as soon as the network becomes congested, the congestion window is reduced to cut the number of packets injected into the network.

The idea behind the slow start algorithm is this: if TCP sends a large number of packets into the network immediately after the connection is established, it can easily exhaust the router caches in the network and cause congestion. A newly established connection therefore must not send a large burst of data at the start; instead, it gradually increases the amount of data sent according to the network conditions, in order to avoid this phenomenon. Specifically, when a new connection is created, cwnd is initialized to the size of 1 maximum segment (MSS), the sender starts sending data according to the congestion window size, and whenever a segment is acknowledged, cwnd increases by at most 1 MSS. In this way the congestion window cwnd grows gradually.

For convenience, the example below uses the number of segments as the unit of the congestion window to illustrate the slow start algorithm; in reality the congestion window is measured in bytes.
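As a minimal sketch (counting cwnd in segments to match the example above, not real TCP code), cwnd grows by one segment per acknowledgment, so it doubles once per round-trip time:

```python
# Slow start counted in segments: every ACK adds one segment to cwnd,
# so cwnd doubles each round-trip time: 1, 2, 4, 8, ...

cwnd = 1                      # congestion window, in segments
for rtt in range(4):
    print(f"RTT {rtt}: cwnd = {cwnd} segment(s)")
    acks_received = cwnd      # one ACK per segment sent in this round
    cwnd += acks_received     # +1 segment per ACK, i.e. cwnd doubles per RTT
# Prints cwnd values 1, 2, 4, 8
```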

A slow-start threshold, the state variable ssthresh, is also needed to prevent the growth of cwnd from causing network congestion. ssthresh is used as follows:

When cwnd < ssthresh, the slow start algorithm is used.

When cwnd > ssthresh, the congestion avoidance algorithm is used instead.

When cwnd = ssthresh, either slow start or the congestion avoidance algorithm may be used.

The idea of the congestion avoidance algorithm is to let the congestion window grow slowly: for each round-trip time RTT, the sender's congestion window cwnd is increased by 1 instead of being doubled. The congestion window therefore grows slowly, in a linear fashion.

Whether in the slow start phase or the congestion avoidance phase, as soon as the sender judges that the network is congested (the basis being that no acknowledgment is received; although a missing acknowledgment may have causes other than packet loss, it is treated as congestion because the sender cannot tell the difference), the slow-start threshold is set to half the size of the send window at the moment congestion occurred. The congestion window is then set to 1 and the slow start algorithm is executed. The purpose is to quickly reduce the number of packets the host sends into the network, so that the congested routers have enough time to work off the backlog of packets in their queues.
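The following is a minimal, illustrative sketch (in units of segments; the helper names on_rtt and on_timeout are chosen only for this example) of how cwnd evolves across slow start, congestion avoidance, and a timeout:

```python
# Illustrative evolution of cwnd, in segments, under slow start,
# congestion avoidance, and a retransmission timeout.

cwnd, ssthresh = 1, 16        # initial congestion window and slow-start threshold

def on_rtt(cwnd, ssthresh):
    """One round-trip in which every segment was acknowledged."""
    if cwnd < ssthresh:
        return cwnd * 2       # slow start: +1 per ACK, i.e. doubles per RTT
    return cwnd + 1           # congestion avoidance: +1 per RTT (additive increase)

def on_timeout(cwnd, ssthresh):
    """Retransmission timeout: the network is assumed to be congested."""
    ssthresh = max(cwnd // 2, 2)   # multiplicative decrease: halve the threshold
    cwnd = 1                       # restart from slow start
    return cwnd, ssthresh

for _ in range(8):
    cwnd = on_rtt(cwnd, ssthresh)
print(cwnd, ssthresh)              # 20 16: doubled 1->2->4->8->16, then +1 per RTT

cwnd, ssthresh = on_timeout(cwnd, ssthresh)
print(cwnd, ssthresh)              # 1 10: threshold halved to cwnd // 2, back to slow start
```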

Multiplicative decrease and additive increase

Multiplicative decrease: whenever a timeout occurs, whether in the slow start phase or the congestion avoidance phase, the slow-start threshold ssthresh is halved, i.e. set to half of the current congestion window (and at the same time the slow start algorithm is executed). When the network is congested frequently, ssthresh drops rapidly, which greatly reduces the number of packets injected into the network.

Additive increase: the congestion avoidance algorithm makes the congestion window grow slowly, to prevent the network from becoming congested too early.

Fast retransmit and fast recovery
A TCP connection can sometimes sit idle for a long time because it is waiting for a retransmission timer to expire; slow start and congestion avoidance do not handle this kind of problem well, so the fast retransmit and fast recovery congestion-control method was proposed. The fast retransmit algorithm does not abolish the retransmission timer; it simply retransmits a lost segment earlier in certain cases (if the sender receives three duplicate acknowledgments, it concludes that a segment has been lost and retransmits the missing segment immediately, without waiting for the retransmission timer to expire). Fast retransmit requires the receiver to send a duplicate acknowledgment immediately after receiving an out-of-order segment (so that the sender learns as early as possible that a segment has not arrived) rather than waiting until it sends data of its own and piggybacking the acknowledgment. The fast retransmit algorithm stipulates that as soon as the sender receives three duplicate acknowledgments in a row, it should immediately retransmit the segment the receiver has not yet received, without waiting for the retransmission timer to expire.
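Below is a minimal sketch of the fast-retransmit rule just described. The ack values stand for "the segment the receiver expects next", and the function name is an assumption made for the example:

```python
# Illustrative duplicate-ACK counting: retransmit a segment as soon as three
# duplicate acknowledgments have been received, without waiting for the timer.

def sender_on_acks(acks):
    """Return the segment to retransmit early, if any."""
    dup_count, last_ack = 0, None
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == 3:              # three duplicates in a row
                return f"fast retransmit segment {ack}"
        else:
            dup_count, last_ack = 0, ack
    return "no fast retransmit"

# Segment 3 was lost; segments 4, 5 and 6 arrived out of order, so the receiver
# keeps repeating the acknowledgment that asks for segment 3:
print(sender_on_acks([3, 3, 3, 3]))   # fast retransmit segment 3
```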

Fast retransmit is used together with the fast recovery algorithm, which consists of the following two points:

① When the sender receives three duplicate acknowledgments in a row, it executes the "multiplicative decrease" algorithm and halves the ssthresh threshold. However, the slow start algorithm is not executed next.

② Since the sender would not receive several duplicate acknowledgments in a row if the network were congested, the sender now believes that the network is probably not congested. Therefore, instead of performing the slow start algorithm, cwnd is set to the value of ssthresh and the congestion avoidance algorithm is then executed.

When the fast recovery algorithm is used, the slow start algorithm is executed only when the TCP connection is established and when a retransmission timeout occurs.
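The following sketch contrasts the two congestion signals described above (units are segments; the function and event names are illustrative assumptions): three duplicate ACKs trigger fast recovery, while only a timeout, or connection setup, falls back to slow start with cwnd = 1.

```python
# Illustrative reaction of the sender to the two congestion signals.

def react(event, cwnd, ssthresh):
    if event == "3 duplicate ACKs":          # fast retransmit + fast recovery
        ssthresh = max(cwnd // 2, 2)         # multiplicative decrease
        cwnd = ssthresh                      # skip slow start
        state = "congestion avoidance"
    elif event == "timeout":                 # network treated as congested
        ssthresh = max(cwnd // 2, 2)
        cwnd = 1
        state = "slow start"
    else:
        state = "no reaction"
    return cwnd, ssthresh, state

print(react("3 duplicate ACKs", cwnd=24, ssthresh=16))  # (12, 12, 'congestion avoidance')
print(react("timeout",          cwnd=24, ssthresh=16))  # (1, 12, 'slow start')
```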

The receive window is also called the advertised window. From the point of view of the receiver's flow control over the sender, the sender's send window must therefore not exceed the receive window rwnd advertised by the other side.

That is, the upper limit of the send window = min[rwnd, cwnd].
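A one-line illustration of this rule, with example byte values chosen arbitrarily: the effective send window is limited by both the receiver (rwnd) and the network (cwnd).

```python
rwnd, cwnd = 4000, 2500            # example values, in bytes
send_window = min(rwnd, cwnd)
print(send_window)                 # 2500: here the network is the bottleneck
```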

Random early detection (RED)

The congestion avoidance algorithms above are not tied to the network layer; in fact the network-layer policy that affects them most is the router's packet-discard policy. In the simplest case, a router processes incoming packets with a first-in-first-out (FIFO) policy. When the router's cache has no room for a packet, it discards the arriving packet; this is called the tail-drop policy. Tail drop causes packet loss, and the sender then considers the network congested. What is more serious is that there are many TCP connections in the network, and the segments of these connections are usually multiplexed over the same router paths. When a router performs tail drop, many TCP connections are affected at once, with the result that many TCP connections enter the slow-start state at the same time. In the terminology this is called global synchronization. Global synchronization causes network traffic to drop suddenly, and after the network returns to normal, traffic suddenly increases a great deal again.
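For reference, here is a minimal sketch of the tail-drop policy just described (the capacity and names are illustrative): a fixed-size FIFO queue simply discards any packet that arrives while the buffer is full.

```python
# Illustrative tail-drop FIFO queue with a fixed capacity.
from collections import deque

CAPACITY = 4
queue = deque()

def enqueue(packet):
    if len(queue) >= CAPACITY:
        return f"drop {packet} (tail drop)"   # the affected sender will see loss
    queue.append(packet)
    return f"enqueue {packet}"

for p in range(6):
    print(enqueue(p))    # packets 4 and 5 are dropped once the buffer holds 4
```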

To avoid global synchronization in the network, routers use random early detection (RED: Random Early Detection). The main points of the algorithm are as follows:

The router's queue maintains two parameters, a minimum queue-length threshold min and a maximum threshold max. Whenever a packet arrives, RED computes the average queue length and then handles the arriving packet according to one of three cases:

① If the average queue length is less than the minimum threshold, the newly arrived packet is put into the queue.

② If the average queue length is between the minimum and maximum thresholds, the packet is discarded with a certain probability.

③ If the average queue length is greater than the maximum threshold, the newly arrived packet is discarded.

RED does not wait until congestion has already occurred and then discard all packets at the tail of the queue. Instead, when it detects an early sign of network congestion (that is, when the router's average queue length exceeds a certain threshold), it randomly drops packets with probability p, so that congestion control acts only on individual TCP connections, thereby avoiding global congestion control.

The key to RED is choosing the three parameters, the minimum threshold, the maximum threshold, and the drop probability, and computing the average queue length. The minimum threshold must be large enough to keep the utilization of the router's output link high, and the gap between the maximum and minimum thresholds should also be large enough that the normal growth of the queue within one TCP round-trip time RTT still stays below the maximum threshold. Experience shows that making the maximum threshold equal to twice the minimum threshold is appropriate.

The average queue length is computed with a weighted (exponentially weighted moving) average, the same strategy TCP uses to estimate the round-trip time RTT: average = (1 - weight) × old average + weight × current queue length.
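The sketch below puts the three RED cases and the weighted average together. The threshold values, the weight, and max_p (the maximum drop probability) are illustrative assumptions, not values prescribed by the algorithm:

```python
# Illustrative RED decisions plus the weighted-average queue length.
import random

def red_decision(avg, min_th, max_th, max_p):
    """Decide what to do with an arriving packet given the average queue length."""
    if avg < min_th:
        return "enqueue"                     # case 1: below the minimum threshold
    if avg > max_th:
        return "drop"                        # case 3: above the maximum threshold
    # case 2: drop with a probability rising from 0 to max_p between the thresholds
    p = max_p * (avg - min_th) / (max_th - min_th)
    return "drop" if random.random() < p else "enqueue"

def update_average(avg, current_queue_len, weight):
    """Exponentially weighted average of the instantaneous queue length."""
    return (1 - weight) * avg + weight * current_queue_len

avg = 0.0
for qlen in (2, 5, 9, 14, 20):               # sampled instantaneous queue lengths
    avg = update_average(avg, qlen, weight=0.5)  # large weight so the effect is visible
    print(round(avg, 2), red_decision(avg, min_th=5, max_th=15, max_p=0.1))
```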
