TCP congestion control algorithms: advantages, disadvantages, and applicable environments

Abstract: This paper briefly describes the main TCP congestion control algorithms and points out their advantages, disadvantages, and applicable environments.

Keywords: TCP, congestion control algorithm, advantages and disadvantages, applicable environment, fairness

Fairness

Fairness means that, when congestion occurs, each source end (or each TCP connection or UDP datagram stream established at the same source) can share the same network resources (bandwidth, buffer space, and so on) equitably; sources at the same level should obtain the same amount of network resources. Fairness matters because congestion inevitably leads to packet loss, packet loss triggers competition among data flows for the limited network resources, and flows with weaker competing ability suffer the greater damage. Without congestion, therefore, there is no fairness problem.

The fairness problem at the TCP layer manifests itself in two ways:

(1) Connection-oriented TCP and connectionless UDP respond to and handle congestion indications differently when congestion occurs, which leads to unfair use of network resources. Under congestion, TCP flows with a congestion-control reaction mechanism enter the congestion avoidance phase according to the congestion control procedure and proactively reduce the amount of data they send into the network. Connectionless UDP, however, has no end-to-end congestion control mechanism: even when the network gives congestion indications (packet loss, duplicate ACKs, and so on), UDP does not reduce its sending rate the way TCP does. As a result, TCP flows that obey congestion control obtain fewer and fewer network resources, while UDP traffic without congestion control obtains more and more, which leads to seriously unfair allocation of network resources among the source ends.

The unfair allocation of network resources in turn aggravates congestion and may even cause congestion collapse. Determining whether each data flow strictly adheres to TCP congestion control, and how to "punish" behavior that does not comply with the congestion control protocol when congestion occurs, has therefore become a research hotspot. The fundamental way to solve the fairness problem of congestion control at the transport layer is to apply end-to-end congestion control mechanisms comprehensively.

(2) There is also a fairness problem among TCP connections themselves. It arises because some TCP connections use a larger window before congestion, have a smaller RTT, or send larger packets than other TCP connections, and therefore occupy more bandwidth.

RTT unfairness

The AIMD congestion window update policy also has shortcomings. Its additive-increase rule lets the sender grow the congestion window by one packet per round-trip time (RTT), so when different flows compete for the bandwidth of a network bottleneck, TCP flows with a smaller RTT grow their congestion windows faster than flows with a larger RTT and end up occupying more of the bandwidth.
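
To make the RTT bias concrete, the following is a minimal, idealized sketch (not a real TCP implementation; the flow names and numbers are illustrative assumptions) that grows two AIMD flows for the same wall-clock time and shows how much further the short-RTT flow gets:

# Idealized AIMD growth: +1 MSS per RTT during congestion avoidance.
# Two flows share a bottleneck; the short-RTT flow completes more
# round trips per second, so its window (and share of bandwidth) grows faster.

def aimd_window_after(seconds: float, rtt: float, start_cwnd: float = 10.0) -> float:
    """Congestion window (in packets) after `seconds` of loss-free growth."""
    round_trips = seconds / rtt
    return start_cwnd + round_trips  # additive increase: +1 packet per RTT

if __name__ == "__main__":
    elapsed = 10.0  # seconds without loss
    for name, rtt in [("short-RTT flow", 0.02), ("long-RTT flow", 0.2)]:
        print(f"{name}: rtt={rtt * 1000:.0f} ms, "
              f"cwnd after {elapsed:.0f}s = {aimd_window_after(elapsed, rtt):.0f} packets")

After 10 seconds the 20 ms flow has grown by 500 packets while the 200 ms flow has grown by only 50, which is exactly the imbalance described above.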

Additional notes

The quality of the links between China and the US is not very good: the RTT is long and packets are often dropped. TCP was designed to provide reliable transmission over unreliable links, that is, to cope with packet loss, but packet loss makes TCP's transmission speed drop sharply. HTTP uses TCP at the transport layer, so the speed of a page download depends on the speed of a single-threaded TCP download (because a web page is downloaded in a single thread).
Packet loss is the main reason TCP transmission speed drops so sharply; the cause lies in the TCP congestion control algorithm and its loss-retransmission mechanism.
A number of TCP congestion control algorithms are available in the Linux kernel; those that have been loaded can be listed through the kernel parameter net.ipv4.tcp_available_congestion_control.
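
As a small illustration (assuming a Linux host and Python 3.6 or later, where the socket module exposes TCP_CONGESTION), the loaded algorithms can be read from procfs and a specific algorithm can be requested per socket:

import socket

# Kernel-wide view of the loaded congestion control modules.
with open("/proc/sys/net/ipv4/tcp_available_congestion_control") as f:
    print("available:", f.read().strip())
with open("/proc/sys/net/ipv4/tcp_congestion_control") as f:
    print("default:", f.read().strip())

# Per-socket override: request a specific algorithm before connecting.
# "cubic" must appear in the "available" list above, otherwise setsockopt() fails.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")
raw = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print("this socket uses:", raw.split(b"\0", 1)[0].decode())
s.close()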

1. Vegas

In 1994, Brakmo proposed a new congestion control mechanism, TCP Vegas, which controls congestion from a different angle. As noted above, standard TCP congestion control is based on packet loss: only after a packet is dropped is the congestion window adjusted. But packet loss does not necessarily mean the network has become congested, whereas the RTT is closely related to the state of the network, so TCP Vegas uses changes in the RTT to judge whether the network is congested and adjusts the congestion window accordingly. If the RTT is found to be growing, Vegas concludes that the network is becoming congested and starts to reduce the congestion window; if the RTT shrinks, Vegas concludes that the congestion is easing and increases the congestion window again. Because Vegas uses RTT variation rather than packet loss to estimate the bandwidth available in the network, it estimates that bandwidth more accurately and achieves higher efficiency.

However, Vegas has a flaw that can be called fatal and that ultimately prevented it from being deployed on the Internet at scale: a flow using TCP Vegas competes for bandwidth less effectively than flows that do not. The reason is that as soon as routers in the path buffer data, the RTT grows; as long as the buffers do not overflow there is no packet loss, but the queued data adds processing delay and inflates the RTT. This is especially pronounced on low-bandwidth links, and even more so on wireless networks, where the RTT rises sharply as soon as transmission begins. In that situation TCP Vegas reduces its own congestion window, while standard TCP, as described above, does not reduce its window as long as no packet is lost. The two therefore become unfair, the cycle repeats, and TCP Vegas ends up with very low efficiency. In fact, if all TCP flows used the Vegas congestion control mode, fairness among flows would be quite good; the poor competitiveness is not a problem of the Vegas algorithm itself.
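
A minimal sketch of the Vegas idea, not the actual Linux tcp_vegas module: compare the throughput expected from the minimum observed RTT with the throughput actually achieved, estimate how many packets are queued in the network, and nudge the window by one packet per RTT. The alpha/beta thresholds below are commonly cited values and should be read as assumptions of this sketch.

def vegas_update(cwnd: float, base_rtt: float, current_rtt: float,
                 alpha: float = 2.0, beta: float = 4.0) -> float:
    """One per-RTT Vegas adjustment (illustrative, window in packets, RTT in seconds)."""
    expected = cwnd / base_rtt               # throughput if nothing were queued
    actual = cwnd / current_rtt              # throughput actually achieved
    queued = (expected - actual) * base_rtt  # estimated packets sitting in router queues

    if queued < alpha:      # queues nearly empty: room to grow
        return cwnd + 1
    if queued > beta:       # queues building up: back off before any loss occurs
        return cwnd - 1
    return cwnd             # inside the target band: hold steady

# Example: queuing inflates the RTT from 100 ms to 150 ms, so Vegas shrinks the window.
print(vegas_update(cwnd=30, base_rtt=0.100, current_rtt=0.150))  # prints 29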

Applicable environment: difficult to deploy at large scale on the Internet (weak competitiveness for bandwidth).

2. Reno

Reno is currently the most widely used and mature algorithm. It includes the slow start, congestion avoidance, fast retransmit, and fast recovery mechanisms, and it is the basis of many existing algorithms. From Reno's operating mechanism it is easy to see that, in order to maintain a dynamic balance, a certain amount of packet loss must be generated periodically. Combined with the AIMD behavior of reducing quickly but growing slowly, and especially in large-window environments, the window reduction caused by the loss of a single segment takes a long time to recover from, so bandwidth utilization cannot be high; as link bandwidth keeps increasing, this drawback becomes more and more apparent. Regarding fairness, statistics show that Reno's fairness is well regarded: it maintains the fairness principle reasonably well across fairly large networks.

The Reno algorithm is widely used because of its simplicity, effectiveness and robustness.

However, it cannot efficiently handle the loss of multiple packets from the same window of data. This problem is solved in the NewReno algorithm.
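
The Reno phases described above can be condensed into a rough sketch (per-ACK window arithmetic in packet units, ignoring SACK, timers, and many real-world details):

class RenoSketch:
    """Illustrative Reno-style window management, windows measured in packets."""

    def __init__(self):
        self.cwnd = 1.0
        self.ssthresh = 64.0

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0               # slow start: roughly doubles every RTT
        else:
            self.cwnd += 1.0 / self.cwnd   # congestion avoidance: about +1 packet per RTT

    def on_triple_dupack(self):
        # fast retransmit / fast recovery: halve the window, do not restart slow start
        self.ssthresh = max(self.cwnd / 2.0, 2.0)
        self.cwnd = self.ssthresh

    def on_timeout(self):
        # retransmission timeout: fall back to slow start from one packet
        self.ssthresh = max(self.cwnd / 2.0, 2.0)
        self.cwnd = 1.0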

Protocols based on packet-loss feedback

In recent years, with the spread of high bandwidth-delay product networks, many new TCP improvements based on packet-loss feedback have appeared, including HSTCP, STCP, BIC-TCP, CUBIC, and H-TCP.

Generally speaking, protocols based on packet-loss feedback implement a passive congestion control mechanism: they judge that the network is congested from packet loss events. Even if the load in the network is already high, such a protocol will not proactively lower its sending rate as long as no congestion losses occur. On one hand, this maximizes the bandwidth the flow can obtain and improves throughput. On the other hand, for a loss-based congestion control protocol, the aggressive probing near network saturation that greatly improves bandwidth utilization also means that the next congestion loss event is not far away. These protocols therefore raise network bandwidth utilization but also indirectly raise the packet loss rate, making the whole network jitter more severely.

TCP friendliness

Loss-based protocols such as BIC-TCP, HSTCP, and STCP greatly improve their own throughput, but they also seriously reduce the throughput of coexisting Reno flows. They exhibit such poor TCP friendliness because of their aggressive congestion window management: these algorithms generally assume that as long as no packet loss occurs the network must have spare bandwidth, and so they keep raising their sending rate. Viewed macroscopically over time, the sending rate follows a concave growth curve, climbing ever faster as it approaches the peak of the network bandwidth. This not only causes a large number of congestion losses but also aggressively seizes the bandwidth of other flows coexisting in the network, reducing the fairness of the whole network.

3. HSTCP (High Speed TCP)

HSTCP (High Speed Transmission Control Protocol) is a congestion control algorithm for high-speed networks based on AIMD (additive increase, multiplicative decrease); it improves throughput more effectively in high-speed, high-delay networks. It modifies the standard TCP congestion avoidance algorithm's increase and decrease parameters so that the window grows quickly and shrinks slowly, keeping the window in a large enough range to make full use of the bandwidth. In high-speed networks it can obtain far more bandwidth than TCP Reno, but it suffers from very serious RTT unfairness. Fairness here refers to the equal share of network resources obtained by multiple flows sharing the same network bottleneck.

The TCP sender dynamically adjusts the HSTCP congestion-window increase function according to the network's expected packet loss rate.

Window growth during congestion avoidance: cwnd = cwnd + a(cwnd)/cwnd

Window reduction after packet loss: cwnd = (1 - b(cwnd)) * cwnd

Here a(cwnd) and b(cwnd) are two functions. In standard TCP, a(cwnd) = 1 and b(cwnd) = 0.5. To remain TCP friendly at small windows, that is, in environments that are not high-BDP, HSTCP uses the same a and b as standard TCP; when the window is large (above the threshold Low_Window = 38), new a and b values are adopted to meet the high-throughput requirement. See RFC 3649 for details.
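
The following sketch shows one way to compute those window-dependent parameters, using the interpolation constants from RFC 3649 (Low_Window = 38, High_Window = 83000, High_P = 1e-7, High_Decrease = 0.1); it is an illustration, not a conforming implementation:

import math

LOW_WINDOW = 38.0               # below this, behave exactly like standard TCP
HIGH_WINDOW = 83000.0           # target window at the high end of the response function
LOW_P = 1.5 / LOW_WINDOW ** 2   # ~1e-3, standard TCP loss rate at Low_Window
HIGH_P = 1e-7                   # target loss rate at High_Window
HIGH_DECREASE = 0.1             # decrease factor at High_Window

def _frac(w: float) -> float:
    """Position of window w between Low_Window and High_Window on a log scale."""
    return (math.log(w) - math.log(LOW_WINDOW)) / (math.log(HIGH_WINDOW) - math.log(LOW_WINDOW))

def b(w: float) -> float:
    """Multiplicative decrease factor b(w), log-interpolated as in RFC 3649."""
    if w <= LOW_WINDOW:
        return 0.5
    return (HIGH_DECREASE - 0.5) * _frac(w) + 0.5

def a(w: float) -> float:
    """Additive increase a(w), derived from b(w) and the HSTCP response function."""
    if w <= LOW_WINDOW:
        return 1.0
    p = math.exp(math.log(LOW_P) + _frac(w) * (math.log(HIGH_P) - math.log(LOW_P)))
    return w * w * p * 2.0 * b(w) / (2.0 - b(w))

def on_ack(cwnd: float) -> float:
    return cwnd + a(cwnd) / cwnd        # congestion avoidance step per ACK

def on_loss(cwnd: float) -> float:
    return cwnd * (1.0 - b(cwnd))       # multiplicative decrease on loss

print(round(a(83000), 1), round(b(83000), 2))  # roughly 72 and 0.1, close to the RFC 3649 table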

4. Westwood

In wireless networks, extensive research has found TCP Westwood to be a fairly ideal algorithm. Its main idea is to estimate the available bandwidth by continuously monitoring the arrival rate of ACKs at the sender; when congestion occurs, the congestion window and the slow-start threshold are set according to this bandwidth estimate, giving an AIAD (additive increase, adaptive decrease) congestion control mechanism. It improves throughput over wireless networks and also shows good fairness and interoperability with existing networks. Its problem is that it cannot distinguish congestion losses from wireless losses during transmission, so the congestion mechanism is invoked too frequently.
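
A minimal sketch of the Westwood idea, estimating bandwidth from ACK arrivals and sizing ssthresh from that estimate; the smoothing factor and units are illustrative assumptions, not the exact Linux tcp_westwood filter:

class WestwoodSketch:
    """Illustrative TCP Westwood-style bandwidth estimation (bytes, seconds, packets)."""

    def __init__(self, mss: int = 1460):
        self.mss = mss
        self.bwe = 0.0          # smoothed bandwidth estimate, bytes per second
        self.rtt_min = None     # smallest RTT observed so far, seconds
        self.cwnd = 10.0        # congestion window, packets
        self.ssthresh = 64.0

    def on_ack(self, acked_bytes: int, interval: float, rtt: float):
        sample = acked_bytes / interval            # instantaneous rate implied by this ACK
        self.bwe = 0.9 * self.bwe + 0.1 * sample   # simple low-pass filter (assumption)
        self.rtt_min = rtt if self.rtt_min is None else min(self.rtt_min, rtt)

    def on_congestion(self):
        # Adaptive decrease: size the pipe from the bandwidth estimate
        # instead of blindly halving the window.
        self.ssthresh = max(2.0, self.bwe * self.rtt_min / self.mss)
        if self.cwnd > self.ssthresh:
            self.cwnd = self.ssthresh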

5. H-TCP

Among algorithms for high-performance networks, H-TCP has the best overall performance, but it suffers from RTT unfairness and is unfriendly to low-bandwidth flows.

6. BIC-TCP

BIC-TCP's disadvantages: first, it is overly aggressive. On links with small bandwidth or short delay, BIC-TCP's growth function grabs bandwidth much more aggressively than standard TCP; its probing phase is effectively a restarted slow start, whereas standard TCP, once stable, only grows the window linearly and does not perform slow start again. Second, BIC-TCP's window control is divided into binary search increase, max probing, and then the Smax and Smin stages, which complicates the algorithm and also makes analytical models of the protocol's performance harder to build. In low-RTT and low-speed environments BIC can be too "aggressive", so it was further improved into CUBIC. BIC was the default algorithm in Linux before CUBIC was adopted.
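
A simplified sketch of the binary search increase between the window at the last loss (w_max) and the reduced window (w_min); the Smax/Smin/beta values follow commonly cited defaults and should be treated as assumptions:

S_MAX = 32.0   # largest per-RTT increment, packets (assumed default)
S_MIN = 0.01   # smallest per-RTT increment, packets (assumed default)
BETA = 0.8     # BIC's multiplicative decrease factor

def bic_on_loss(cwnd: float):
    """Return (new_cwnd, w_min, w_max) after a loss event."""
    w_max = cwnd
    new_cwnd = cwnd * BETA
    return new_cwnd, new_cwnd, w_max

def bic_per_rtt_increase(cwnd: float, w_min: float, w_max: float) -> float:
    """One round of binary search increase toward the midpoint of [w_min, w_max]."""
    if cwnd < w_max:
        target = (w_min + w_max) / 2.0              # probe the midpoint of the interval
        step = min(max(target - cwnd, S_MIN), S_MAX)
    else:
        step = S_MAX                                # past w_max: max probing (simplified)
    return cwnd + step

In the real algorithm, each loss-free round also raises w_min to the current window, so the search interval keeps narrowing until the window converges near w_max and max probing begins.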

7. CUBIC

In its design, CUBIC simplifies BIC-TCP's window adjustment algorithm. BIC-TCP's window adjustment produces a growth curve with concave and convex segments (concave and convex here in the mathematical sense, as in concave/convex functions). CUBIC replaces that growth curve with a cubic function, whose curve likewise contains concave and convex parts and closely resembles BIC-TCP's. In addition, a key point of CUBIC is that its window growth function depends only on the time elapsed between two consecutive congestion events, so window growth is completely independent of the network latency (RTT). Whereas HSTCP, described earlier, suffers from serious RTT unfairness, CUBIC's RTT independence allows it to maintain good RTT fairness among multiple TCP connections sharing a bottleneck link.

CUBIC is a congestion control protocol for TCP (Transmission Control Protocol) and the current default TCP algorithm in Linux. The protocol modifies the linear window growth function of existing TCP standards to be a cubic function in order to improve the scalability of TCP over fast and long-distance networks. It also achieves more equitable bandwidth allocation among flows with different RTTs (round-trip times) by making the window growth independent of RTT, so that these flows grow their congestion windows at the same rate. During steady state, CUBIC increases the window size aggressively when the window is far from the saturation point, and then slowly when it is close to the saturation point. This feature allows CUBIC to be very scalable when the bandwidth-delay product of the network is large and, at the same time, to be highly stable and fair to standard TCP flows.
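
The cubic growth curve itself is easy to state. A sketch, using the C = 0.4 scaling constant and the 0.7 decrease factor used by the Linux implementation (both taken here as assumptions of the illustration):

C = 0.4      # scaling constant for the cubic term
BETA = 0.7   # the window is reduced to BETA * w_max on a loss event

def cubic_window(t: float, w_max: float) -> float:
    """Target congestion window t seconds after the last loss event.

    K is the time at which the curve returns to w_max; growth is concave
    below w_max (fast, then flattening) and convex above it (probing).
    Crucially, t is wall-clock time rather than a count of RTTs, which is
    what makes CUBIC's growth independent of RTT.
    """
    k = ((w_max * (1.0 - BETA)) / C) ** (1.0 / 3.0)
    return C * (t - k) ** 3 + w_max

w_max = 100.0                      # window (packets) at the last loss event
print(cubic_window(0.0, w_max))    # ~70: right after the loss, cwnd = BETA * w_max
print(cubic_window(4.2, w_max))    # ~100: plateau around w_max before probing upward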

8. STCP

STCP: Scalable TCP.

The STCP algorithm was proposed by Tom Kelly in 2003. It adjusts the size of the sending window by modifying TCP's window increase and decrease parameters to suit high-speed network environments. The algorithm achieves high link utilization and stability, but its window growth rate is inversely proportional to the RTT, so it shows a degree of RTT unfairness; when coexisting with traditional TCP flows it occupies an excessive share of the bandwidth, so its TCP friendliness is also poor.
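
Scalable TCP's update rule is simple enough to state directly; a sketch using the constants from Kelly's proposal (increase 0.01 per ACK, decrease by 1/8 on loss):

A = 0.01    # increase per received ACK, packets
B = 0.125   # fraction of the window shed on a loss event

def stcp_on_ack(cwnd: float) -> float:
    # The per-RTT gain is cwnd * A, so recovery time after a loss is roughly
    # constant regardless of window size (hence "scalable"); because that gain
    # is realized once per RTT, short-RTT flows still grow faster, which is
    # the source of the RTT unfairness noted above.
    return cwnd + A

def stcp_on_loss(cwnd: float) -> float:
    return cwnd * (1.0 - B)   # i.e. cwnd * 0.875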
