BBR-based cloud disk speed-up practices



Editor's Note

The speed of a cloud disk is a hard metric in the industry and central to the product's reputation and image. Traditional acceleration approaches mostly rely on proxy servers, placing well-chosen proxies between users and the storage servers. This helps somewhat, but it does not address the root problem given domestic network conditions and how the network actually behaves. The BBR congestion control algorithm is very effective on long fat networks and well suited to wide area networks; in practice it raised speeds substantially. This article comes from the Qihoo 360 cloud disk division. Let's look at how the 360 cloud disk is accelerated with the BBR congestion control algorithm.


Introduction

For a data storage product, speed is the first metric of a cloud disk for both individual and corporate users, and a key factor by which users judge whether a cloud disk is good or bad. Better speed brings a better user experience and stronger user retention, so acceleration became an urgent demand.

Traditional TCP congestion control

1. Wide Area Network Environment

Today's wide area networks (WANs) typically combine high bandwidth, high latency, and a certain packet loss rate. There are two kinds of packet loss: congestion loss and error loss. Error loss is caused by faults during network transmission and affects roughly one packet in a thousand.


China has many second-tier ISPs, most of which share bandwidth, and their network buffers are shared as well. When a shared buffer fills up, packets are dropped. This kind of loss halves the sliding window and causes a sudden drop in transmission rate, even though no individual user's bandwidth is actually saturated.


Such networks are collectively called long fat networks: the round-trip time is long, but the bandwidth is large.


2. Traditional TCP congestion control algorithm

Traditional TCP congestion control aims to maximize use of the network bandwidth. A link is like a pipe: to fill it, you must estimate the pipe's internal capacity.

Internal capacity = pipe width (link bandwidth) × pipe length (round-trip delay)
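This internal capacity is the bandwidth-delay product (BDP) and can be computed directly; the link speed and round-trip time below are illustrative values, not measurements from the article:

```python
# Bandwidth-delay product (BDP): how many bytes fit "in flight" in the pipe.

def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Internal capacity = link bandwidth (converted to bytes/s) * round-trip delay."""
    return bandwidth_bps / 8 * rtt_seconds

# A 100 Mbit/s WAN link with a 40 ms round trip holds about 500 KB in flight:
print(round(bdp_bytes(100e6, 0.040)))  # 500000
```

A sender whose window is much smaller than this BDP can never fill the pipe, which is exactly the long-fat-network problem described above.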

The congestion control process: slow start, additive increase, multiplicative decrease. Slow start grows the sending window exponentially; when packet loss occurs, the sending window is quickly halved to reduce the sending rate.
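This loss-driven behavior can be sketched in a few lines, Reno-style, with windows measured in segments; this is an illustrative toy model, not kernel code:

```python
def aimd_step(cwnd: float, ssthresh: float, loss: bool):
    """One round trip of simplified TCP congestion control (Reno-style).

    Returns the new (cwnd, ssthresh); windows are in segments.
    """
    if loss:
        # Multiplicative decrease: halve the window on packet loss.
        ssthresh = max(cwnd / 2, 2)
        cwnd = ssthresh
    elif cwnd < ssthresh:
        cwnd *= 2          # slow start: exponential growth per RTT
    else:
        cwnd += 1          # congestion avoidance: additive increase
    return cwnd, ssthresh

cwnd, ssthresh = 1, 64
for loss in [False, False, False, True, False, False]:
    cwnd, ssthresh = aimd_step(cwnd, ssthresh, loss)
print(cwnd)  # 6: window halved at the loss, then grows by 1 per RTT
```

Note how a single loss event collapses the window from 8 to 4 segments, after which it recovers only one segment per round trip.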

3. Problems TCP congestion control cannot solve

Unable to determine the cause of packet loss

TCP cannot distinguish congestion loss from error loss. When loss comes from transmission errors, the bandwidth is not yet saturated, yet the window is halved anyway. On a long fat network with a nonzero loss rate, the sending window therefore converges to a very small value, and the sending rate becomes tiny.
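This collapse can be quantified with a simplified form of the well-known Mathis model for loss-based TCP, throughput ≈ MSS / (RTT · √p), with the constant factor omitted; the values below are illustrative, not measurements from the article:

```python
import math

def mathis_throughput_bps(mss_bytes: float, rtt_s: float, loss_rate: float) -> float:
    """Approximate upper bound on loss-based TCP throughput
    (simplified Mathis model, constant factor omitted):
    rate ~ MSS / (RTT * sqrt(p)), returned in bits per second."""
    return mss_bytes * 8 / (rtt_s * math.sqrt(loss_rate))

# With a 40 ms RTT and one-in-a-thousand random loss, a Reno-style sender
# is capped near 9 Mbit/s no matter how fat the pipe actually is:
print(mathis_throughput_bps(1460, 0.040, 0.001) / 1e6)  # roughly 9.2 (Mbit/s)
```

The bound shrinks with the square root of the loss rate, which is why even a 0.1% error-loss rate starves a loss-based sender on a long fat link.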

Buffer expansion (bufferbloat)

Networks contain buffers that absorb bursts of traffic. In the initial phase, packets are sent at an exponentially growing rate, so the buffer fills quickly; once it is full, packets are dropped. The loss makes the sending window collapse, after which the window and the buffer occupancy gradually shrink and converge. In this state neither the bandwidth nor the buffer is well utilized. The loss is interpreted as "the bandwidth is full", but in fact only the buffer was filled by the rapid ramp-up.

Figure 2.1 Buffer expansion

BBR Congestion Control

1. How BBR solves the two problems above

1. Ignore packet loss, since congestion loss cannot be distinguished from error loss.


2. Buffer expansion is caused by estimating bandwidth and latency at the same time. The sending window needs both parameters to compute the pipe's internal capacity, but measuring them simultaneously gives inaccurate results. To measure the maximum bandwidth you must fill the pipe, which makes latency high, because the buffer is full and packets queue in it. To measure the minimum latency, traffic must be low so that the buffer is nearly empty, but then the bandwidth estimate for the pipe is low too. So the two optima, maximum bandwidth and minimum latency, cannot be measured at the same moment. This is the essential reason traditional TCP struggles to fill the bandwidth of a long fat network.

Solution: estimate the bandwidth and the latency separately, then combine them to compute the most suitable internal capacity.
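The separate estimation can be sketched as keeping a maximum of recent delivered-bandwidth samples and a minimum of recent RTT samples, taken at different moments, and sizing the window from their product; the function and sample values below are illustrative, not BBR's actual implementation:

```python
# BBR's core idea (sketch): the max-bandwidth and min-RTT estimates are
# collected independently, then multiplied to get the pipe's capacity.

def bdp_from_estimates(bw_samples, rtt_samples) -> float:
    """BDP in bytes from independent estimates:
    max delivered bandwidth (bytes/s) * min round-trip time (s)."""
    return max(bw_samples) * min(rtt_samples)

bw = [10e6, 12e6, 11.5e6]     # delivered-rate samples, bytes/s (illustrative)
rtt = [0.052, 0.041, 0.045]   # RTT samples, seconds (illustrative)
print(round(bdp_from_estimates(bw, rtt)))  # 492000 bytes
```

The key point is that the best bandwidth sample and the best RTT sample come from different round trips, so neither measurement is polluted by the other.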

2. BBR congestion control process

Slow Start

Packets grow exponentially. Loss is ignored and the window is never halved; BBR only checks whether the effective bandwidth keeps growing, and stops when it no longer grows. Growing effective bandwidth indicates that the buffer is not yet being occupied.

Drain phase

After slow start, the number of packets in flight is still about three times the pipe's internal capacity. BBR now lowers the sending rate so that the excess packets do not occupy the buffer in the pipe and cause packet loss.

Bandwidth probing phase

Every eight round trips form a cycle. In the first round trip, BBR raises the sending rate to 5/4 of the bandwidth estimate to probe whether more bandwidth is available. In the second, it lowers the rate to 3/4 to drain any excess packets from the buffer and avoid expansion. The remaining six round trips send at the new bandwidth estimate. The cycle repeats until the real bandwidth is fully occupied, as shown in Figure 3.1.
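The eight-RTT cycle can be sketched as a list of pacing gains applied to the bandwidth estimate; this is a simplified sketch whose gain values follow the 5/4 and 3/4 figures above:

```python
# BBR bandwidth-probing cycle: one pacing gain per round trip, eight RTTs
# per cycle; pacing rate = gain * estimated bandwidth (simplified sketch).
PROBE_BW_GAINS = [1.25, 0.75, 1, 1, 1, 1, 1, 1]

def pacing_rate_bps(est_bw_bps: float, rtt_index: int) -> float:
    """Pacing rate for a given round trip within the 8-RTT cycle:
    probe up at 5/4, drain at 3/4, then cruise at the estimate."""
    return est_bw_bps * PROBE_BW_GAINS[rtt_index % 8]

est_bw = 100e6  # estimated bottleneck bandwidth in bit/s, illustrative
rates = [pacing_rate_bps(est_bw, i) / 1e6 for i in range(8)]
print(rates)  # [125.0, 75.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0]
```

If the 5/4 round trip delivers more data, the bandwidth estimate rises and the next cycle cruises at the higher rate; the 3/4 round trip immediately empties whatever the probe queued in the buffer.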

Latency probing phase

If no new minimum latency has been observed for 10 seconds, the sending window is reduced to four packets, and the minimum round-trip delay of the packets sent during this period is measured. The sending window then returns to its previous state.

Figure 3.1 Bandwidth probing keeps growing. Green is the number of packets; blue is the latency.

Figure 3.2 Packet loss rate vs. effective bandwidth. Green is BBR; red is traditional TCP.


3. BBR summary

In its initial phase BBR does not rush to fill the pipe, mainly to avoid the loss and latency caused by buffer expansion; afterwards it probes bandwidth and latency alternately. When probing bandwidth it first raises and then lowers the sending rate, which again avoids buffer expansion. With the loss rate reduced, valid ACKs keep arriving and the sending window keeps growing, so each cycle captures the maximum bandwidth. When probing latency the window drops to four packets; the buffer is then nearly empty and the path unobstructed, so the measured latency is low and accurate. Alternating the two probes yields an accurate internal capacity, and the drain step avoids the loss and delay that buffer expansion would cause.

4. Scenarios where BBR applies

1. High-bandwidth, high-latency networks with a certain packet loss rate.

2. Slow access networks with small buffers.

BBR practice in cloud disks

Kernel upgrade

"

Upgrade the proxy server kernel to 4.9 or above, then enable the BBR congestion control algorithm (the settings are appended to /etc/sysctl.conf, not written over it):

echo "net.core.default_qdisc=fq" >> /etc/sysctl.conf
echo "net.ipv4.tcp_congestion_control=bbr" >> /etc/sysctl.conf
sysctl -p

Verify that BBR is available and active:

sysctl net.ipv4.tcp_available_congestion_control
sysctl -n net.ipv4.tcp_congestion_control


Adjust TCP kernel parameters

Enable TCP window scaling so that the sliding window can exceed 64 KB:

sysctl -w net.ipv4.tcp_window_scaling=1

Acceleration result

Per capita speed improvement

Figure 4.1 per capita speed chart

Per capita speed increase: around 50%

Increase in the share of users per speed band

Figure 4.2 Share of users by speed band. Blue: 1 Mb/s-2 Mb/s; green: 2 Mb/s or more.


The share of users above 1 Mb/s increased by about 100%.

References:

[1] Cardwell, Neal, et al. "BBR: Congestion-Based Congestion Control." ACM Queue 14.5 (2016): 50.
