A smooth weighted round-robin algorithm


Round-robin is a very common scheduling/load-balancing algorithm. Baidu Encyclopedia explains it as follows:

Round-robin scheduling is a channel-scheduling strategy in communications that lets users take turns using a shared resource without considering instantaneous channel conditions. From the perspective that each communication link is assigned the same amount of wireless resources (the same scheduling time slice), round-robin can be considered a fair schedule. From the perspective of providing the same quality of service to all links, however, it is unfair, because links with poor channel conditions would need more wireless resources (more time). In addition, because round-robin ignores instantaneous channel conditions, it yields lower overall system performance, though it provides more balanced service quality across links than maximum carrier-to-interference ratio scheduling.

Round-robin scheduling is used even more widely in service scheduling, especially in service-oriented and microservice architectures; well-known software such as LVS, Nginx, and Dubbo all use it. But as the encyclopedia entry above points out, round-robin has a big problem: it assumes all servers perform the same and schedules each one equally. In environments where server performance differs significantly, the weaker servers are dispatched just as often as the stronger ones, which is not what we want. So this article introduces weighted round-robin; plain round-robin can be seen as the special case of weighted round-robin in which every server has the same weight.

This article introduces the two algorithms used by Nginx and LVS, compares their advantages and disadvantages, and presents a common Go implementation of weighted round-robin, the weighted library, which can be used for load balancing, scheduling, microservice gateways, and similar scenarios.

WRR (weighted round-robin) also polls a group of service resources in turn, but the difference is that WRR assigns each resource a weight, and when a resource's turn comes it decides based on that weight whether the resource can serve the request. Because WRR is still based on round-robin, its fairness only shows over a window longer than one full polling cycle.

Below we introduce the weighted round-robin algorithms of Nginx and LVS. Both return a service object on each selection, but each has its own characteristics.

Nginx algorithm

Nginx's weighted round-robin implementation can be traced to a single commit: Upstream: smooth weighted round-robin balancing.

It implements not only weighted round-robin but also smoothing. "Smooth" means that over a period of time, not only does the number of times each server is selected match its weight, but the selections are also spread out evenly; the algorithm does not spend a stretch of time picking only the highest-weight server. With random selection or a naive weighted round-robin, it is relatively easy for one server in the set to come under too much pressure.

For example, given a group of servers with weights {a:5, b:1, c:1}, Nginx's smooth round-robin produces the selection sequence {a, a, b, a, c, a, a}, which is clearly smoother and more reasonable than the sequence {c, b, a, a, a, a, a} and does not concentrate accesses on server a.
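The smoothing effect can be reproduced in a few lines of Go. The following is a minimal sketch of the selection rule described in this article (the `server` struct and field names are my own, not Nginx's), omitting the effective-weight failure handling that Nginx layers on top:

```go
package main

import "fmt"

// server holds a fixed weight and a running currentWeight,
// following the scheme in the Nginx commit message.
type server struct {
	name          string
	weight        int
	currentWeight int
}

// next performs one selection round: every server's currentWeight
// grows by its weight, the server with the greatest currentWeight
// wins, and the winner's currentWeight is reduced by the total
// weight. This spreads out selections of a heavy server.
func next(servers []*server) *server {
	total := 0
	var best *server
	for _, s := range servers {
		s.currentWeight += s.weight
		total += s.weight
		if best == nil || s.currentWeight > best.currentWeight {
			best = s
		}
	}
	if best == nil {
		return nil
	}
	best.currentWeight -= total
	return best
}

func main() {
	servers := []*server{
		{name: "a", weight: 5},
		{name: "b", weight: 1},
		{name: "c", weight: 1},
	}
	for i := 0; i < 7; i++ {
		fmt.Print(next(servers).name, " ")
	}
	fmt.Println()
	// prints: a a b a c a a
}
```

Tracing the currentWeight values by hand for {a:5, b:1, c:1} reproduces exactly the smooth sequence shown above.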

The algorithm is as follows:

On each peer selection we increase current_weight of all eligible peers by its weight, select peer with greatest current_weight and reduce its current_weight by total number of weight points distributed among peers.

func nextWeighted(servers []*weighted) (best *weighted) {
	total := 0
	for i := 0; i < len(servers); i++ {
		w := servers[i]
		if w == nil {
			continue
		}
		w.currentWeight += w.effectiveWeight
		total += w.effectiveWeight
		if w.effectiveWeight < w.weight {
			w.effectiveWeight++
		}
		if best == nil || w.currentWeight > best.currentWeight {
			best = w
		}
	}
	if best == nil {
		return nil
	}
	best.currentWeight -= total
	return best
}

If you use the weighted library, you can schedule with this algorithm in just a few lines of code:

func ExampleW1() {
	w := &W1{}
	w.Add("a", 5)
	w.Add("b", 2)
	w.Add("c", 3)
	for i := 0; i < 10; i++ {
		fmt.Printf("%s ", w.Next())
	}
	// Output: a c b a a c a b c a
}

LVS algorithm

LVS uses a different algorithm; an introduction can be found in the wiki on its website.

The algorithm is represented by pseudo-code as follows:

Supposing that there is a server set S = {S0, S1, ..., Sn-1};
W(Si) indicates the weight of Si;
i indicates the server selected last time, and i is initialized with -1;
cw is the current weight in scheduling, and cw is initialized with zero;
max(S) is the maximum weight of all the servers in S;
gcd(S) is the greatest common divisor of all server weights in S;

while (true) {
    i = (i + 1) mod n;
    if (i == 0) {
        cw = cw - gcd(S);
        if (cw <= 0) {
            cw = max(S);
            if (cw == 0)
                return NULL;
        }
    }
    if (W(Si) >= cw)
        return Si;
}

As you can see, the logic is fairly simple, so it is also very fast. But when server weights differ a lot, it is not as smooth as Nginx's algorithm: it can put too much pressure on a high-weight server within a short period of time.
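To see the burst described above, here is a direct Go translation of the pseudo-code (the `lvsWRR` type and function names are mine, not from LVS or the weighted library). With the earlier weights {a:5, b:1, c:1}, it emits five a's in a row before b and c ever get a turn:

```go
package main

import "fmt"

// lvsWRR translates the LVS pseudo-code directly: i is the index
// selected last time, cw is the current weight threshold, stepped
// down by gcd(S) each time a full pass over the servers completes.
type lvsWRR struct {
	names   []string
	weights []int
	i       int
	cw      int
}

func gcd(a, b int) int {
	for b != 0 {
		a, b = b, a%b
	}
	return a
}

func newLvsWRR(names []string, weights []int) *lvsWRR {
	return &lvsWRR{names: names, weights: weights, i: -1, cw: 0}
}

// next returns the name of the next selected server, or "" if all
// weights are zero.
func (w *lvsWRR) next() string {
	n := len(w.names)
	for {
		w.i = (w.i + 1) % n
		if w.i == 0 {
			// Recompute gcd(S) and max(S); a real implementation
			// would cache these.
			g, maxW := w.weights[0], w.weights[0]
			for _, x := range w.weights[1:] {
				g = gcd(g, x)
				if x > maxW {
					maxW = x
				}
			}
			w.cw -= g
			if w.cw <= 0 {
				w.cw = maxW
				if w.cw == 0 {
					return ""
				}
			}
		}
		if w.weights[w.i] >= w.cw {
			return w.names[w.i]
		}
	}
}

func main() {
	w := newLvsWRR([]string{"a", "b", "c"}, []int{5, 1, 1})
	for i := 0; i < 7; i++ {
		fmt.Print(w.next(), " ")
	}
	fmt.Println()
	// prints: a a a a a b c
}
```

Contrast this with the smooth sequence {a, a, b, a, c, a, a} that Nginx's algorithm produces for the same weights.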

Using the weighted library here is just as simple as above; simply replace the type W1 with W2:

func ExampleW2() {
	w := &W2{}
	w.Add("a", 5)
	w.Add("b", 2)
	w.Add("c", 3)
	for i := 0; i < 10; i++ {
		fmt.Printf("%s ", w.Next())
	}
	// Output: a a a c a b c a b c
}

Performance comparison

As you can see, both approaches are very simple to use: create the corresponding W object, add the servers with their weights, and each call to Next returns the next server.

If server weights differ greatly, choose the Nginx algorithm for the sake of smoothness, to avoid hitting one server hard over a short period. If the differences are small, consider the LVS algorithm, since benchmarks show it performs better than the Nginx algorithm:

BenchmarkW1_Next-4    20000000    50.1 ns/op    0 B/op    0 allocs/op
BenchmarkW2_Next-4    50000000    29.1 ns/op    0 B/op    0 allocs/op

In fact, both are very fast: with 10 servers, each selection takes only tens of nanoseconds and allocates no extra objects, so whichever algorithm you use, this scheduling should not be the bottleneck of your system.
