6 kinds of load balancing algorithms


Transferred from: https://www.cnblogs.com/xrq730/p/5154340.html

What is load balancing

Load balancing (Load Balance) refers to organizing multiple servers in a symmetric way into a server set, where every server has equivalent status and can serve external requests on its own, without assistance from the others. Using a load-sharing technique, requests sent from outside are evenly distributed to one of the servers in this symmetric structure, and the server that receives a request responds to the client independently. Load balancing spreads client requests evenly across the server array, providing fast access to important data and solving the problem of handling a large number of concurrent requests; with minimal investment it can achieve performance close to that of a large mainframe.

Load balancing is divided into software load balancing and hardware load balancing. A representative of the former is LVS, developed by Alibaba's Dr. Zhang Wensong; representatives of the latter are hardware appliances such as F5. They are mentioned here only in passing and are not the focus of this article.

This article is about the various algorithms for "evenly distributing external requests to one server in the symmetric structure", and demonstrates a concrete implementation of each algorithm in Java code. Before getting into the topic, first write a class to simulate the IP list:

import java.util.HashMap;

public class IpMap
{
    // IP list to be routed; the key is the IP, the value is that IP's weight
    public static HashMap<String, Integer> serverWeightMap =
            new HashMap<String, Integer>();

    static
    {
        serverWeightMap.put("192.168.1.100", 1);
        serverWeightMap.put("192.168.1.101", 1);
        // weight 4
        serverWeightMap.put("192.168.1.102", 4);
        serverWeightMap.put("192.168.1.103", 1);
        serverWeightMap.put("192.168.1.104", 1);
        // weight 3
        serverWeightMap.put("192.168.1.105", 3);
        serverWeightMap.put("192.168.1.106", 1);
        // weight 2
        serverWeightMap.put("192.168.1.107", 2);
        serverWeightMap.put("192.168.1.108", 1);
        serverWeightMap.put("192.168.1.109", 1);
        serverWeightMap.put("192.168.1.110", 1);
    }
}

Polling (Round Robin) method

The polling (round robin) method routes each request to the next server in turn. Its code implementation is roughly as follows:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class RoundRobin
{
    private static Integer pos = 0;

    public static String getServer()
    {
        // Rebuild a map to avoid concurrency problems caused by servers going online or offline
        Map<String, Integer> serverMap =
                new HashMap<String, Integer>();
        serverMap.putAll(IpMap.serverWeightMap);

        // Get the list of IP addresses
        Set<String> keySet = serverMap.keySet();
        ArrayList<String> keyList = new ArrayList<String>();
        keyList.addAll(keySet);

        String server = null;
        synchronized (pos)
        {
            if (pos >= keySet.size())
                pos = 0;
            server = keyList.get(pos);
            pos++;
        }

        return server;
    }
}

Because the address list in serverWeightMap is dynamic (machines may go online, go offline, or go down), a new local variable serverMap is created inside the method to avoid possible concurrency problems: the contents of serverWeightMap are copied into it so that the copy in use cannot be modified by other threads. This introduces a new problem: changes made to serverWeightMap after the copy are not reflected in serverMap, that is, during this round of server selection the load balancing algorithm will not know about newly added or removed servers. Adding a server does no harm, but if a server has gone offline or is down, a non-existent address may be accessed. Therefore, the service caller needs appropriate fault-tolerant handling, such as re-initiating server selection and the invocation.
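
A minimal sketch of such caller-side fault tolerance, not part of the original article, might look like the following; the RetryCaller class and its invoke method are made-up placeholders, and only the idea of re-selecting a server after a failure comes from the paragraph above:

public class RetryCaller
{
    // Try up to maxAttempts servers, re-selecting a new one after each failure (assumes maxAttempts >= 1)
    public static String callWithRetry(int maxAttempts) throws Exception
    {
        Exception last = null;
        for (int i = 0; i < maxAttempts; i++)
        {
            String server = RoundRobin.getServer();
            try
            {
                return invoke(server);    // hypothetical remote call
            }
            catch (Exception e)
            {
                last = e;                 // the selected server may be offline; pick another one
            }
        }
        throw last;
    }

    // Placeholder for the actual remote invocation, not part of the original article
    private static String invoke(String server) throws Exception
    {
        return "response from " + server;
    }
}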

For the current polling position variable pos, locking is required in order to guarantee the order of server selection, so that only one thread at a time can modify the value of pos. Otherwise, when pos is modified concurrently, the order of server selection cannot be guaranteed, and the keyList index may even go out of bounds.

The advantage of the polling method is that it attempts to achieve absolute balance in request distribution.

The disadvantage of the polling method is that this absolute balance comes at a considerable cost: to guarantee mutual exclusion when modifying pos, the heavyweight pessimistic lock synchronized has to be introduced, which significantly reduces the concurrent throughput of this piece of polling code.
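
As a side note that is not in the original article, one common way to reduce this cost is a lock-free position counter based on java.util.concurrent.atomic.AtomicInteger; the LockFreeRoundRobin class below is only a sketch under that assumption:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeRoundRobin
{
    private static final AtomicInteger pos = new AtomicInteger(0);

    public static String getServer()
    {
        Map<String, Integer> serverMap = new HashMap<String, Integer>();
        serverMap.putAll(IpMap.serverWeightMap);
        ArrayList<String> keyList = new ArrayList<String>(serverMap.keySet());

        // getAndIncrement is atomic, so no synchronized block is needed;
        // the modulo (and abs, in case the counter overflows) keeps the index inside the list
        int index = Math.abs(pos.getAndIncrement() % keyList.size());
        return keyList.get(index);
    }
}

The trade-off is that strict selection order is no longer guaranteed when the server list changes between calls, which is usually acceptable for load balancing.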

Random method

Using the system's random function, one server is randomly selected from the back-end server list according to the list size. Probability theory tells us that as the number of calls increases, the actual effect approaches distributing the traffic evenly across the back-end servers, which is the same effect as polling.

The code implementation of the random method is roughly as follows:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class Random
{
    public static String getServer()
    {
        // Rebuild a map to avoid concurrency problems caused by servers going online or offline
        Map<String, Integer> serverMap =
                new HashMap<String, Integer>();
        serverMap.putAll(IpMap.serverWeightMap);

        // Get the list of IP addresses
        Set<String> keySet = serverMap.keySet();
        ArrayList<String> keyList = new ArrayList<String>();
        keyList.addAll(keySet);

        java.util.Random random = new java.util.Random();
        int randomPos = random.nextInt(keyList.size());

        return keyList.get(randomPos);
    }
}

The overall idea of the code is the same as the polling method: first rebuild serverMap, then obtain the server list. When selecting a server, a random value in the interval [0, keyList.size()) is obtained from Random's nextInt method, and the server address at that position in the list is returned. According to probability theory, the greater the throughput, the closer the effect of the random algorithm is to that of the polling algorithm.
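
To observe this convergence, a throwaway harness such as the following can count how often each address is returned; the LoadBalanceTest class is made up here and is not part of the original article:

import java.util.HashMap;
import java.util.Map;

public class LoadBalanceTest
{
    public static void main(String[] args)
    {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        // Call the random selector many times and tally the results;
        // with enough calls each IP should be hit roughly the same number of times
        for (int i = 0; i < 110000; i++)
        {
            String server = Random.getServer();
            Integer c = counts.get(server);
            counts.put(server, c == null ? 1 : c + 1);
        }
        for (Map.Entry<String, Integer> entry : counts.entrySet())
            System.out.println(entry.getKey() + ": " + entry.getValue());
    }
}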

Source address hash (hash) method

The idea of source address hashing is to take the client's IP address, compute a hash value from it with a hash function, and take that value modulo the size of the server list; the result is the index of the server to be accessed. The code implementation of the source address hashing algorithm is roughly as follows:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class Hash
{
    public static String getServer()
    {
        // Rebuild a map to avoid concurrency problems caused by servers going online or offline
        Map<String, Integer> serverMap =
                new HashMap<String, Integer>();
        serverMap.putAll(IpMap.serverWeightMap);

        // Get the list of IP addresses
        Set<String> keySet = serverMap.keySet();
        ArrayList<String> keyList = new ArrayList<String>();
        keyList.addAll(keySet);

        // In a web application the client IP can be obtained with HttpServletRequest's getRemoteAddr method
        String remoteIp = "127.0.0.1";
        int hashCode = remoteIp.hashCode();
        int serverListSize = keyList.size();
        int serverPos = Math.abs(hashCode % serverListSize);    // guard against a negative hashCode

        return keyList.get(serverPos);
    }
}

The first two parts are the same as in the polling and random methods, so they are not repeated; the difference is in the routing part. The hash value of the client's IP, remoteIp, is taken modulo the size of the server list, and the result is the index of the selected server in the server list.

The advantage of source address hashing is that, as long as the back-end server list does not change, requests from the same client IP address are always hashed to the same back-end server. Based on this property, a stateful session can be established between the service consumer and the service provider.

The disadvantage of the source address hashing algorithm is that, unless the servers in the cluster are very stable and essentially never change, once a server goes online or offline, the probability that requests are still routed to the same servers as before the change is very low. If sessions are involved, they are lost; if caches are involved, this may cause a cache "avalanche". If this explanation is hard to follow, you can read the consistent hashing part of my earlier article on Memcached.
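
For reference only (the original article defers to a separate write-up), the usual remedy is consistent hashing; a minimal sketch using a TreeMap-based hash ring with virtual nodes might look like this, where the virtual-node count and the use of String.hashCode() are arbitrary choices for illustration:

import java.util.SortedMap;
import java.util.TreeMap;

public class ConsistentHash
{
    private static final int VIRTUAL_NODES = 100;
    private static final TreeMap<Integer, String> ring = new TreeMap<Integer, String>();

    static
    {
        // Place several virtual nodes per server on the ring to spread keys more evenly
        for (String server : IpMap.serverWeightMap.keySet())
            for (int i = 0; i < VIRTUAL_NODES; i++)
                ring.put((server + "#" + i).hashCode(), server);
    }

    public static String getServer(String remoteIp)
    {
        // Walk clockwise from the client's hash to the first virtual node on the ring
        SortedMap<Integer, String> tail = ring.tailMap(remoteIp.hashCode());
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }
}

With this scheme, adding or removing one server only remaps the keys that fall in its arcs of the ring, instead of remapping almost every client as plain modulo hashing does.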

Weighted polling (Weight Round Robin) method

Different back-end servers may not have the same machine configuration or the same current system load, so their capacity to withstand pressure also differs. Machines with high configuration and low load are assigned a higher weight so that they handle more requests, while machines with low configuration and high load are assigned a lower weight, reducing their system load. Weighted polling handles this situation well and distributes requests to the back end in order, according to weight. The code implementation of the weighted polling method is roughly as follows:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class WeightRoundRobin
{
    private static Integer pos = 0;

    public static String getServer()
    {
        // Rebuild a map to avoid concurrency problems caused by servers going online or offline
        Map<String, Integer> serverMap =
                new HashMap<String, Integer>();
        serverMap.putAll(IpMap.serverWeightMap);

        // Get the list of IP addresses
        Set<String> keySet = serverMap.keySet();
        Iterator<String> iterator = keySet.iterator();

        // Add each address to the list as many times as its weight
        List<String> serverList = new ArrayList<String>();
        while (iterator.hasNext())
        {
            String server = iterator.next();
            int weight = serverMap.get(server);
            for (int i = 0; i < weight; i++)
                serverList.add(server);
        }

        String server = null;
        synchronized (pos)
        {
            if (pos >= serverList.size())
                pos = 0;
            server = serverList.get(pos);
            pos++;
        }

        return server;
    }
}

This is similar to polling, except that a weight computation is added before the server address is obtained: each address is added to the server address list as many times as its weight. The greater the weight, the more requests the corresponding server receives in each round.

Weighted random (Weight random) method

Like weighted polling, the weighted random method also assigns different weights according to the configuration and load of the back-end servers. The difference is that it selects a server at random according to weight, not in order. The code implementation of the weighted random method is roughly as follows:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class WeightRandom
{
    public static String getServer()
    {
        // Rebuild a map to avoid concurrency problems caused by servers going online or offline
        Map<String, Integer> serverMap =
                new HashMap<String, Integer>();
        serverMap.putAll(IpMap.serverWeightMap);

        // Get the list of IP addresses
        Set<String> keySet = serverMap.keySet();
        Iterator<String> iterator = keySet.iterator();

        // Add each address to the list as many times as its weight
        List<String> serverList = new ArrayList<String>();
        while (iterator.hasNext())
        {
            String server = iterator.next();
            int weight = serverMap.get(server);
            for (int i = 0; i < weight; i++)
                serverList.add(server);
        }

        java.util.Random random = new java.util.Random();
        int randomPos = random.nextInt(serverList.size());

        return serverList.get(randomPos);
    }
}

This code is essentially a combination of the random method and the weighted polling method; it is easy to understand and needs no further explanation.

Minimum number of connections (Least Connections) method

The methods above all aim at distributing the service consumers' requests evenly. Doing so does spread the workload across multiple back-end servers and improves their utilization, but is the real situation actually like that? Does balancing the requests really mean balancing the load? This is a question worth thinking about.

Put another way, the question is: observe the system's load from the perspective of the back-end servers, rather than from the perspective of the request initiator. The least connections method belongs to this category.

The least connections algorithm is flexible and intelligent. Because the back-end servers have different configurations and process requests at different speeds, it dynamically selects, based on the current connection state of the back-end servers, the server with the smallest number of outstanding connections to handle the current request, improving back-end utilization as much as possible and distributing the load reasonably across the machines. Because the least connections method involves tracking and sensing each server's connection count, its design and implementation are more involved, so its implementation is not shown here.
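
Purely as an illustration (the original article deliberately omits an implementation), the core idea might be sketched as follows, assuming the caller reports when each connection is opened and released; the LeastConnections class and its release method are made up here:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class LeastConnections
{
    // Current number of active connections per server, maintained via getServer/release
    private static final ConcurrentHashMap<String, AtomicInteger> connectionCounts =
            new ConcurrentHashMap<String, AtomicInteger>();

    static
    {
        for (String server : IpMap.serverWeightMap.keySet())
            connectionCounts.put(server, new AtomicInteger(0));
    }

    public static String getServer()
    {
        // Pick the server with the fewest active connections at this moment
        String best = null;
        int min = Integer.MAX_VALUE;
        for (Map.Entry<String, AtomicInteger> entry : connectionCounts.entrySet())
        {
            int current = entry.getValue().get();
            if (current < min)
            {
                min = current;
                best = entry.getKey();
            }
        }
        connectionCounts.get(best).incrementAndGet();
        return best;
    }

    // The caller must report when a connection finishes so the counts stay accurate
    public static void release(String server)
    {
        connectionCounts.get(server).decrementAndGet();
    }
}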
