Load Balancing Algorithms and Methods


Load Balancer

A load balancer can be a dedicated device or an application running on an ordinary server. It distributes requests across servers that host the same content or provide the same service. Dedicated devices usually expose only Ethernet interfaces and can be thought of as multi-layer switches. A load balancer is typically assigned a virtual IP address, and all client requests are sent to that virtual IP; the load balancer then uses a load balancing algorithm to forward each request to the real IP address of one of the servers.

Load Balancing Algorithms

// server address -> weight
private Map<String, Integer> serverMap = new HashMap<String, Integer>() {{
    put("192.168.1.100", 1);
    put("192.168.1.101", 1);
    put("192.168.1.102", 4);
    put("192.168.1.103", 1);
    put("192.168.1.104", 1);
    put("192.168.1.105", 3);
    put("192.168.1.106", 1);
    put("192.168.1.107", 2);
    put("192.168.1.108", 1);
    put("192.168.1.109", 1);
    put("192.168.1.110", 1);
}};
1. Random and weighted random
    • Random: pick a server at random; weights can be used to set the selection probability. Over a short window the distribution may be uneven, but the larger the number of calls, the more even it becomes, and the weighted distribution approaches the configured probabilities, which makes it convenient to adjust provider weights dynamically.

public void random() {
    List<String> keyList = new ArrayList<String>(serverMap.keySet());
    Random random = new Random();
    int idx = random.nextInt(keyList.size());
    String server = keyList.get(idx);
    System.out.println(server);
}
    • Weighted random (weightRandom): each server is placed in a candidate list as many times as its weight, and one entry is then picked at random.

public void weightRandom() {
    Set<String> keySet = serverMap.keySet();
    List<String> servers = new ArrayList<String>();
    // Expand each server into the list as many times as its weight.
    for (Iterator<String> it = keySet.iterator(); it.hasNext();) {
        String server = it.next();
        int weight = serverMap.get(server);
        for (int i = 0; i < weight; i++) {
            servers.add(server);
        }
    }
    Random random = new Random();
    int idx = random.nextInt(servers.size());
    String server = servers.get(idx);
    System.out.println(server);
}
2. Round robin and weighted round robin
    • Round robin (Round Robin) works best when the servers in the pool have roughly the same processing capacity and each request costs roughly the same amount of work. Requests are rotated across the servers according to the configured weights. The drawback is that requests accumulate behind slow providers: if, say, the second machine is slow but not down, requests routed to it get stuck there, and over time more and more in-flight requests pile up on that machine.

private int pos = 0;
// pos++ would replace an Integer lock object, so synchronize on a dedicated lock instead.
private final Object lock = new Object();

public void roundRobin() {
    List<String> keyList = new ArrayList<String>(serverMap.keySet());
    String server = null;
    synchronized (lock) {
        if (pos >= keyList.size()) {
            pos = 0;
        }
        server = keyList.get(pos);
        pos++;
    }
    System.out.println(server);
}
    • Weighted round robin (Weighted Round Robin) attaches a weight to each server in the rotation. For example, with server 1 at weight 1, server 2 at weight 2, and server 3 at weight 3, the dispatch order is 1-2-2-3-3-3-1-2-2-3-3-3-...

public void weightRoundRobin() {
    Set<String> keySet = serverMap.keySet();
    List<String> servers = new ArrayList<String>();
    for (Iterator<String> it = keySet.iterator(); it.hasNext();) {
        String server = it.next();
        int weight = serverMap.get(server);
        for (int i = 0; i < weight; i++) {
            servers.add(server);
        }
    }
    String server = null;
    synchronized (lock) {
        // Wrap the index against the expanded (weighted) list, not the key set.
        if (pos >= servers.size()) {
            pos = 0;
        }
        server = servers.get(pos);
        pos++;
    }
    System.out.println(server);
}
3. Least connections and weighted least connections
    • Least connections (Least Connections): each new request goes to the server that is currently handling the fewest connections (sessions). Even when servers differ in processing capacity and requests differ in cost, this reduces the load on the busier servers to some extent.

    • Weighted least connections (Weighted Least Connections) attaches a weight to each server in the least-connections algorithm, pre-allocating how many connections each server should handle and routing each client request to the server with the fewest connections relative to its weight. A minimal sketch of both variants follows.
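
Neither variant comes with sample code in the original; the following is a minimal Java sketch, assuming a hypothetical activeConnections map that tracks the number of open connections per server (a real balancer would update it as connections open and close) and reusing serverMap for the weights.

// Hypothetical counters of currently open connections per server.
private Map<String, Integer> activeConnections = new HashMap<String, Integer>();

public void leastConnections() {
    String best = null;
    int bestCount = Integer.MAX_VALUE;
    for (String server : serverMap.keySet()) {
        int count = activeConnections.getOrDefault(server, 0);
        if (count < bestCount) {  // pick the server with the fewest connections
            bestCount = count;
            best = server;
        }
    }
    System.out.println(best);
}

public void weightedLeastConnections() {
    String best = null;
    double bestRatio = Double.MAX_VALUE;
    // Compare connection counts relative to each server's weight, so a server
    // with weight 4 is allowed roughly four times as many connections.
    for (Map.Entry<String, Integer> entry : serverMap.entrySet()) {
        int count = activeConnections.getOrDefault(entry.getKey(), 0);
        double ratio = (double) count / entry.getValue();
        if (ratio < bestRatio) {
            bestRatio = ratio;
            best = entry.getKey();
        }
    }
    System.out.println(best);
}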

4. Hash algorithms
    • Plain hash: hash a request attribute (for example the client's IP address) and take it modulo the number of servers.

public void hash() {
    List<String> keyList = new ArrayList<String>(serverMap.keySet());
    String remoteIp = "192.168.2.215";
    int hashCode = remoteIp.hashCode();
    int idx = hashCode % keyList.size();
    String server = keyList.get(Math.abs(idx));
    System.out.println(server);
}
    • Consistent hash: requests with the same key parameter are always sent to the same provider. When a provider goes down, the requests that used to reach it are spread over the remaining providers via virtual nodes, without causing a drastic reshuffle of the mapping. A minimal sketch follows.
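
The article describes consistent hashing but shows no code; the sketch below is a minimal illustration, assuming java.util.TreeMap is available, using 100 virtual nodes per server and String.hashCode() over "address#i" strings as a simplified hash function (real implementations usually use a better-distributed hash such as MD5 or MurmurHash).

public void consistentHash() {
    // Hash ring: position on the ring -> real server address.
    TreeMap<Integer, String> ring = new TreeMap<Integer, String>();
    int virtualNodes = 100; // assumed number of virtual nodes per server
    for (String server : serverMap.keySet()) {
        for (int i = 0; i < virtualNodes; i++) {
            ring.put((server + "#" + i).hashCode(), server);
        }
    }
    String remoteIp = "192.168.2.215";
    // First virtual node clockwise from the request's hash; wrap around if needed.
    Map.Entry<Integer, String> entry = ring.ceilingEntry(remoteIp.hashCode());
    if (entry == null) {
        entry = ring.firstEntry();
    }
    System.out.println(entry.getValue());
}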

5. IP Address Hash

An algorithm that hashes the source IP address (and the destination IP address) of incoming packets so that packets from the same sender (or addressed to the same destination) are always forwarded to the same server. When a client has a series of related operations that must be processed by the same server, this algorithm provides flow (session) affinity, guaranteeing that traffic from the same client is always handled by the same server. A short sketch follows.
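
As a rough illustration (not from the original article), the sketch below hashes the concatenation of the source and destination IP addresses, both made-up examples, so that packets belonging to the same flow always map to the same back-end server.

public void ipHash() {
    List<String> keyList = new ArrayList<String>(serverMap.keySet());
    String sourceIp = "192.168.2.215";  // example client address
    String destinationIp = "10.0.0.1";  // example service address
    int hashCode = (sourceIp + "->" + destinationIp).hashCode();
    int idx = Math.abs(hashCode % keyList.size());
    System.out.println(keyList.get(idx));
}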

6. URL Hash

An algorithm that hashes the URL of the client's request so that requests for the same URL are always forwarded to the same server. A short sketch follows.
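
The mechanism is the same modulo hash as above, only keyed on the request URL (the path below is a made-up example); requests for the same URL therefore hit the same server, which also benefits that server's cache.

public void urlHash() {
    List<String> keyList = new ArrayList<String>(serverMap.keySet());
    String requestUrl = "/product/12345";  // example request path
    int idx = Math.abs(requestUrl.hashCode() % keyList.size());
    System.out.println(keyList.get(idx));
}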

Load Balancing Methods (DNS -> data link layer -> IP layer -> HTTP layer)

1. DNS domain name resolution load balancing (delay)

Using DNS to handle load balancing during domain name resolution is another common approach. Multiple A records are configured on the DNS server, for example: www.mysite.com IN A 114.100.80.1, www.mysite.com IN A 114.100.80.2, www.mysite.com IN A 114.100.80.3.

Each resolution request is answered with a different IP address computed by the load balancing algorithm, so the servers configured in the A records form a cluster and requests are balanced across them.
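
The effect can be observed from a client with a short Java snippet: InetAddress.getAllByName returns every A record the resolver sees for a name (www.mysite.com is the article's example domain, not a real deployment).

import java.net.InetAddress;

public class DnsLookup {
    public static void main(String[] args) throws Exception {
        // All A records currently returned for the name; their order (and hence
        // which one a client picks first) is what spreads the load.
        InetAddress[] addresses = InetAddress.getAllByName("www.mysite.com");
        for (InetAddress address : addresses) {
            System.out.println(address.getHostAddress());
        }
    }
}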

The advantage of DNS-based load balancing is that the work is handed off to DNS, sparing the site the trouble of managing it; the drawback is that DNS and intermediate resolvers may cache the A records, which the site cannot control.

In practice, large web sites usually use DNS resolution only as the first level of load balancing, with a second level of load balancing performed inside the site.

2. Data link layer load balancing (LVS)

Data link layer load balancing means balancing the load by modifying MAC addresses at the data link layer of the communication stack.

This transmission mode is also known as the triangle transmission model. During distribution the load balancer does not modify the IP address, only the destination MAC address; all machines in the real physical server cluster are configured with the same virtual IP address as the load balancing server, which is what makes the scheme work. This method is also known as direct routing (DR).

When a user request reaches the load balancing server, it rewrites the destination MAC address of the request packet to the MAC address of a web server, without modifying the destination IP address, so the packet still reaches the target web server normally. After the server finishes processing, the response goes out through the server's gateway rather than back through the load balancing server, reaching the user's browser directly.

Link-layer load balancing with this triangle transmission mode is the most widely used load balancing technique in large web sites today. The best-known open-source link-layer load balancer on Linux is LVS (Linux Virtual Server).

3. IP load balancing (SNAT)

IP load balancing: balancing the load at the network layer by modifying the destination address of the request.

After a user's request packet reaches the load balancing server, the load balancer picks up the packet in the operating system kernel, chooses a real web server according to the load balancing algorithm, and rewrites the packet's destination IP address to that server's address; no user-space process is involved. When the real web server has finished processing, the response packet comes back to the load balancing server, which rewrites the packet's source address to its own IP address and sends it on to the user's browser.

The key issue is how the real web server's response packets get back to the load balancing server. One option is for the load balancing server to modify the source address as well as the destination address when forwarding, rewriting the packet's source address to its own IP, i.e. source network address translation (SNAT); the other is to make the load balancing server the gateway of the real physical servers, so that all response traffic passes through it anyway.

IP load balancing distributes packets inside the kernel, so its performance is better than that of reverse-proxy load balancing. However, since all response packets must pass through the load balancing server, its network card bandwidth becomes the bottleneck.

4. HTTP redirect load balancing (rarely used)

An HTTP redirect server is an ordinary application server whose only job is to compute a real server address from the user's HTTP request, write that address into an HTTP redirect response (status code 302), and return it to the browser; the browser then automatically requests the real server. A minimal servlet sketch follows.
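
A minimal sketch of such a redirect server as a Java servlet; the back-end server list and the round-robin choice are illustrative assumptions, not taken from the article.

import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class RedirectLoadBalancerServlet extends HttpServlet {
    private final List<String> servers =
            Arrays.asList("http://192.168.1.101", "http://192.168.1.102");
    private final AtomicInteger counter = new AtomicInteger(0);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Pick a real server round-robin and send the browser there with a
        // 302 redirect; the browser then requests that server directly.
        int idx = Math.floorMod(counter.getAndIncrement(), servers.size());
        resp.sendRedirect(servers.get(idx) + req.getRequestURI());
    }
}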

The advantage of this scheme is its simplicity. The disadvantages are that the browser needs two requests to complete one access, so performance is poor; that redirecting with a 302 response code may be judged by search engines as SEO cheating and lower the site's ranking; and that the redirect server itself can become a bottleneck. This scheme is therefore rarely seen in practice.

5. Reverse proxy load balancing (nginx)

A traditional (forward) proxy server sits on the browser's side and sends HTTP requests to the Internet on the browser's behalf, whereas a reverse proxy server sits on the web site's side and receives HTTP requests on behalf of the web servers.

One role of the reverse proxy is to protect the site: all requests from the Internet must pass through the proxy server, which puts a barrier between the web servers and potential network attacks.

The proxy server can also be configured to cache content and accelerate web requests. When a user first accesses some static content, it is cached on the reverse proxy server; subsequent users requesting the same content are served directly from the proxy, which speeds up responses and relieves load on the web servers.

In addition, the reverse proxy server can perform load balancing.

Because the reverse proxy forwards requests at the HTTP protocol level, this is also called application-layer load balancing. Its advantage is simple deployment; its disadvantage is that the reverse proxy itself may become the system's bottleneck.

Original: 1190000004492447
