The strengths, weaknesses, and ease of implementation of a load balancing strategy depend on two key factors:
(1) the load balancing algorithm;
(2) how the condition of the network and servers is detected, and how capable that detection is.
1. Round Robin: Each request from the network is assigned to an internal server in turn, from server 1 through server N and then starting over. This algorithm suits server groups in which every server has the same hardware and software configuration and the average service request load is relatively even.
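A round-robin picker can be sketched in a few lines; the class and method names here are illustrative, not from the text:

```python
import itertools

class RoundRobinBalancer:
    def __init__(self, servers):
        # itertools.cycle wraps around after the last server automatically.
        self._cycle = itertools.cycle(servers)

    def pick(self):
        # Return the next server in turn: 1..N, then start over.
        return next(self._cycle)

balancer = RoundRobinBalancer(["A", "B", "C"])
print([balancer.pick() for _ in range(6)])  # ['A', 'B', 'C', 'A', 'B', 'C']
```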
2. Weighted Round Robin (Weighted Round Robin): Each server is assigned a weight according to its processing power, so that it receives a proportional share of the service requests. For example, if server A has weight 1, server B weight 3, and server C weight 6, then A, B, and C receive 10%, 30%, and 60% of the service requests respectively. This algorithm ensures that high-performance servers get more use while low-performance servers are not overloaded.
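One simple way to realize weighted round robin (a sketch, not the only implementation) is to expand each server by its weight and cycle through the expanded list, so that over one full pass each server receives requests in proportion to its weight:

```python
import itertools
from collections import Counter

def weighted_round_robin(weighted_servers):
    # Repeat each server name `weight` times, then cycle through the list.
    expanded = [name for name, weight in weighted_servers for _ in range(weight)]
    return itertools.cycle(expanded)

# Weights from the article's example: A=1, B=3, C=6 -> 10% / 30% / 60%.
picker = weighted_round_robin([("A", 1), ("B", 3), ("C", 6)])
print(Counter(next(picker) for _ in range(100)))
# Counter({'C': 60, 'B': 30, 'A': 10})
```

Production implementations often use a "smooth" variant instead, so that a heavy server's requests are interleaved rather than bunched together.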
3. Random: Requests from the network are assigned at random to the internal servers.
4. Weighted Random: Similar to weighted round robin, except that requests are distributed by a random draw weighted by each server's capacity rather than in a fixed rotation.
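In Python, a weighted random draw is a one-liner with the standard library; the function name below is my own:

```python
import random

def weighted_random_pick(weighted_servers):
    # random.choices draws a server with probability proportional to its weight.
    servers = [name for name, _ in weighted_servers]
    weights = [weight for _, weight in weighted_servers]
    return random.choices(servers, weights=weights, k=1)[0]

# With weights A=1, B=3, C=6, C should win roughly 60% of the draws.
random.seed(0)
picks = [weighted_random_pick([("A", 1), ("B", 3), ("C", 6)]) for _ in range(1000)]
print(picks.count("C") / len(picks))  # close to 0.6
```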
5. Response Time: The load balancer sends a probe request (such as a ping) to each internal server and assigns the client's service request to the server that responds to the probe fastest. This algorithm reflects the servers' current running state fairly well, but note that the "fastest response" is measured between the load balancer and the server, not between the client and the server.
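The selection step can be sketched as timing one probe per server and taking the minimum; here the probe is a stand-in callable rather than a real ping:

```python
import time

def fastest_server(servers, probe):
    # probe(server) sends one health probe; we time its round trip and
    # forward the request to the quickest responder.
    timings = {}
    for server in servers:
        start = time.perf_counter()
        probe(server)
        timings[server] = time.perf_counter() - start
    return min(timings, key=timings.get)

# Stand-in probe that sleeps a configured amount per server.
latency = {"A": 0.03, "B": 0.01, "C": 0.02}
print(fastest_server(latency, lambda s: time.sleep(latency[s])))  # B
```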
6. Least Connection: Client requests can occupy a server for very different lengths of time, and the longer they run, the more a simple round robin or random algorithm lets the number of open connections per server drift apart, so true load balancing is not achieved. The Least Connection algorithm keeps a record for each internal server of the number of connections it is currently handling and assigns each new service request to the server with the fewest connections, which tracks the real load much more closely. It suits services whose requests take a long time to process, such as FTP.
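The bookkeeping this describes is a per-server connection counter; a minimal sketch (names are my own):

```python
class LeastConnectionBalancer:
    def __init__(self, servers):
        # Number of in-flight connections per server.
        self.connections = {s: 0 for s in servers}

    def acquire(self):
        # Route the new request to the server with the fewest connections.
        server = min(self.connections, key=self.connections.get)
        self.connections[server] += 1
        return server

    def release(self, server):
        # Call when the connection finishes (e.g. the FTP transfer ends).
        self.connections[server] -= 1

lb = LeastConnectionBalancer(["A", "B", "C"])
first = lb.acquire()   # all idle, so the tie is broken by dict order: "A"
second = lb.acquire()  # "B"
lb.release(first)
print(lb.acquire())    # "A" again: it now has the fewest connections
```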
7. Processing Capacity: This algorithm assigns each service request to the internal server with the lightest processing load, computed from the server's CPU model, number of CPUs, memory size, current connection count, and so on. Because it accounts for both the servers' processing capacity and the current network condition, it is relatively accurate, and it is especially well suited to Layer 7 (application layer) load balancing.
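The text does not specify a formula, so the composite score below is a hypothetical example: it combines connection utilization and CPU load with arbitrary equal weights to pick the "lightest" server.

```python
def load_score(server):
    # Hypothetical composite score: higher means more heavily loaded.
    # The equal weighting of the two terms is an illustrative choice.
    return (server["connections"] / server["max_connections"]
            + server["cpu_load"]) / 2

def lightest_server(servers):
    return min(servers, key=lambda name: load_score(servers[name]))

servers = {
    "A": {"connections": 80, "max_connections": 100, "cpu_load": 0.9},
    "B": {"connections": 10, "max_connections": 100, "cpu_load": 0.2},
}
print(lightest_server(servers))  # B
```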
8. DNS Response (Flash DNS): On the Internet, whether for HTTP, FTP, or other services, a client usually finds the server's exact IP address through domain name resolution. Under this algorithm, load balancer devices in different geographic locations all receive the same client's domain name resolution request, and each resolves the name to the IP address of its co-located server, returning the answers to the client at about the same time. The client continues the request using the first IP address it receives and ignores the later responses. This strategy is suitable for global load balancing; for local load balancing it is meaningless.
Any of these load balancing algorithms can distribute data traffic across the servers reasonably well, but if the load balancing policy cannot detect the condition of the network and servers, then as soon as a server fails, or the network between the load balancer and a server fails, the load balancer will keep directing part of the traffic to that server. A large number of service requests will inevitably be lost, and uninterrupted availability is no longer achieved. A good load balancing strategy should therefore be able to detect network failures, server system failures, application service failures, and so on:
Service failure detection methods and capabilities:
(1) Ping detection: The network and the server's operating system are checked by pinging the server. This method is simple and fast, but it can only roughly determine whether the network and the operating system on the server are up; it says nothing about the application services running on the server.
(2) TCP Open detection: Each service listens on a TCP port, so the detector attempts a TCP connection to a port on the server (such as Telnet's port 23 or HTTP's port 80) to determine whether the service is up.
(3) HTTP URL detection: For example, an access request for the Main.html file is sent to the HTTP server; if an error response is received, the server is considered faulty.
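The last two checks can be sketched with the standard library; the function names are my own, and the demo stands up a throwaway local HTTP server on an ephemeral port rather than probing a real backend:

```python
import http.client
import http.server
import socket
import threading

def tcp_open(host, port, timeout=1.0):
    # TCP Open detection: attempt a real TCP handshake against the port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_url_ok(host, port, path="/", timeout=1.0):
    # HTTP URL detection: request a page and treat non-2xx as a failure.
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        ok = 200 <= conn.getresponse().status < 300
        conn.close()
        return ok
    except OSError:
        return False

# Demo: local server on port 0 lets the OS pick a free port.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()
print(tcp_open("127.0.0.1", port))    # True
print(http_url_ok("127.0.0.1", port)) # True: "/" serves a directory listing
server.shutdown()
```

Ping detection is omitted here because ICMP requires either raw sockets or shelling out to the platform's `ping` binary.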