With the rapid growth of traffic volume and data flow, the processing capacity and computing demands placed on the existing network rise accordingly, to the point where a single server can no longer cope. Simply discarding the existing equipment and carrying out a large hardware upgrade wastes the resources already in place, and the next jump in business volume forces another costly upgrade; eventually even the highest-performing device cannot keep up with the growth in demand.
Load balancing offers a cheap, effective, and transparent way to extend the bandwidth of existing network devices and servers, increase throughput, strengthen network data-processing capability, and improve network performance. The common approaches are described below.
1. DNS load balancing. The earliest load balancing technique was implemented through DNS: several addresses are configured under the same name, so a client that queries the name receives one of those addresses, different clients reach different servers, and the load is spread. DNS load balancing is simple and efficient, but it cannot distinguish between servers or take a server's current operating state into account.
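As a rough illustration of the DNS approach, the sketch below resolves a name that is assumed to have several A records and picks one of the returned addresses on the client side; the hostname is a placeholder, not taken from the article.

```python
import random
import socket

def resolve_one(hostname, port=80):
    # getaddrinfo returns every address configured under the name;
    # the client effectively ends up using one of them.
    answers = socket.getaddrinfo(hostname, port, type=socket.SOCK_STREAM)
    addresses = [entry[4][0] for entry in answers]
    return random.choice(addresses)

if __name__ == "__main__":
    # "www.example.com" stands in for a name with multiple A records.
    print(resolve_one("www.example.com"))
```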
2. Proxy server load balancing. A proxy server can forward requests to internal servers, and this acceleration mode noticeably improves access speed for static web pages. The same idea can also be used for balancing: a proxy that forwards incoming requests evenly to multiple servers spreads the load across them.
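A minimal sketch of the caching idea behind proxy acceleration, assuming a static page that can safely be served from memory on repeat requests; the URL handling is simplified and purely illustrative.

```python
import urllib.request

_cache = {}

def fetch(url):
    # Serve repeated requests for the same static page from the
    # proxy's memory instead of going back to the origin each time.
    if url not in _cache:
        with urllib.request.urlopen(url) as resp:
            _cache[url] = resp.read()
    return _cache[url]
```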
3. Address translation gateway load balancing. An address translation gateway that supports load balancing can map one external IP address to several internal IP addresses and dynamically assign each incoming TCP connection to one of the internal addresses, thereby balancing the load.
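The toy sketch below shows the mapping idea only: one public address fronting several internal hosts, with each new connection bound to the next internal address. All addresses are made up, and a real gateway would rewrite packets rather than keep a Python dictionary.

```python
import itertools

PUBLIC_IP = "203.0.113.10"           # the single address seen from outside
INTERNAL_IPS = itertools.cycle([      # the pool hidden behind it
    "192.168.1.11", "192.168.1.12", "192.168.1.13",
])

translation_table = {}                # (client_ip, client_port) -> internal host

def assign(client_ip, client_port):
    key = (client_ip, client_port)
    if key not in translation_table:
        # Each new TCP connection is dynamically given one internal address.
        translation_table[key] = next(INTERNAL_IPS)
    return translation_table[key]

if __name__ == "__main__":
    print(assign("198.51.100.7", 51000))  # 192.168.1.11
    print(assign("198.51.100.8", 42000))  # 192.168.1.12
```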
4. Load balancing supported inside protocols. Besides the three methods above, some protocols have load-balancing-related functions built in. HTTP, which runs at the top of the TCP/IP stack, is one example: its redirection capability can be used to send clients to different servers.
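A sketch of redirect-based balancing using HTTP's own redirection feature: every request is answered with a 302 pointing at the next backend, and the client then talks to that server directly. The backend URLs and port are assumptions, and http.server stands in for whatever front end would really issue the redirects.

```python
import itertools
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKENDS = itertools.cycle([
    "http://server1.example.com",
    "http://server2.example.com",
])

class RedirectBalancer(BaseHTTPRequestHandler):
    def do_GET(self):
        # Redirect the client to the next server in the rotation.
        self.send_response(302)
        self.send_header("Location", next(BACKENDS) + self.path)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectBalancer).serve_forever()
```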
5. NAT load balancing. NAT (Network Address Translation) simply translates one IP address into another and is typically used to map unregistered internal addresses onto legitimate, registered Internet IP addresses. It suits situations where Internet IP addresses are scarce or where the internal network structure should be hidden from the outside.
6. Reverse proxy load balancing. An ordinary proxy acts for internal users accessing servers on the Internet: the client must specify the proxy server, and connection requests that would otherwise go straight to the Internet are sent to the proxy for handling. A reverse proxy does the opposite: it accepts connection requests from the Internet, forwards them to a server on the internal network, and returns the result obtained from that server to the client on the Internet that requested the connection, so that the proxy itself appears to be the server. Reverse proxy load balancing dynamically distributes connection requests from the Internet across multiple internal servers in this reverse-proxy fashion.
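A minimal reverse-proxy sketch, under the assumptions that the internal servers speak plain HTTP at the addresses shown (invented for illustration) and that only GET requests matter; error handling, headers, and streaming are deliberately ignored.

```python
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

INTERNAL_SERVERS = itertools.cycle([
    "http://10.0.0.11:8000",
    "http://10.0.0.12:8000",
])

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the request to the next internal server ...
        with urllib.request.urlopen(next(INTERNAL_SERVERS) + self.path) as upstream:
            body = upstream.read()
            status = upstream.status
        # ... and relay the answer, so the proxy looks like the server.
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), ReverseProxy).serve_forever()
```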
7. Hybrid load balancing. In some large networks, server groups differ in hardware, size, and the services they provide, so the most appropriate balancing method can be chosen for each group, and then load balancing or clustering is applied again across those groups as they serve the outside world (that is, the groups are treated together as one new server farm) to obtain the best overall performance. We call this approach hybrid load balancing. It is also used when a single balancing device cannot by itself satisfy a large number of connection requests.
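A sketch of the two-tier idea only: once a request has been routed to a server group, that group applies its own balancing policy internally. The group names, hosts, and policies are invented for illustration.

```python
import itertools
import random

GROUPS = {
    "static-web": {"hosts": ["s1", "s2", "s3"], "policy": "round_robin"},
    "app":        {"hosts": ["a1", "a2"],       "policy": "random"},
}
_rotations = {name: itertools.cycle(g["hosts"]) for name, g in GROUPS.items()}

def pick(group_name):
    # Second tier: each group keeps its own balancing method.
    group = GROUPS[group_name]
    if group["policy"] == "round_robin":
        return next(_rotations[group_name])
    return random.choice(group["hosts"])

if __name__ == "__main__":
    print(pick("static-web"), pick("app"))
```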
At present, whether in enterprise networks, campus networks, or wide-area networks such as the Internet, business has grown far beyond the most optimistic past estimates. The Internet boom keeps producing new applications, and even a network configured optimally at the time of deployment soon feels overwhelmed. At the core of each network in particular, the data flow and computing demand make it impossible for a single device to keep up, and how to distribute traffic reasonably among multiple devices performing the same function, so that no device is overloaded while others sit idle, becomes a real problem. Load balancing mechanisms arose to solve it.
Load balancing is built on the existing network structure and provides a cheap and effective way to extend server bandwidth, increase throughput, strengthen network data-processing capability, and improve the flexibility and availability of the network. Its main tasks are: relieving network congestion; serving users from the nearest location, independent of geography, so they get better access quality; speeding up server response; improving the utilization of servers and other resources; and avoiding single points of failure at critical points in the network.
In fact, load balancing is not "balance" in the strict sense; in general it only shares the load that would pile up in one place among many places. If it were renamed "load sharing," it might be easier to understand. In plain terms, load balancing plays a role in the network much like a duty rota: tasks are handed out to everyone in turn so that no single person is worked to exhaustion. Balance in this sense, however, is usually static, a "rotation" strategy decided in advance.
Unlike a rotating duty system, dynamic load balancing uses tools to analyze packets in real time, keeps track of traffic conditions in the network, and assigns tasks accordingly. Structurally, load balancing is divided into local load balancing and regional (global) load balancing: the former balances load within a local server cluster, while the latter balances load among server clusters placed in different geographical locations and on different networks.
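A small sketch contrasting the two policies described here, static rotation versus a dynamic choice driven by measured load; the server names and connection counts are illustrative only.

```python
import itertools

SERVERS = ["web1", "web2", "web3"]

# Static balancing: a rotation fixed in advance, like a duty rota.
_rotation = itertools.cycle(SERVERS)

def pick_static():
    return next(_rotation)

# Dynamic balancing: follow the traffic and pick the least-loaded server.
active_connections = {"web1": 12, "web2": 3, "web3": 7}

def pick_dynamic():
    return min(active_connections, key=active_connections.get)

if __name__ == "__main__":
    print([pick_static() for _ in range(4)])  # ['web1', 'web2', 'web3', 'web1']
    print(pick_dynamic())                     # 'web2' (fewest connections)
```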
Each service node in a server cluster runs a separate copy of the required server program, such as a Web, FTP, Telnet, or e-mail server. For some services, such as those running on a Web server, a copy of the program runs on every host in the cluster, and Network Load Balancing spreads the workload across those hosts. For other services, such as e-mail, only one host handles the workload at a time; for these, Network Load Balancing directs traffic to that host and moves it to another host when the host fails.
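The failover behaviour described for single-host services can be sketched as a health check plus an ordered fallback; the host names, port, and TCP-connect check are assumptions for illustration.

```python
import socket

HOSTS = ["mail1.example.com", "mail2.example.com"]  # primary first, then backups

def is_alive(host, port=25, timeout=1.0):
    # Crude health check: can a TCP connection be opened?
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_host():
    # Traffic flows to the first healthy host; when it fails,
    # it moves to the next member of the cluster.
    for host in HOSTS:
        if is_alive(host):
            return host
    raise RuntimeError("no healthy host in the cluster")
```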