Introduction to Hardware Load Balancing Equipment

The most commonly used hardware load balancers are F5 and Citrix NetScaler.

Load balancing is divided into global load balancing and local load balancing.

Local load balancing balances traffic across a local server group, while global load balancing refers to balancing traffic across server groups placed in different geographical locations and on different network structures.

Round-Robin DNS

Each time the domain name is resolved, the next IP address in a circular list is returned.
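
A minimal sketch of the idea (the domain and addresses below are invented for illustration): the record list is rotated by one position on every query, so successive clients see a different address first.

```python
from collections import deque

# Hypothetical A records for www.example.com; real addresses would come
# from the DNS zone configuration.
records = deque(["192.0.2.10", "192.0.2.11", "192.0.2.12"])

def resolve(domain: str) -> list:
    """Return the current record order, then rotate by one so the next
    query sees a different address first."""
    answer = list(records)
    records.rotate(-1)
    return answer

if __name__ == "__main__":
    for _ in range(4):
        print(resolve("www.example.com"))
```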

Load Balancing Routers

These apply a policy that sends requests to the fastest-responding server and can provide failover and failback. However, the load-balancing router itself must be maintained; typically two are deployed to avoid a single point of failure.

Examples include the Alteon 180 and F5 Networks' BIG-IP.

Load balancing can operate at different layers of the network.

Link aggregation (Layer 2 load balancing) combines multiple physical links into a single aggregated logical link. Network traffic is shared across all the physical links in the aggregated logical link, which logically increases link capacity to meet growing bandwidth needs.

Load balancing at Layers 4 through 7 is now the most frequently used.

Layer 4 load balancing maps a legally registered IP address on the Internet to the IP addresses of multiple internal servers and dynamically assigns one of those internal addresses to each TCP connection request, thereby achieving load balancing. This technique is widely used in Layer 4 switches: connection request packets destined for the server group's VIP (virtual IP address) flow through the switch, which examines the source and destination IP addresses and the TCP or UDP port numbers, applies a load balancing policy to map the VIP to a server IP address, and selects the best server in the server farm to handle the connection request.
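
As a rough illustration of the Layer 4 idea (a minimal sketch, not any vendor's implementation; the VIP and backend addresses are made up), the device inspects only the connection's addresses and ports and maps the 4-tuple to one of the real servers behind the VIP:

```python
import hashlib

# Hypothetical VIP-to-backend mapping; all addresses are illustrative only.
BACKENDS = {
    "203.0.113.10": ["10.0.0.1", "10.0.0.2", "10.0.0.3"],  # VIP -> real servers
}

def pick_backend(src_ip: str, src_port: int, dst_vip: str, dst_port: int) -> str:
    """Choose a backend for one TCP connection.

    Only Layer 3/4 fields are inspected; the same 4-tuple always maps to
    the same backend, so every packet of a connection reaches one server.
    """
    servers = BACKENDS[dst_vip]
    key = f"{src_ip}:{src_port}->{dst_vip}:{dst_port}".encode()
    index = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(servers)
    return servers[index]

if __name__ == "__main__":
    print(pick_backend("198.51.100.7", 52344, "203.0.113.10", 80))
```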

Layer 7 load balancing controls the content of application-layer services, providing a high-level way of steering access traffic, and is well suited to HTTP server groups. It performs load balancing by examining the HTTP headers flowing through the device and making forwarding decisions based on the information they contain.
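
As a simplified sketch of header-based dispatch (illustrative only; the server pools, addresses, and extension list are assumptions), a Layer 7 balancer might parse the request line and headers and choose a server pool by content type, as in points 2 and 3 below:

```python
# Hypothetical server pools; all addresses are illustrative only.
STATIC_POOL = ["10.0.1.1", "10.0.1.2"]    # text, images, archives
DYNAMIC_POOL = ["10.0.2.1", "10.0.2.2"]   # ASP, CGI and other dynamic content

STATIC_EXTENSIONS = (".html", ".htm", ".txt", ".jpg", ".png", ".gif", ".zip")

def parse_request(raw: bytes) -> tuple:
    """Split an HTTP request into method, path and a header dictionary."""
    head = raw.split(b"\r\n\r\n", 1)[0].decode("iso-8859-1")
    request_line, *header_lines = head.split("\r\n")
    method, path, _version = request_line.split(" ", 2)
    headers = {}
    for line in header_lines:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return method, path, headers

def choose_pool(path: str) -> list:
    """Send static documents to one pool and dynamic requests to another."""
    return STATIC_POOL if path.lower().endswith(STATIC_EXTENSIONS) else DYNAMIC_POOL

if __name__ == "__main__":
    request = b"GET /images/logo.png HTTP/1.1\r\nHost: www.example.com\r\n\r\n"
    method, path, headers = parse_request(request)
    print(headers.get("host"), "->", choose_pool(path)[0])
```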

The advantages of Layer 7 load balancing include the following:

1. By inspecting HTTP headers, the device can detect HTTP 4xx and 5xx error responses and transparently redirect the connection request to another server, avoiding application-layer failures.

2. Depending on the type of data flowing through (for example, whether a packet carries an image, a compressed file, or a multimedia format), traffic can be directed to the server holding the corresponding content, improving system performance.

3. Requests can be routed by type: static document requests such as plain text and images can be sent to one set of servers, while dynamic requests such as ASP or CGI can be sent to another, improving the performance and security of the system.

Disadvantage: Layer 7 load balancing is limited by the protocols it supports (generally only HTTP), which restricts its breadth of application. Inspecting HTTP headers also consumes significant system resources and affects performance; under a large number of connection requests, the load balancing device itself can easily become the bottleneck of overall network performance.

Load Balancing Policies:

1. Round Robin: Each request from the network is assigned to the internal servers in turn, from server 1 through server N, and then the cycle starts again. This algorithm is suitable when all servers in the group have identical hardware and software configurations and the average service request load is relatively even. (A combined sketch of this and several of the policies below follows the list.)

2. Weighted Round Robin: Each server is assigned a weight according to its processing capacity, so that it receives a proportional share of service requests. For example, if server A has a weight of 1, B a weight of 3, and C a weight of 6, then A, B, and C will receive 10%, 30%, and 60% of the service requests respectively. This algorithm ensures that high-performance servers get more of the traffic while low-performance servers are not overloaded.

3. Random: Requests from the network are assigned randomly to the internal servers.

4. Weighted Random: Similar to weighted round robin, except that servers are chosen randomly in proportion to their weights rather than in a fixed rotation.

5. Response Time: The load balancing device sends a probe request (such as a ping) to each internal server and assigns the client's service request to the server that responds to the probe fastest. This algorithm better reflects the current running state of the servers, but the "fastest response time" is measured between the load balancing device and the server, not between the client and the server.

6. Least Connection: The time a client request stays on a server can vary considerably, and as running time lengthens, the number of in-progress connections on each server may differ significantly, so simple round robin or random algorithms do not achieve true load balance. The least-connection algorithm keeps a record of the number of connections each internal server is currently handling and assigns each new connection request to the server with the fewest active connections, producing a balance that better matches the real load. It is suitable for services with long-lived requests, such as FTP.

7. Processing Capacity: Service requests are assigned to the internal server with the lightest processing load (based on server CPU model, number of CPUs, memory size, and current number of connections). Because it takes into account both the internal servers' processing capacity and current network conditions, this algorithm is relatively more accurate and is especially suitable for Layer 7 (application-layer) load balancing.

8. DNS Response (Flash DNS): On the Internet, whether for HTTP, FTP, or other services, clients generally find the server's IP address through domain name resolution. Under this algorithm, load balancing devices located in different geographical locations each receive the same client's domain resolution request and, at the same time, resolve the domain name to the IP address of their local server (the server in the same location as that load balancing device) and return it to the client. The client continues the request with the first resolved IP address it receives and ignores the later responses. This policy is suitable for global load balancing and is meaningless for local load balancing.
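
The following is a minimal sketch of how several of the policies above can be expressed (server names, weights, and connection counts are invented for illustration): round robin, weighted round robin, weighted random, and least connections.

```python
import itertools
import random

# Hypothetical backend servers with weights and live connection counts.
SERVERS = ["A", "B", "C"]
WEIGHTS = {"A": 1, "B": 3, "C": 6}          # same weights as the example above
ACTIVE_CONNECTIONS = {"A": 0, "B": 0, "C": 0}

# Policy 1 - round robin: hand out servers in a fixed cycle.
_rr = itertools.cycle(SERVERS)
def round_robin() -> str:
    return next(_rr)

# Policy 2 - weighted round robin: repeat each server by its weight, then cycle.
_wrr = itertools.cycle([s for s in SERVERS for _ in range(WEIGHTS[s])])
def weighted_round_robin() -> str:
    return next(_wrr)

# Policy 4 - weighted random: pick randomly in proportion to the weights.
def weighted_random() -> str:
    return random.choices(SERVERS, weights=[WEIGHTS[s] for s in SERVERS])[0]

# Policy 6 - least connections: pick the server with the fewest active connections.
def least_connections() -> str:
    return min(SERVERS, key=lambda s: ACTIVE_CONNECTIONS[s])

if __name__ == "__main__":
    print([weighted_round_robin() for _ in range(10)])  # A once, B three times, C six times
    ACTIVE_CONNECTIONS.update({"A": 5, "B": 2, "C": 9})
    print(least_connections())  # "B"
```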

Service failure detection methods and capabilities:

1. Ping detection: The server and network are checked with ping. This method is simple and fast, but it can only roughly determine whether the network and the server's operating system are functioning; it can do nothing to detect the application services running on the server.

2. TCP open detection: A TCP connection is opened to a service port on the server (such as Telnet on port 23 or HTTP on port 80) to determine whether the service is running normally (see the sketch after this list).

3. HTTP URL detection: For example, a request for a main.html file is sent to the HTTP server; if an error response is received, the server is considered to have failed.
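
A minimal sketch of the last two checks (the backend address, ports, and the main.html path are assumptions): a TCP connect confirms the port is open, and an HTTP GET confirms the application answers without an error status.

```python
import http.client
import socket

def tcp_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """TCP open detection: the service counts as up if the port accepts a connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_url_ok(host: str, path: str = "/main.html", timeout: float = 2.0) -> bool:
    """HTTP URL detection: the server counts as failed if the request errors out."""
    try:
        conn = http.client.HTTPConnection(host, 80, timeout=timeout)
        conn.request("GET", path)
        status = conn.getresponse().status
        conn.close()
        return status < 400
    except OSError:
        return False

if __name__ == "__main__":
    backend = "10.0.0.1"  # hypothetical internal server
    print("port 80 open:", tcp_open(backend, 80))
    print("main.html ok:", http_url_ok(backend))
```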
