[Switch] F5 Load Balancing Principle


I. Server Load Balancing Technology

Server load balancing technology provides an inexpensive, effective, and transparent way, built on the existing network structure, to expand the bandwidth of network devices and servers, increase throughput, enhance network data-processing capability, and improve network flexibility and availability.

1. Server load balancing flowchart:


 

1. The client sends a service request to the VIP (virtual IP address).

2. BIG-IP receives the request, changes the destination IP address in the packet to the IP address of the selected backend server, and forwards the packet to that server.

3. The backend server sends its response packet back through BIG-IP according to its routing.

4. After receiving the response packet, BIG-IP changes the source address back to the VIP address and sends the packet to the client. This completes one standard server load balancing cycle; a minimal sketch of the address rewriting follows below.
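
To make the four steps concrete, here is a minimal Python sketch of the destination/source address rewriting a load balancer performs. The Packet class, the VIP value, the backend pool, and the round robin choice are illustrative assumptions, not an F5 API.

```python
from dataclasses import dataclass, replace
from itertools import cycle

# Illustrative values only; not real topology data.
VIP = "202.101.112.115"
POOL = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # backend (real) servers

@dataclass(frozen=True)
class Packet:
    src: str
    dst: str
    payload: str

next_server = cycle(POOL)  # step 2: pick a backend (round robin here)

def to_server(request: Packet) -> Packet:
    """Step 2: rewrite the destination from the VIP to the chosen server."""
    return replace(request, dst=next(next_server))

def to_client(response: Packet, client: str) -> Packet:
    """Step 4: rewrite the source back to the VIP before returning to the client."""
    return replace(response, src=VIP, dst=client)

# Step 1: client -> VIP
request = Packet(src="198.51.100.7", dst=VIP, payload="GET /")
inbound = to_server(request)                                 # step 2: VIP -> real server
reply = Packet(src=inbound.dst, dst=VIP, payload="200 OK")   # step 3: server replies
outbound = to_client(reply, client=request.src)              # step 4: VIP -> client
print(inbound, outbound, sep="\n")
```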

 

2. Typical load balancing process

· Interception of the traffic that needs load balancing through the VIP

· Server monitoring and health checks, to stay aware of the availability of the server cluster

· Load balancing and application switching functions, which direct traffic to the appropriate server according to various policies

 

2.1 Using the VIP to intercept traffic that requires load balancing

You can configure a VIP address on BIG-IP to intercept the traffic that requires load balancing. The VIP can be a combination of a single host address and a port (for example, 202.101.112.115:80) or a combination of a network address and a port (for example, 202.101.112.0:80). When traffic passes through BIG-IP, anything that matches the VIP is intercepted and load balanced according to the configured rules. A small matching sketch follows.
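
As a rough illustration of the two kinds of VIP definitions above, here is a short Python sketch that checks whether a packet's destination hits a host:port VIP or a network:port VIP. The /24 prefix length for the network example is an assumption, since the original text gives only the address and port.

```python
import ipaddress

# Illustrative VIP definitions (host:port and network:port), taken from the examples above.
HOST_VIP = (ipaddress.ip_network("202.101.112.115/32"), 80)
NET_VIP = (ipaddress.ip_network("202.101.112.0/24"), 80)   # assumed /24 prefix

def matches_vip(dst_ip: str, dst_port: int, vip) -> bool:
    """Return True if the packet's destination hits the VIP definition."""
    network, port = vip
    return dst_port == port and ipaddress.ip_address(dst_ip) in network

print(matches_vip("202.101.112.115", 80, HOST_VIP))  # True: exact host VIP
print(matches_vip("202.101.112.37", 80, NET_VIP))    # True: falls inside the network VIP
print(matches_vip("202.101.112.37", 443, NET_VIP))   # False: port does not match
```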

 

2.2 Server health monitoring and checks

Server (node) check: Ping (ICMP)

BIG-IP periodically probes the IP address of the backend server with ICMP packets. If an ICMP response is received from that IP address within the specified time, the server is considered able to provide service.
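
A minimal sketch of such a node monitor, approximated by calling the system ping command once; the backend address, timeout, and a Linux-style ping that accepts -c and -W are assumptions.

```python
import subprocess

def node_is_up(ip: str, timeout_s: int = 2) -> bool:
    """Approximate an ICMP node monitor by invoking the system ping once.

    Assumes a Linux-style ping that accepts -c (count) and -W (timeout in seconds).
    """
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0  # 0 means an echo reply arrived in time

print(node_is_up("10.0.0.11"))  # illustrative backend address
```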

Service (port) check: Connect

BIG-IP can periodically check the backend server's service port with a TCP connection. If a response is received from the server's port within the specified time, the service is considered available.
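
The same idea in a minimal Python sketch: the service is considered up if the port accepts a TCP connection within the timeout. The address, port, and timeout are illustrative.

```python
import socket

def service_is_up(ip: str, port: int, timeout_s: float = 2.0) -> bool:
    """Mimic a TCP 'connect' monitor: the service is up if the port accepts a connection."""
    try:
        with socket.create_connection((ip, port), timeout=timeout_s):
            return True
    except OSError:
        return False

print(service_is_up("10.0.0.11", 80))  # illustrative backend address and port
```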

Extended Content Verification (ECV)

ECV is a more thorough service check, used mainly to confirm that an application returns the expected data for a request. If the application responds to the check and returns the expected data, the BIG-IP controller marks the server as working; if the server cannot return the expected data, it is marked as down. Once the problem is fixed and BIG-IP verifies that the application again responds correctly to client requests, it resumes sending traffic to that server. This feature lets BIG-IP extend protection to backend applications such as web content and databases. The ECV function can send queries to web servers, firewalls, cache servers, proxy servers, and other transparent devices, and then inspect the returned response, which helps confirm that what you deliver to customers is exactly what they need. A minimal content-check sketch follows.
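
Here is a minimal Python sketch of an ECV-style check: issue an HTTP request and verify that the response contains the expected content. The URL and the "OK" marker are hypothetical.

```python
import urllib.request

def ecv_check(url: str, expected: bytes, timeout_s: float = 3.0) -> bool:
    """ECV-style check: issue a request and verify the expected content is in the response."""
    try:
        with urllib.request.urlopen(url, timeout=timeout_s) as resp:
            return expected in resp.read()
    except OSError:
        return False

# Illustrative: treat the pool member as down unless its health page contains "OK".
print(ecv_check("http://10.0.0.11/health", b"OK"))
```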

Extended Application Verification (EAV)

EAV is another kind of service check, used to verify that the applications running on a server can answer client requests. To perform the check, the BIG-IP controller uses a client program called an external service checker, which provides fully customizable service checking for BIG-IP but runs outside the BIG-IP controller. For example, the external checker can verify that an application on the Internet or intranet can retrieve data from a backend database and render it correctly on an HTML page. EAV is a distinctive BIG-IP feature: it lets administrators customize BIG-IP to probe a wide range of applications, so that beyond standard availability checks BIG-IP obtains the most important feedback, such as server, application, and content availability. This is critical for e-commerce and other applications, because it tests your site from the customer's perspective. For example, you can simulate every step a customer needs to complete a transaction: connect to the site, select an item from the catalog, and verify the credit card used for the purchase. Once BIG-IP has this availability information, it can use load balancing to keep the resource at the highest possible availability.

BIG-IP can also test the health and status of standard Internet services. The predefined Extended Application Verification (EAV) checks can be configured through two user interfaces: the browser and the CLI. Predefined application checks include FTP, NNTP, SMTP, POP3, and MSSQL. A sketch of a custom external checker follows.
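
As a rough illustration of a custom EAV-style external checker, here is a minimal Python script. The calling convention assumed here (node IP and port passed as arguments, any output to stdout meaning "up") and the URL and content marker are assumptions for illustration, not a verified F5 interface.

```python
#!/usr/bin/env python3
"""EAV-style external checker sketch: fetch a page and confirm it shows database-backed content.

Assumed invocation: checker.py <node_ip> <node_port>; printing anything to stdout marks the
member up, printing nothing marks it down. Treat this calling convention as an assumption.
"""
import sys
import urllib.request

node_ip, node_port = sys.argv[1], sys.argv[2]
url = f"http://{node_ip}:{node_port}/catalog"  # hypothetical page built from the database

try:
    with urllib.request.urlopen(url, timeout=5) as resp:
        if b"item-list" in resp.read():  # hypothetical marker proving DB data was rendered
            print("up")                  # any output => member is marked up
except OSError:
    pass                                 # no output => member is marked down
```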

 

II. Classification of server load balancing

Load balancing is applied much more narrowly than many other network technologies. From a technical perspective, it falls into three categories:

1. Link load balancing

Link load balancing mainly applies where there are multiple ISP network outlets, such as China Telecom + China Netcom or China Telecom + China Tietong. It is also currently the most professional technique for solving the interconnection problem between China Telecom and China Netcom. The implementation principle is that the load balancing algorithm works out which link has the lowest delay to the target address and sends the traffic out that link preferentially. This is what distinguishes a load balancing device from policy routing plus an IP address library for selecting China Netcom routes. A minimal link-selection sketch follows.
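
A very simplified Python sketch of the delay-based link choice: probe one target per ISP link and prefer the link with the lowest measured delay. The probe addresses, the single probe target per link, and the TCP-connect timing are all illustrative assumptions; a real device measures per destination and per link.

```python
import socket
import time

# Illustrative probe targets reachable over each ISP link; not real topology data.
PROBE_TARGETS = {"telecom": "198.51.100.1", "netcom": "203.0.113.1"}

def probe_delay(target: str, port: int = 80, timeout_s: float = 1.0) -> float:
    """Measure a rough delay by timing a TCP connection to the probe target."""
    start = time.monotonic()
    try:
        with socket.create_connection((target, port), timeout=timeout_s):
            return time.monotonic() - start
    except OSError:
        return float("inf")  # unreachable links never win

def pick_link() -> str:
    """Prefer the link with the lowest measured delay to its probe target."""
    return min(PROBE_TARGETS, key=lambda name: probe_delay(PROBE_TARGETS[name]))

print(pick_link())
```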

2. Server load balancing

Strictly speaking, server load balancing balances load across instances of the same application; it has little to do with the servers themselves. Only identical applications can be load balanced against each other; different applications cannot. For example, if we have one FTP server and one web server, there is no load balancing between the two. Currently, all load balancing vendors implement server load balancing based on virtual IP (VIP) technology: the load balancing device health-checks the servers, records the results in its server status table, and uses those results to decide which server a given request is best sent to. The F5 LTM series is widely used for this.

F5-BIG-LTM-3600-4G-R

3. WAN load balancing

WAN load balancing is mainly used by large websites; some people call it remote (global) server load balancing. For example, suppose we have two web servers with identical content, one in a Beijing IDC (China Netcom) and the other in a Guangzhou IDC (China Telecom). Balancing load across these two servers is WAN load balancing. The F5 3-DNS device is widely used for this. A DNS-based selection sketch follows.
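
A minimal Python sketch of the DNS-based selection idea behind WAN (global) load balancing: answer the client's DNS query with a healthy data center, preferring the one on the client's own carrier. The addresses, carrier mapping, and health flags are illustrative assumptions.

```python
# Illustrative site table for the Beijing (Netcom) and Guangzhou (Telecom) IDCs above.
SITES = {
    "beijing":   {"ip": "203.0.113.10",  "carrier": "netcom",  "healthy": True},
    "guangzhou": {"ip": "198.51.100.10", "carrier": "telecom", "healthy": True},
}

def resolve(client_carrier: str) -> str:
    """Return the VIP of a healthy site, preferring the client's own carrier."""
    healthy = {name: s for name, s in SITES.items() if s["healthy"]}
    for site in healthy.values():
        if site["carrier"] == client_carrier:
            return site["ip"]
    # Fall back to any healthy site if no carrier match exists.
    return next(iter(healthy.values()))["ip"]

print(resolve("telecom"))  # a China Telecom client gets the Guangzhou VIP
```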


 

III. Load balancing algorithms

Load balancing devices all rely on load balancing algorithms, which fall into two categories: static and dynamic. The common algorithms are listed below; a sketch of a few of the simpler ones follows the list.

· Round robin: requests are distributed to each server in turn, cyclically. When a server fails anywhere from Layer 2 through Layer 7, BIG-IP removes it from the rotation and does not include it again until it recovers.

· Ratio: each server is assigned a weight, and user requests are distributed among the servers in proportion to those weights. When a server fails anywhere from Layer 2 through Layer 7, BIG-IP removes it from the queue and does not assign it further requests until it recovers.

· Priority: all servers are grouped, and each group is assigned a priority. BIG-IP sends user requests to the server group with the highest priority (within a group, requests are distributed with the round robin or ratio algorithm). Only when every server in the highest-priority group has failed does BIG-IP send requests to the group with the next-highest priority. This gives you a form of hot standby.

· Least connections: new connections are passed to the server currently handling the fewest connections. When a server fails anywhere from Layer 2 through Layer 7, BIG-IP removes it from the queue and does not assign it further requests until it recovers.

· Fastest: new connections are passed to the server with the fastest response. When a server fails anywhere from Layer 2 through Layer 7, BIG-IP removes it from the queue and does not assign it further requests until it recovers.

· Observed: servers are selected for new requests based on the best balance between connection count and response time. When a server fails anywhere from Layer 2 through Layer 7, BIG-IP removes it from the queue and does not assign it further requests until it recovers.

· Predictive: BIG-IP analyzes the performance metrics it has collected from the servers, predicts which server's performance will best meet user needs in the next time slice, and selects that server. (Measured by BIG-IP.)

· Dynamic ratio (APM): BIG-IP collects performance parameters from the applications and application servers and adjusts the traffic distribution dynamically.

· Dynamic server supplement (dynamic server act.): when the number of servers in the primary cluster drops because of failures, backup servers are dynamically added to the primary server group.

· QoS: traffic is distributed according to different priorities.

· Service type (ToS): traffic is distributed according to the different service types identified in the Type of Service field.

· Rule mode: you can define traffic distribution rules for different flows, and BIG-IP uses these rules to control where traffic is directed.
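
To make the simpler policies concrete, here is a minimal Python sketch of round robin, ratio, and least connections selection over a pool of healthy members. The pool, weights, connection counts, and health flags are illustrative assumptions; a real device re-evaluates health continuously, while this sketch fixes it once.

```python
import itertools
import random

# Illustrative pool: weight drives the ratio algorithm, connections drives least connections.
POOL = {
    "10.0.0.11": {"weight": 3, "connections": 12, "healthy": True},
    "10.0.0.12": {"weight": 1, "connections": 4,  "healthy": True},
    "10.0.0.13": {"weight": 1, "connections": 9,  "healthy": False},  # failed, skipped everywhere
}

def healthy():
    return {ip: s for ip, s in POOL.items() if s["healthy"]}

_rr = itertools.cycle(sorted(healthy()))  # round robin rotation over healthy members

def round_robin() -> str:
    """Each healthy server is used in turn."""
    return next(_rr)

def ratio() -> str:
    """Servers are chosen in proportion to their configured weights."""
    members = healthy()
    return random.choices(list(members), weights=[s["weight"] for s in members.values()])[0]

def least_connections() -> str:
    """The server currently handling the fewest connections gets the next request."""
    members = healthy()
    return min(members, key=lambda ip: members[ip]["connections"])

print(round_robin(), ratio(), least_connections())
```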
