Summary of Linux Load Balancing (Layer 4 Load Balancing / Layer 7 Load Balancing)

Load balancing services come up constantly in routine operations and maintenance work. Load balancing is commonly divided into Layer 4 and Layer 7 load balancing, so what is the difference between the two?
Without further ado, the details are as follows:
1. What is load balancing
1) Load balancing (Load Balance) is built on top of the existing network structure. It provides a cheap, effective, and transparent way to expand the bandwidth of network devices and servers, increase throughput, strengthen network data processing capability, and improve the flexibility and availability of the network. Load balancing has two meanings: first, a large volume of concurrent access or data traffic is split across multiple node devices for separate processing, reducing the time users wait for a response; second, a single heavy computation is shared across multiple node devices for parallel processing, and after each node finishes, the results are aggregated and returned to the user, greatly improving the system's processing capacity.
2) Put simply: one approach forwards a large number of concurrent requests to multiple back-end nodes for processing, reducing response time; the other splits a single heavy job across multiple back-end nodes, which return their results to the load balancer for aggregation before the answer is handed back to the user. Most current load balancing technology is used to improve the availability and scalability of Internet server programs such as web servers, FTP servers, and other mission-critical services.
2. Load balancing classification
1) Layer 2 load balancing (MAC)
Based on the OSI model, Layer 2 load balancing generally uses a virtual MAC address: external requests are addressed to the virtual MAC address, and after receiving them the load balancer rewrites them to the real MAC address of a back-end server.
2) Layer 3 load balancing (IP)
This generally uses a virtual IP address: external requests are addressed to the virtual IP address, and after receiving them the load balancer rewrites them to a real back-end IP address.
3) Layer 4 load balancing (TCP)
On top of Layer 3 load balancing, requests are received on an IP + port and then forwarded to the corresponding machine.
4) Layer 7 load balancing (HTTP)
Requests are received based on a virtual URL or host name and then directed to the corresponding processing server.
Layer 4 and Layer 7 load balancing are the most common in operations work, so this article focuses on these two.
1) Layer 4 load balancing is load balancing based on IP + port: building on Layer 3, a Layer 3 IP address (VIP) plus a Layer 4 port number is published to determine which traffic needs to be load balanced. That traffic is NAT-processed and forwarded to a back-end server, and the balancer records which server is handling each TCP or UDP flow, so that all subsequent traffic on the same connection is forwarded to the same server.
The corresponding device is called a Layer 4 switch (L4 switch); it mainly analyzes the IP and TCP/UDP layers to achieve Layer 4 load balancing. This type of load balancer does not understand application protocols (such as HTTP, FTP, or the MySQL protocol).
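The flow-tracking behavior described above can be sketched in a few lines of Python. This is a hypothetical illustration, not a real balancer: the class and method names (`FourLayerBalancer`, `pick_backend`) are made up, and real devices track flows in kernel or ASIC tables rather than a dictionary. The point is only that the first packet of a flow selects a backend, and every later packet of the same flow (identified by client address and port) reaches the same backend.

```python
import itertools

class FourLayerBalancer:
    """Toy model of Layer 4 flow tracking (illustrative names only)."""

    def __init__(self, backends):
        self._backends = itertools.cycle(backends)   # simple round-robin selection
        self._flows = {}                             # (client_ip, client_port) -> backend

    def pick_backend(self, client_ip, client_port):
        key = (client_ip, client_port)
        if key not in self._flows:                   # first packet of a new flow
            self._flows[key] = next(self._backends)  # choose a server once
        return self._flows[key]                      # same flow -> same backend

lb = FourLayerBalancer(["10.0.0.1:80", "10.0.0.2:80"])
first = lb.pick_backend("192.168.1.5", 51000)
# Repeated packets from the same client socket always hit the same backend:
assert lb.pick_backend("192.168.1.5", 51000) == first
```

Note that the balancer never looks inside the payload; the choice is made entirely from addresses and ports, which is exactly why an L4 device cannot understand application protocols.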
Common products for Layer 4 load balancing include:
F5: a hardware load balancer with rich functionality but a high cost.
LVS: heavyweight Layer 4 load balancing software.
nginx: lightweight Layer 4 load balancing software, with caching and fairly flexible regular-expression matching.
haproxy: simulates Layer 4 forwarding and is quite flexible.
2) Layer 7 load balancing is load balancing based on a virtual URL or host name: building on Layer 4 (there can be no Layer 7 without Layer 4), it also considers application-layer characteristics. For example, load balancing for the same web service can go beyond distinguishing traffic by VIP plus port 80, and can also decide how to balance based on the Layer 7 URL, browser type, or language. Suppose your web servers are divided into two groups, one serving Chinese and one serving English: Layer 7 load balancing can automatically identify a user's language when they visit your domain name and direct them to the corresponding language server group.
The corresponding device is called a Layer 7 switch (L7 switch). In addition to supporting Layer 4 load balancing, it analyzes application-layer information such as the HTTP URI or cookie contents to achieve Layer 7 load balancing. This type of load balancer understands the application protocol.
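Following the Chinese/English server group example above, Layer 7 routing can be sketched as a pure decision function. This is a hedged illustration: the pool names (`image-servers`, `chinese-web-servers`, `english-web-servers`) and the `route_request` function are invented for this sketch, and a real L7 balancer would make this decision only after terminating the client's TCP connection and parsing the full HTTP request.

```python
def route_request(path, accept_language=""):
    """Pick a back-end server group from application-layer fields
    (URL path and Accept-Language header). Pool names are made up."""
    if path.startswith("/images/"):
        return "image-servers"            # content-type based routing
    if accept_language.lower().startswith("zh"):
        return "chinese-web-servers"      # language based routing
    return "english-web-servers"

assert route_request("/index.html", "zh-CN") == "chinese-web-servers"
assert route_request("/index.html", "en-US") == "english-web-servers"
assert route_request("/images/logo.png", "zh-CN") == "image-servers"
```

The contrast with the Layer 4 sketch is the inputs: here the decision needs data (path, headers) that only exists after the HTTP request has been read, which is why the L7 device must proxy the connection.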
Common software for Layer 7 load balancing includes:
haproxy: natively built for load balancing, with full Layer 7 proxy support, session persistence, marking, and path-based switching;
nginx: works well for the HTTP and mail protocols, with performance similar to haproxy;
Apache: limited functionality;
MySQL Proxy: acceptable functionality.
In general, LVS is used for Layer 4 load balancing and nginx for Layer 7; haproxy is more flexible and can do both Layer 4 and Layer 7 load balancing.
3. The difference between the two
1) Analysis from technical principles
So-called Layer 4 load balancing determines the final internal server using the destination address and port in the packet, plus the server selection method configured on the load balancing device.
Taking common TCP as an example, when the load balancing device receives the first SYN from the client, it selects an optimal server in the way described above, rewrites the destination IP address in the packet (to the back-end server's IP), and forwards it directly to that server. The TCP connection, that is, the three-way handshake, is established directly between the client and the server; the load balancing device only performs a forwarding action similar to a router. In some deployments, to ensure that the server's return packets pass back through the load balancing device correctly, the packet's original source address may also be modified during forwarding.
So-called Layer 7 load balancing, also known as "content switching", determines the final internal server using genuinely meaningful application-layer content in the message, plus the server selection method configured on the load balancing device.
Again taking common TCP as an example, if the load balancing device is to select a server based on real application-layer content, it must first stand in for the final server and establish a connection (three-way handshake) with the client; only then can it receive the actual application-layer content the client sends, after which it picks the final internal server based on specific fields in the message plus the configured selection method. In this case, the load balancing device behaves more like a proxy server: it establishes separate TCP connections with the front-end client and the back-end server. From this technical principle, Layer 7 load balancing clearly places higher demands on the load balancing device, and its Layer 7 processing capacity is bound to be lower than that of a Layer 4 deployment.
2) From the requirements of application scenarios
The benefit of Layer 7 application load balancing is making the entire network more "intelligent". For example, for user traffic visiting a website, requests for images can be forwarded through Layer 7 to dedicated image servers where caching can be applied, while requests for text can be forwarded to dedicated text servers where compression can be applied. Of course, this is only a small example of Layer 7 usage. In principle, this approach can modify the client's request and the server's response in any meaningful way, greatly improving the application system's flexibility at the network layer. Many functions normally deployed on back-end servers such as Nginx or Apache can be moved forward onto the load balancing device, for example Header rewriting in client requests, or keyword filtering and content insertion in server responses.
Another frequently mentioned feature is security. The most common attack on a network is SYN Flood, in which an attacker controls many source clients and sends SYN packets with forged IP addresses to the same target; such an attack usually sends a large number of SYNs, exhausting the relevant resources on the server to achieve Denial of Service (DoS). From the technical principles above, in Layer 4 mode these SYN attacks are forwarded through to the back-end server, while in Layer 7 mode they naturally terminate on the load balancing device and do not affect the normal operation of the back-end servers. In addition, a load balancing device can apply multiple policies at Layer 7 to filter specific packets, for example application-level attacks such as SQL injection, further improving overall system security at the application level.
Current Layer 7 load balancing mainly focuses on the HTTP protocol, so it is mostly applied to systems developed in the B/S style, such as websites and internal information platforms. Layer 4 load balancing corresponds to other TCP applications, such as ERP and other systems developed in the C/S style.
3) Issues to consider for Layer 7 applications
1. Is it really necessary? Layer 7 can indeed make traffic handling more intelligent, but it inevitably brings complex device configuration, increased load on the balancer, and more complicated troubleshooting. When designing a system, one must consider mixed deployments that use Layer 4 and Layer 7 at the same time.
2. Does it really improve security? In the SYN Flood example, Layer 7 mode does keep that traffic off the servers, but the load balancing device itself must have strong anti-DDoS capability; otherwise, even if the servers are healthy, a failure of the load balancing device acting as the central hub brings down the whole application.
3. Is there enough flexibility? The advantage of Layer 7 is making application traffic intelligent, but this requires the load balancing device to provide complete Layer 7 functionality so that scheduling can be tailored to each situation. The simplest assessment is whether it can replace the scheduling functions of back-end servers such as Nginx or Apache. A load balancing device that offers a Layer 7 application development interface lets customers configure functionality to their needs at will; only then can it truly provide strong flexibility and intelligence.
4) Overall comparison
1. Intelligence
Since Layer 7 load balancing covers all seven layers of the OSI model, it can handle user needs more flexibly. In theory, a Layer 7 device can modify any part of the user's requests and the server's responses, for example adding information to headers, or classifying and forwarding by file type. A Layer 4 device can only forward based on network-layer information and cannot modify the content of user requests.
2. Security
Because Layer 7 load balancing operates across the full OSI model, it is better positioned to resist attacks from the network; in principle, a Layer 4 device forwards user requests directly to the back-end nodes and cannot itself resist network attacks.
3. Complexity
A Layer 4 deployment is generally a simpler architecture that is easy to manage and easy to troubleshoot; a Layer 7 architecture is more complex, and usually a mixed deployment with Layer 4 must be considered.
4. Efficiency
A Layer 4 deployment works at a lower level, so it is usually more efficient, but its applicability is limited; a Layer 7 deployment consumes more resources and, while functionally stronger than Layer 4 in theory, currently applies mainly to HTTP-based applications.
4. Load balancing technical solutions
There are currently many different load balancing technologies for different application needs. Below they are classified by the device used (software vs. hardware load balancing), by the OSI layer at which they operate (network-level load balancing), and by geographic structure (local vs. global load balancing).
1) Software / hardware load balancing
A software load balancing solution installs one or more pieces of additional software on the operating system of one or more servers to achieve load balancing, for example DNS Load Balance, CheckPoint Firewall-1 ConnectControl, or Keepalived + ipvs. Its advantages are that it is tailored to a specific environment, simple to configure, flexible to use, and cheap, and it can satisfy general load balancing needs. The software approach also has many disadvantages: the additional software on each server consumes system resources (the more powerful the module, the more it consumes), so under very heavy connection loads the software itself can become the limiting factor for the server; software scalability is limited by the operating system; and bugs in the operating system itself often lead to security problems.
A hardware load balancing solution installs a load balancing device directly between the servers and the external network. This device is usually dedicated hardware independent of the operating system, which we call a load balancer. Because dedicated equipment performs a dedicated task independently of the operating system, overall performance is greatly improved; combined with diverse load balancing strategies and intelligent traffic management, optimal load balancing can be achieved. Load balancers come in various forms: besides standalone load balancers in the strict sense, some are integrated into switching equipment and placed between the servers and the Internet link, while others integrate the function into a PC with two network adapters, one connected to the Internet and the other to the internal network of the back-end server farm.
     Comparison of software load balancing and hardware load balancing:
The advantages of software load balancing are that the required environment is well understood, configuration is simple, operation is flexible, and cost is low, which is sufficient for ordinary enterprises where efficiency demands are not high. The disadvantages are that it depends on the host system and adds resource overhead; the quality of the software determines the performance of the environment; and the security and stability of the software affect the security of the whole environment.
The advantages of hardware load balancing are independence from the operating system and greatly improved overall performance, surpassing software solutions in both functionality and performance; intelligent traffic management and a choice of multiple strategies allow an optimal load balancing result. The disadvantage is that it is expensive.
2) Local / global load balancing
By geographic structure, load balancing is divided into local load balancing (Local Load Balance) and global load balancing (Global Load Balance, also known as regional load balancing). Local load balancing balances load across a local server group; global load balancing balances load across server farms placed in different geographic locations with different network structures.
Local load balancing effectively solves the problems of excessive data traffic and heavy network load without the expense of buying servers with outstanding performance: it makes full use of existing equipment and avoids the loss of data traffic that a single server failure would cause. With flexible and diverse balancing strategies, it distributes traffic sensibly across the servers in the group to share the burden. Even expanding or upgrading an existing server is simply a matter of adding a new server to the service group, without changing the existing network structure or interrupting existing services.
Global load balancing is mainly used by sites that have their own servers in multiple regions, so that users worldwide can reach the server nearest to them with a single IP address or domain name and thus obtain the fastest access. It can also be used by large companies with widely distributed subsidiaries and sites to allocate server resources uniformly and sensibly across the company Intranet.
3) Load balancing at the network level
For the different bottlenecks caused by network overload, the corresponding load balancing technology can be applied at different levels of the network to solve the problem at hand.
As bandwidth grows and data traffic keeps increasing, the data interfaces at the core of the network face a bottleneck: the original single line can no longer meet demand, and upgrading the line is too expensive or even infeasible. In this situation, link aggregation (trunking) technology can be considered.
Link aggregation (Layer 2 load balancing) treats multiple physical links as a single aggregated logical link, with network traffic shared across all the physical links in it, thereby logically increasing the link's capacity to meet the demand for more bandwidth.
Modern load balancing technology usually operates at Layer 4 or Layer 7 of the network. Layer 4 load balancing maps a legally registered Internet IP address to the IP addresses of multiple internal servers, dynamically choosing one of the internal IP addresses for each TCP connection request; this technique is widely used in Layer 4 switches. For a connection request whose destination is the server group's VIP (Virtual IP address), the packet flows through the switch, which uses the source and destination IP addresses, the TCP or UDP port numbers, and a chosen load balancing strategy to map between server IPs and the VIP and select the best server in the group to handle the request.
 
Layer 7 load balancing controls application-layer service content, providing high-level control over access traffic, and is suited to HTTP server farms. It inspects the HTTP headers flowing through it and performs load balancing based on the information in those headers.
The advantages of layer 7 load balancing are as follows:
1) By inspecting HTTP headers, it can detect HTTP 4xx and 5xx error responses and transparently redirect the connection request to another server, avoiding application-layer failures.
2) Based on the type of data flowing through (for example, whether a packet carries an image, an archive, or a multimedia file format), it can direct the traffic to the corresponding content server, improving system performance.
3) Based on the type of request, such as static document requests (plain text, images) versus dynamic document requests (asp, cgi, and the like), it can direct each request to the corresponding server, improving system performance and security.
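Advantage 3 above can be sketched as a tiny classifier that inspects the request path's extension and picks a server pool. This is an illustrative sketch: the pool names and the extension list are assumptions, and a production balancer would usually express the same idea as URL-matching rules in its configuration rather than code.

```python
# Extensions treated as static content (an assumed, non-exhaustive list).
STATIC_EXTENSIONS = {".html", ".htm", ".txt", ".jpg", ".png", ".gif", ".css"}

def classify_request(path):
    """Return the (hypothetical) pool a request path should go to."""
    dot = path.rfind(".")
    ext = path[dot:].lower() if dot != -1 else ""
    return "static-pool" if ext in STATIC_EXTENSIONS else "dynamic-pool"

assert classify_request("/index.html") == "static-pool"
assert classify_request("/order.asp") == "dynamic-pool"
assert classify_request("/cgi-bin/search.cgi") == "dynamic-pool"
```

Separating pools this way lets the static pool serve from cache while the dynamic pool is sized for script execution, which is the performance benefit the advantage describes.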
The disadvantages of Layer 7 load balancing are as follows:
1) Layer 7 load balancing is limited by the protocols it supports (generally only HTTP), which restricts its breadth of application.
2) Inspecting HTTP headers consumes significant system resources, which inevitably affects performance; under a large number of connection requests, the load balancing device itself easily becomes the bottleneck of overall network performance.
5. Load balancing strategies
In practice, we may not want to simply distribute client requests evenly to the internal servers regardless of whether a server is down. We may want a Pentium III server to accept more requests than a Pentium II server, a server currently handling fewer requests to be assigned more, a failed server to stop receiving requests until it recovers, and so on. Choosing an appropriate load balancing strategy lets multiple devices complete the task well together, eliminating or avoiding bottlenecks caused by uneven load distribution, data traffic congestion, and long response times. For different application needs, load balancing at Layers 2, 3, 4, and 7 of the OSI reference model each has corresponding strategies.
Whether a load balancing strategy is good, and how easy it is to implement, depend on two key factors: the load balancing algorithm, and the methods and capability for detecting network system status.
1. Load balancing algorithm
1) Round robin (Round Robin): each request from the network is distributed to the internal servers in turn, from 1 to N and then starting over. This algorithm suits situations where all servers in the group have identical hardware and software configuration and the average request load is roughly even.
2) Weighted round robin (Weighted Round Robin): each server is assigned a weight according to its processing capability, so that it receives a corresponding share of requests. For example, if server A has weight 1, B has weight 3, and C has weight 6, then A, B, and C will receive 10%, 30%, and 60% of requests respectively. This algorithm ensures that high-performance servers get more use while low-performance servers are not overloaded.
3) Random (Random): requests from the network are distributed randomly among the internal servers.
4) Weighted random (Weighted Random): similar to weighted round robin, except servers are chosen randomly in proportion to their weights.
5) Response time (Response Time): the load balancing device sends a probe request (such as a ping) to each internal server, and assigns client requests to the server that answers the probe fastest. This algorithm better reflects the servers' current running state, but the fastest response time only refers to the path between the load balancing device and the server, not between the client and the server.
6) Least connections (Least Connection): the time each client request spends on a server can vary greatly, so over long working periods a simple round robin or random algorithm can leave very different numbers of in-flight connections on each server, which is not true load balancing. The least-connections algorithm keeps a record for each internal server of the number of connections it is currently handling; a new request is assigned to the server with the fewest active connections, making the balance match the actual situation more closely. This algorithm suits services with long-lived requests, such as FTP.
7) Processing capacity: this algorithm assigns each request to the internal server with the lightest processing load (computed from the server's CPU model, CPU count, memory size, and current connection count). Because it takes the internal servers' processing capacity and current network conditions into account, this algorithm is relatively more accurate, and is especially suitable for Layer 7 (application layer) load balancing.
8) DNS response balancing (Flash DNS): on the Internet, whether for HTTP, FTP, or other services, the client generally finds the server's exact IP address through domain name resolution. Under this algorithm, load balancing devices in different geographic locations each receive the same client's DNS query and simultaneously resolve the domain name to the IP address of their own nearby server (that is, a server in the same geographic location as the responding device). The client continues with whichever resolved IP address it receives first and ignores the other responses. This strategy makes sense for global load balancing but is meaningless for local load balancing.
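Two of the algorithms above can be sketched directly. The weighted round robin sketch below uses the simplest possible scheme (repeat each server `weight` times in the cycle) with the A=1, B=3, C=6 example from the text; note that real implementations such as nginx use a "smooth" variant that interleaves servers instead of clustering them. The least-connections sketch keeps a per-server counter. The function and class names are illustrative, not a real API.

```python
import itertools
from collections import Counter

def weighted_round_robin(weights):
    """Yield servers in proportion to their weights (naive scheme:
    each server simply appears `weight` times in the cycle)."""
    expanded = [server for server, w in weights.items() for _ in range(w)]
    return itertools.cycle(expanded)

picker = weighted_round_robin({"A": 1, "B": 3, "C": 6})
# Over one full cycle of 10 requests, A/B/C receive 1/3/6 requests (10%/30%/60%).
assert Counter(next(picker) for _ in range(10)) == {"A": 1, "B": 3, "C": 6}

class LeastConnectionBalancer:
    """Toy least-connections scheduler (illustrative name)."""

    def __init__(self, servers):
        self.connections = {s: 0 for s in servers}   # active connections per server

    def acquire(self):
        # Pick the server with the fewest active connections
        # (ties broken by insertion order).
        server = min(self.connections, key=self.connections.get)
        self.connections[server] += 1
        return server

    def release(self, server):
        self.connections[server] -= 1                # connection finished

lb = LeastConnectionBalancer(["s1", "s2"])
a = lb.acquire()    # both idle, one is chosen
b = lb.acquire()    # the other, now less loaded, is chosen
lb.release(a)       # a's long-lived request ends; a becomes eligible again
```

The least-connections counters only change when connections open and close, which is why the algorithm tracks long-lived services (such as FTP) so much better than plain round robin.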
2. Detecting network system status
Although various load balancing algorithms can distribute traffic to the servers reasonably well, if the strategy has no way of detecting the status of the network system, then when a server fails, or the network between a load balancing device and a server fails, the load balancing device will still direct some traffic to that server, inevitably losing a large number of requests and failing to meet uninterrupted-availability requirements. A good load balancing strategy should therefore be able to detect network failures, server system failures, and application service failures:
1) Ping probing: server and network status is probed with ping. This method is simple and fast, but it can only roughly determine whether the network and the server's operating system are up; it says nothing about the application services on the server.
2) TCP Open probing: each service listens on a TCP port; probing whether a given TCP port on the server (such as Telnet on port 23 or HTTP on port 80) accepts connections determines whether the service is up.
3) HTTP URL probing: for example, a request for the main.html file is sent to the HTTP server; if an error response is received, the server is considered to be failing.
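The "TCP Open" probe described above amounts to attempting a TCP handshake within a timeout. A minimal sketch, assuming nothing beyond the standard library (the function name `tcp_open_check` is made up): success means the service is at least accepting connections, while it says nothing about application-level health, which is what the HTTP URL probe adds.

```python
import socket

def tcp_open_check(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port completes
    within `timeout` seconds, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True           # handshake completed: port is open
    except OSError:
        return False              # refused, unreachable, or timed out
```

A health checker would call this periodically for each backend, e.g. `tcp_open_check("10.0.0.1", 80)`, and take a server out of rotation after a few consecutive failures.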
3. Other factors
Beyond the two factors above, other considerations affect how well a load balancing strategy works. In some applications, all requests from the same client must be handled by the same server, for example when the server keeps client registration and purchase information in a local database; distributing the same client's subsequent requests to the same server then becomes essential. There are several ways to solve this problem:
1) One is to assign multiple requests from the same client to the same server based on IP address, storing the mapping between client IP addresses and servers on the load balancing device;
2) Another is to place a unique identifier in a cookie in the client's browser and use it to assign the client's requests to the same server; this suits clients that access the Internet through a proxy server;
3) There is also the out-of-path return mode (Out of Path Return): when a client connection request is sent to the load balancing device, the central load balancing device directs the request to a server, but the server's response no longer passes back through the central device; it bypasses the traffic distributor and returns directly to the client. The central load balancing device is then responsible only for accepting and forwarding requests, greatly reducing its network burden and giving the client faster responses. This mode is generally used for HTTP server farms. A virtual network adapter must be configured on each server with its IP address set to the server farm's VIP, so that the server can complete the three-way handshake successfully when responding directly to the client.
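The first persistence method above (by client IP) is often implemented as a hash rather than a stored table, so the balancer keeps no per-client state at all. A hedged sketch, with an invented function name (`server_for_ip`) and an arbitrary choice of MD5 as the hash; any stable hash would do, and this simple modulo scheme reshuffles many clients whenever the server list changes, which is why real systems often use consistent hashing instead.

```python
import hashlib

def server_for_ip(client_ip, servers):
    """Map a client IP to a server deterministically, so the same
    client always lands on the same back-end server."""
    digest = hashlib.md5(client_ip.encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(servers)
    return servers[index]

servers = ["web1", "web2", "web3"]
# The same client IP always maps to the same server:
assert server_for_ip("10.1.2.3", servers) == server_for_ip("10.1.2.3", servers)
```

The trade-off against the stored-table approach is state versus stability: the hash needs no memory on the balancer, but a table can keep existing clients pinned even when servers are added or removed.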
6. Key factors in implementing load balancing
1) Performance
Performance is an important consideration when introducing a load balancing solution, and also the hardest to assess. One measure is the number of packets per second through the network; another is the maximum number of concurrent connections the balanced server farm can handle. But suppose a balancing system could handle millions of concurrent connections while forwarding only 2 packets per second; that would obviously be useless. Performance depends closely on the processing capacity of the load balancing device and the strategy used, and two points deserve attention: first, the performance of the whole server farm under the balancing scheme, which determines how quickly client connection requests are answered; second, the performance of the load balancing device itself, so that it does not become a service bottleneck under a large number of connection requests. Sometimes a hybrid strategy can improve the farm's overall performance, such as combining DNS load balancing with NAT load balancing. For sites with many static document requests, caching can also be considered, which is relatively cost-effective and improves response performance; for sites transferring a large amount of SSL/XML content, SSL/XML acceleration should be considered.
2) Scalability
IT technology changes quickly: last year's newest product may now be the lowest-performing device on the network, and rapid business growth means last year's network now needs another round of expansion. A suitable balancing solution should meet these needs: it should balance load across different operating systems and hardware platforms; balance different services such as HTTP, mail, news, proxies, databases, firewalls, and caches; and allow resources to be added or removed dynamically in a way that is completely transparent to users.
3) Flexibility
The balancing solution should flexibly accommodate different application requirements and keep up with ever-changing needs. When different server groups have different application requirements, a variety of balancing strategies should be available to provide a wider range of choices.
4) Reliability
At sites with high service-quality requirements, the load balancing solution should provide complete fault tolerance and high availability for the server farm. And when the load balancing device itself fails, there should be a good redundancy scheme to maintain reliability: with redundancy, the multiple load balancing devices in the same redundant unit must have an effective way of monitoring each other, so that the system avoids losses from major failures as far as possible.
5) Ease of management
Whether the solution is software or hardware, we want management to be flexible, intuitive, and secure, and the system to be easy to install, configure, maintain, and monitor, improving efficiency and avoiding mistakes. On hardware load balancing devices there are currently three main management options. First, a command line interface (CLI: Command Line Interface), managed either through a terminal connected to the device's serial port or remotely via telnet; the former is typically used for initial configuration. Second, graphical user interfaces (GUI: Graphical User Interfaces), including management via ordinary web pages as well as secure management via Java applets, which generally require a particular browser version on the management terminal. Third, SNMP (Simple Network Management Protocol) support, allowing SNMP-capable devices to be managed by third-party network management software.
 

from: http://www.cnblogs.com/kevingrace/p/6137881.html

 


Original address: http://www.cnblogs.com/crazylqy/p/7741990.html
