Overview
Load Balancing
As business volume grows and access volume and data traffic increase rapidly, the load on the core components of the existing network grows correspondingly, until a single server can no longer bear it. Discarding the existing equipment for a large-scale hardware upgrade wastes the resources already in place, and the next jump in business volume forces yet another round of costly upgrades; even a device with excellent performance eventually cannot keep up with the growth in demand. Load balancing addresses this by distributing the work across multiple devices.
Classification of load balancing implementation methods
1: Software load balancing technology
This technology suits small and medium-sized website systems and can meet general load-balancing requirements. Software load balancing is implemented by installing load-balancing software on one or more servers in the network system. The software can be installed easily on the servers and provides basic load-balancing functionality. It is simple to configure, easy to operate, and, most importantly, low in cost.
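To make the idea concrete, here is a minimal sketch of the core decision a software load balancer makes: choosing the next backend in round-robin order. The server addresses are hypothetical, and real products layer health checks, weights, and session persistence on top of this.

```python
# Minimal round-robin backend selection, assuming a hypothetical
# three-server pool. This is a sketch of the core idea only.
from itertools import cycle

SERVERS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # hypothetical backend pool
_rotation = cycle(SERVERS)

def pick_backend() -> str:
    """Return the next backend in simple round-robin order."""
    return next(_rotation)

if __name__ == "__main__":
    # Six requests are spread evenly: each server receives exactly two.
    print([pick_backend() for _ in range(6)])
```

Because the rotation is stateful and deterministic, any run of six consecutive picks starting at the top of the cycle covers each server exactly twice.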
2: Hardware load balancing technology
Because hardware load balancing requires an additional device, the load balancer, its cost is relatively high, so it suits large-scale website systems with high traffic. Nevertheless, larger enterprise networks and government websites generally deploy hardware load-balancing equipment, for two reasons: (1) hardware devices are more stable, and (2) deployment may be required for compliance. Hardware load balancing places a dedicated load balancer in front of multiple servers to distribute traffic among them. Compared with software load balancing, it achieves a better balancing effect.
3: Local load balancing technology
Local load balancing distributes load across a local server group. It makes full use of the servers' existing performance so that traffic is evenly distributed among the servers in the group, without requiring the purchase of expensive new servers or changes to the existing network structure.
(For example, Microsoft's NLB network load balancing implements load balancing through software running on multiple servers. The servers jointly advertise one virtual IP address, and the application makes them respond to requests in round-robin fashion. One problem encountered during deployment is worth remembering: when an external test PC sent a ping to the virtual IP address, both the virtual IP and the real host replied. The duplicate reply packets caused the security device to judge the session insecure and block it, disrupting the business.)
4: Global load balancing technology (also known as WAN load balancing)
Global load balancing suits large-scale website systems backed by multiple server clusters. It distributes load across servers deployed in different regions of the country. By determining the geographic location of the visiting user's IP address, it can automatically direct the user to the nearest site. Many large websites use this technique.
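The "direct the user to the nearest site" decision can be sketched as a two-step lookup: IP prefix to region, then region to data center. The prefix table, region names, and site hostnames below are entirely hypothetical; real global load balancers use GeoIP databases plus health probes of each site.

```python
# Sketch of a geo-aware site selection step, with a made-up prefix table.
HYPOTHETICAL_GEO_DB = {
    "101.": "east-china",
    "102.": "north-china",
    "203.": "overseas",
}

SITE_BY_REGION = {
    "east-china":  "dc-shanghai.example.com",
    "north-china": "dc-beijing.example.com",
    "overseas":    "dc-hongkong.example.com",
}

def nearest_site(client_ip: str) -> str:
    """Map the client's IP prefix to a region, then to its closest data center."""
    for prefix, region in HYPOTHETICAL_GEO_DB.items():
        if client_ip.startswith(prefix):
            return SITE_BY_REGION[region]
    return SITE_BY_REGION["overseas"]  # fallback when the region is unknown

print(nearest_site("101.45.2.9"))  # -> dc-shanghai.example.com
```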
5: Link aggregation load balancing technology
Link aggregation load balancing combines multiple physical links in the network into a single aggregate logical link, so that the site's data traffic is shared by all physical links in the aggregate. It can greatly improve network data throughput and save costs without changing the existing line structure or upgrading the existing bandwidth of individual links.
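The way an aggregate shares traffic can be sketched with a per-flow hash: the source/destination pair deterministically selects one member link, so packets of a single flow stay in order on one link while many flows fill all links. The link names are hypothetical.

```python
# Per-flow hashing across the member links of one logical trunk.
import zlib

MEMBER_LINKS = ["eth0", "eth1", "eth2", "eth3"]  # four physical links, one trunk

def select_link(src_ip: str, dst_ip: str, links=MEMBER_LINKS) -> str:
    """Hash the flow so it always rides the same member link (preserves ordering)."""
    key = f"{src_ip}-{dst_ip}".encode()
    return links[zlib.crc32(key) % len(links)]

if __name__ == "__main__":
    # The same flow always maps to the same link; different flows spread out.
    print(select_link("10.1.1.1", "10.2.2.2"))
```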
To sum up:
There are at least four applications for load balancing:
§ Server load balancing;
§ Wide area network server load balancing;
§ Firewall load balancing;
§ Transparent website accelerator load balancing.
Server load balancing distributes client-requested tasks across multiple servers, expanding service capacity beyond what a single server can handle and making the application system fault-tolerant.
Wide-area-network server load balancing directs client requests to server groups in different data centers, giving clients faster response times and providing intelligent redundancy if one data center suffers a catastrophic failure.
Firewall load balancing distributes the request load across multiple firewalls, raising security-processing capacity beyond that of a single firewall.
Transparent website-accelerator load balancing switches traffic directly to multiple website accelerators, offloading the website server's static content onto the accelerators (caches), thereby improving website performance and speeding up response times.
Hardware load balancing deployment method
Load-balancing hardware is generally deployed in one of two ways: serial (in-line) deployment or bypass deployment. In this section we analyze both approaches through the direct-connection and bypass configuration modes of F5 load balancing.
Comparison of the two modes
1. From the perspective of interface traffic pressure
In direct-connection mode, traffic between bigip and the client flows over bigip's upstream interface, while traffic between bigip and the servers flows over its downstream interface, so the pressure on any single bigip interface is lower.
In bypass mode, bigip's traffic with both the client and the servers travels over a single bigip interface, which therefore comes under greater pressure. To solve this, link aggregation (port bundling) can be used between bigip and the switch so that the interface does not become a network bottleneck.
2. From the perspective of network structure security
In direct-connection mode, the real IP addresses used by the internal servers need not be published; only the virtual address providing load balancing is exposed. In bypass mode, the client can learn the servers' real addresses. To secure the servers in this mode, point the servers' gateway at bigip and use bigip's packet-filtering (firewall) function to protect them.
3. From the perspective of management convenience
In direct-connection mode, because the servers' real addresses must be hidden, the address-translation (NAT) function has to be enabled on bigip, which makes configuration relatively more complex. Bypass mode does not require address translation.
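The translation step that direct-connection mode requires can be sketched as a destination rewrite: the client only ever sees the virtual address, and the load balancer swaps it for a chosen real server before forwarding. All addresses below are hypothetical, and this models only the rewrite, not connection tracking or the reverse translation of replies.

```python
# Sketch of destination NAT (VIP -> chosen real server) on the forward path.
VIRTUAL_IP = "192.0.2.10"                  # published virtual address
REAL_SERVERS = ["10.0.0.11", "10.0.0.12"]  # hidden real servers

def dnat(packet: dict, backend: str) -> dict:
    """Rewrite the packet's destination from the VIP to the selected real server."""
    assert packet["dst"] == VIRTUAL_IP, "only VIP traffic is translated"
    translated = dict(packet)  # leave the original packet untouched
    translated["dst"] = backend
    return translated

if __name__ == "__main__":
    pkt = {"src": "198.51.100.7", "dst": VIRTUAL_IP, "dport": 80}
    print(dnat(pkt, REAL_SERVERS[0]))
```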
4. From the perspective of scalability
Direct-connection mode does not support npath mode; bypass mode does. Enabling npath reduces the pressure on the F5 device by letting return traffic bypass it. (With this traffic flow, problems are likely if there are security devices in the network; the specific problem depends on whether the security device sits above or below the load-balancing device.)
In bypass mode, with npath traffic processing, none of the traffic the servers send in response passes through bigip, which greatly reduces the traffic pressure on bigip. The npath traffic-processing method cannot work in direct-connection mode.
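A toy accounting model makes the saving obvious: in npath (direct-server-return) operation only the small request traverses the load balancer, while the large response goes straight from server to client. The byte sizes below are illustrative only.

```python
# Toy model: bytes traversing the load balancer per transaction,
# full-proxy vs. npath (direct server return).
def bytes_through_lb(request_bytes: int, response_bytes: int, npath: bool) -> int:
    """Return how many bytes cross the load balancer for one transaction."""
    if npath:
        return request_bytes                  # response bypasses the device
    return request_bytes + response_bytes     # both directions pass through

if __name__ == "__main__":
    # A 1 KB request fetching a 1 MB response:
    print(bytes_through_lb(1_000, 1_000_000, npath=False))  # 1001000
    print(bytes_through_lb(1_000, 1_000_000, npath=True))   # 1000
```

For download-heavy traffic, where responses dwarf requests, almost all of the device's forwarding load disappears.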
5. From the perspective of subsequent system changes, the work required differs between the two modes
If the system is later converted to one without load balancing, then in direct-connection mode the servers' IP addresses must be modified, the network structure adjusted (moving the servers out from behind bigip), and the related application connections changed, all of which requires strict testing before going live. In bypass mode, only the servers' gateway needs to be changed; the rest of the original system, including the network structure, needs essentially no modification. The former therefore entails major changes to the system, the latter only minor ones.
Finally, to summarize: compared with direct-connection mode, the main advantages of bypass mode for the system architecture are:
1. Greater network flexibility: with F5 in bypass mode, the back-end servers' gateway points to the address of the layer-3 switch rather than F5's address. When maintaining network equipment, routes can be modified to take a device offline, which simplifies maintenance and management. Some special applications can also use policy routing on the core switch to steer traffic to specific network devices.
2. Greater overall network reliability: because of the bypass arrangement, if the F5 device fails, routing can be modified on the switch so that data flows bypass the F5 without affecting the entire business system.
3. Higher speed for some special applications: with the bypass arrangement, application data that is sensitive to speed and latency can take different inbound and outbound paths. For example, incoming traffic can pass through the F5 device for inspection and load balancing, while outgoing traffic skips F5 entirely, increasing its speed.