In the face of heavy user access, high-concurrency requests, and massive data, you can use high-performance servers, large databases, fast storage devices, high-performance web servers, and efficient programming languages (such as Go or Scala). But when the capacity of a single machine reaches its limit, you need to consider business splitting and distributed deployment to handle large website traffic, high concurrency, and massive data.
From a standalone website to a distributed website, the most important change is service splitting and distributed deployment: after applications are split, they are deployed on different machines to form a large-scale distributed system. Splitting the business moves the architecture from centralized to distributed. However, each independently deployed service still suffers from single points of failure and lacks a unified access entry. To eliminate single points of failure, we can use redundancy: deploy the same application on multiple machines. To provide a unified entry, we can place a load balancing device in front of the cluster to distribute traffic.
Server Load Balancing (SLB) distributes load (jobs, access requests) across multiple operating units (servers, components) for execution. It is the fundamental solution for high performance, high availability (eliminating single points of failure), and scalability (horizontal scaling).
This is the first article in a series on server load balancing. It introduces the principles of load balancing and its classification (DNS load balancing, HTTP load balancing, IP load balancing, link-layer load balancing, and hybrid load balancing). Part of the content is taken from reading notes.
Outline
- Load balancing principle
- DNS Load Balancing
- HTTP load balancing
- IP load balancing
- Link layer load balancing
- Hybrid load balancing
I. Load Balancing Principles
System scaling can be divided into vertical scaling and horizontal scaling. Vertical scaling improves processing capability from the perspective of a single machine by adding hardware resources, such as CPU power, memory capacity, and disks; it cannot meet the needs of large distributed systems (websites) with large traffic, high concurrency, and massive data. Therefore, you need to scale horizontally, adding machines to meet the processing demands of a large website. For example, if one machine cannot handle the load, add two or more machines to share the access pressure. This is the typical cluster-plus-load-balancing architecture:
- Application cluster: deploy the same application on multiple machines to form a processing cluster; each machine receives requests distributed by the load balancing device, processes them, and returns the response data.
- Load balancing device: distributes user access requests to one of the servers in the cluster according to a load balancing algorithm. (A device that distributes network requests to the available servers in a server cluster.)
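The dispatcher role described above can be sketched in a few lines. This is a minimal illustration, not a real network device: it only shows how a round-robin algorithm rotates incoming requests across a cluster. The server names are hypothetical placeholders.

```python
import itertools

class RoundRobinBalancer:
    """Toy load balancer: hands each request to the next server in rotation."""

    def __init__(self, servers):
        # itertools.cycle yields the server list endlessly, in order.
        self._cycle = itertools.cycle(servers)

    def pick(self):
        # Return the server that should handle the next request.
        return next(self._cycle)

balancer = RoundRobinBalancer(["app-1", "app-2", "app-3"])
print([balancer.pick() for _ in range(6)])
# → ['app-1', 'app-2', 'app-3', 'app-1', 'app-2', 'app-3']
```

Real devices use richer algorithms (weighted round robin, least connections, source-IP hashing), but the dispatch loop has the same shape.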
Roles of load balancing (problems it solves):
1. Relieve concurrency pressure and improve application performance (increase throughput, enhance network processing capability);
2. Provide failover to achieve high availability;
3. Provide website scalability by adding or removing servers;
4. Provide security protection (filtering, black/white lists, and other processing on the load balancing device).
II. Load Balancing Classification
By implementation technology, load balancing can be divided into DNS load balancing, HTTP load balancing, IP load balancing, and link-layer load balancing.
2.1 DNS Load Balancing
DNS load balancing is the earliest load balancing technique; it uses domain name resolution to distribute load. In the DNS server, multiple A records are configured for one domain, and the servers behind these A records form a cluster. Large websites often use DNS resolution as the first level of load balancing. For example:
Advantages
- Easy to use: load balancing is handed over to the DNS server, saving the trouble of maintaining a load balancing device.
- Better performance: DNS supports geography-based resolution, resolving the domain to the server address closest to the user, which speeds up access.
Disadvantages
- Poor availability: DNS resolution is multi-level, so after a record is added or changed, propagation takes a long time; during that window, users may fail to reach the website.
- Low scalability: control of DNS load balancing lies with the domain name provider, so it cannot be improved or extended.
- Poor maintainability: it cannot reflect the current running status of the servers; it supports few algorithms; it cannot distinguish differences between servers (load cannot be assigned based on system and service status).
Practical Suggestions
Use DNS as the first-level load balancer: the A records point to the IP addresses of internal load balancing servers, which in turn distribute requests to the real web servers. This approach is commonly used by Internet companies and is not suitable for complex standalone business systems. For example:
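The "multiple A records" idea can be simulated in a few lines. This sketch assumes a made-up zone (`www.example.com` and the `10.0.0.x` addresses are illustrative): each query returns the record list rotated one step, the classic DNS round-robin behavior, so successive clients favor different servers.

```python
# One domain mapped to several A records, as configured on the DNS server.
ZONE = {"www.example.com": ["10.0.0.1", "10.0.0.2", "10.0.0.3"]}
_query_counts = {}

def resolve(domain):
    """Simulate DNS round robin: rotate the A-record list on each query."""
    records = ZONE[domain]
    i = _query_counts.get(domain, 0)
    _query_counts[domain] = i + 1
    k = i % len(records)
    # Clients typically use the first record, so rotation spreads the load.
    return records[k:] + records[:k]

print(resolve("www.example.com")[0])  # → 10.0.0.1
print(resolve("www.example.com")[0])  # → 10.0.0.2
```

Note that real resolvers cache responses for the record's TTL, which is exactly why DNS-level balancing reacts slowly to changes, as listed under the disadvantages above.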
2.3 IP Load Balancing
At the network layer, load balancing is performed by modifying the destination address of requests.
When a user's request packet arrives at the load balancing server, the operating system kernel obtains the network packet, selects a real server address according to the load balancing algorithm, and rewrites the packet's destination address to that real IP address; the packet does not need to be processed by a user-space process.
After the real server finishes processing, the response packet returns to the load balancing server, which changes the packet's source address to its own IP address and sends it on to the user's browser. For example:
How does the real server's response get back to the load balancing server? There are two methods: (1) the load balancer modifies the source address along with the destination IP address, setting the packet's source address to its own IP address, i.e. source network address translation (SNAT); (2) the load balancing server acts as the gateway of the real server cluster.
Advantages:
- Distributing packets in the kernel performs better than distributing at the application layer.
Disadvantages:
- All requests and responses must pass through the load balancing server, so the cluster's maximum throughput is limited by the NIC bandwidth of the load balancing server.
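The NAT-style rewrite just described can be sketched with packets modeled as dictionaries. All addresses here are illustrative (RFC 5737/1918 example ranges); the point is only the two rewrites: destination IP on the way in, source IP (SNAT) on the way out, which is also why every byte flows through the balancer.

```python
BALANCER_IP = "203.0.113.10"          # the VIP clients connect to
REAL_SERVERS = ["192.168.0.11", "192.168.0.12"]

def inbound(packet, turn):
    """Kernel-side rewrite: pick a real server and swap the destination IP."""
    packet = dict(packet)
    packet["dst"] = REAL_SERVERS[turn % len(REAL_SERVERS)]
    return packet

def outbound(packet):
    """SNAT on the response: restore the balancer's IP as the source,
    so the client only ever sees one address."""
    packet = dict(packet)
    packet["src"] = BALANCER_IP
    return packet

req = {"src": "198.51.100.7", "dst": BALANCER_IP}
fwd = inbound(req, 0)
resp = outbound({"src": fwd["dst"], "dst": req["src"]})
print(fwd["dst"], resp["src"])  # → 192.168.0.11 203.0.113.10
```

Because `outbound` runs on the balancer for every response, the balancer's NIC is on the path of all traffic, which is the throughput bottleneck noted above.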
2.4 Link-Layer Load Balancing
Load balancing is performed at the data link layer of the communication protocol by modifying the MAC address.
When distributing data, the IP address is not modified; only the destination MAC address is changed. Every machine in the real server cluster is configured with a virtual IP address identical to the IP address of the load balancing server, so the source and destination IP addresses of the packet are never modified and the data can still be distributed.
Because the real server's IP matches the destination IP of the request, the load balancing server does not need to perform address translation, and the real server can return the response packet directly to the user's browser, avoiding the NIC bandwidth bottleneck of the load balancing server. This is also called direct routing (DR mode). For example:
Advantages: good performance.
Disadvantage: complicated configuration.
Practice: DR mode is the most widely used load balancing method.
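The DR-mode flow can be contrasted with the NAT sketch in one small simulation. Addresses and MAC values are made up; the key points it shows are that only the destination MAC is rewritten (the IP header is untouched because every real server also holds the VIP) and that the reply bypasses the balancer entirely.

```python
VIP = "203.0.113.10"  # held by the balancer AND by every real server
REAL_MACS = ["02:00:00:00:00:11", "02:00:00:00:00:12"]

def dispatch(frame, turn):
    """Balancer step: rewrite ONLY the destination MAC; IPs are untouched."""
    frame = dict(frame)
    frame["dst_mac"] = REAL_MACS[turn % len(REAL_MACS)]
    return frame

def real_server_reply(frame):
    """The server owns the VIP, so it answers with the VIP as source and
    sends the response straight to the client (direct routing)."""
    return {"src_ip": VIP, "dst_ip": frame["src_ip"], "via_balancer": False}

frame = {"src_ip": "198.51.100.7", "dst_ip": VIP, "dst_mac": "balancer-mac"}
reply = real_server_reply(dispatch(frame, 0))
print(reply["via_balancer"])  # → False: the response skips the balancer
```

In a real DR deployment (e.g. LVS), the tricky part is the "complicated configuration" noted above: the VIP on the real servers must not answer ARP requests, or it would conflict with the balancer.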
2.5 Hybrid Load Balancing
Because multiple server groups differ in hardware, scale, and the services they provide, you can choose the most appropriate load balancing method for each group, and then place another load balancer or cluster in front of the groups to serve the outside world as a whole (that is, treating the server groups as one new cluster), achieving the best overall performance. This method is called hybrid load balancing.
It is sometimes used when a single balancing device cannot handle a large number of connection requests, and it is currently widely used by large Internet companies.
Method 1, for example:
The above model is suitable for scenarios with dynamic/static separation. Reverse proxy servers (clusters) play the roles of caching and dynamic request distribution: when a static resource is cached on the proxy server, it is returned directly to the browser; if the request is dynamic, it is forwarded to the application load balancer (application cluster) behind it.
Method 2, for example:
The above model is suitable for scenarios dominated by dynamic requests.
Because this is a hybrid mode, you can flexibly combine different methods based on the specific scenario; the two models above are for reference only.
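The dynamic/static-separation variant above can be sketched as a two-tier dispatcher. The cached path, server names, and cache contents are all hypothetical; the sketch only shows the decision the first tier makes: answer static resources from its own cache, and round-robin dynamic requests onward to the application cluster.

```python
# First tier: reverse proxy with a static-resource cache (contents made up).
STATIC_CACHE = {"/logo.png": b"cached image bytes"}
# Second tier: application cluster behind an internal balancer.
APP_SERVERS = ["app-1", "app-2"]
_turn = 0

def handle(path):
    """Return (who answered, body) for a request path."""
    global _turn
    if path in STATIC_CACHE:
        # Static hit: the proxy answers directly, never touching tier two.
        return ("proxy-cache", STATIC_CACHE[path])
    # Dynamic request: round robin over the application cluster.
    server = APP_SERVERS[_turn % len(APP_SERVERS)]
    _turn += 1
    return (server, f"rendered by {server}")

print(handle("/logo.png")[0])  # → proxy-cache
print(handle("/order")[0])     # → app-1
print(handle("/order")[0])     # → app-2
```

Each tier here could use a different technique from the sections above, e.g. DNS balancing in front of the proxies and DR mode inside the application cluster, which is exactly the flexibility the hybrid approach provides.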