Load Balancing Part 2: Basic Knowledge of Load Balancing
In the previous article we started talking about the background knowledge around load balancing. Don't rush it; as the saying goes, you can't eat hot tofu in a hurry. A little bit every day!
Series article index:
Load Balancing Part 1: Why We Need Load Balancing
Load Balancing Part 2: Basic Concepts - Network Basics
Load Balancing Part 3: Basic Concepts - Load-Balanced Server Groups
Load Balancing Part 4: Basic Concepts - Packet Flow Through a Load Balancer
Load Balancing Part 5: Basic Concepts - Health Checks
Load Balancing Part 6: Basic Concepts - Network Address Translation (NAT)
Load Balancing Part 7: Basic Concepts - Direct Server Return
Load Balancing Part 8: Advanced Techniques - Session Persistence (Part 1)
Load Balancing Part 9: Advanced Techniques - Session Persistence (Part 2)
Load Balancing Part 10: Advanced Techniques - Session Persistence (Part 3)
Generally, a load balancer acts as a bridge between the network and the servers, as shown in the figure below:
As mentioned earlier, load balancing generally comes in the following types:
Server load balancing
Global server load balancing
Firewall load balancing
Server load balancing distributes incoming requests across a group of backend servers, overcoming the capacity and single-point-of-failure limits of a single server and thereby providing scalability and fault tolerance.
Global server load balancing forwards requests from users around the world to data centers in different regions, giving users faster responses. It also provides good fault tolerance: even if every server in one region goes down it does not matter, because servers in other regions can take over and data can be recovered from the central data center.
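To make this concrete, here is a minimal sketch (my own, not from the article) of one common way global load balancing is done: DNS-based routing, where the user is pointed at the virtual IP of the nearest healthy region and fails over to another region when that one is down. The region names and addresses are invented for illustration.

```python
# Conceptual sketch of DNS-style global load balancing (illustrative only).
REGIONAL_VIPS = {
    "asia":   "203.0.113.10",
    "europe": "198.51.100.10",
    "us":     "192.0.2.10",
}

# Regions currently passing health checks (kept up to date by monitoring).
HEALTHY_REGIONS = {"asia", "europe", "us"}


def resolve_for_user(user_region: str) -> str:
    """Return the virtual IP the user should connect to."""
    if user_region in HEALTHY_REGIONS:
        return REGIONAL_VIPS[user_region]
    # Preferred region is down: fail over to any region that is still healthy.
    for region in HEALTHY_REGIONS:
        return REGIONAL_VIPS[region]
    raise RuntimeError("no healthy region available")


print(resolve_for_user("asia"))    # nearest data center
HEALTHY_REGIONS.discard("asia")
print(resolve_for_user("asia"))    # failover to another region
```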
The main purpose of a firewall is security. To keep the firewall from becoming the site's performance bottleneck, several firewalls can be load balanced, as shown in the following figure (we will discuss the details later):
Next, let's take a look at some load balancing products.
Load balancing products can basically be divided into three categories: software, hardware, and switches.
Software load balancing: the product uses algorithms directly to coordinate requests, for example server weight algorithms, server affinity algorithms, and algorithms based on each server's resource usage, as sketched below.
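As a rough illustration of what such algorithms look like, here is a small Python sketch (my own, not taken from the article) of two of the strategies just mentioned: a server weight algorithm and a pick based on reported resource usage. Server names, weights, and load figures are made up.

```python
import itertools

SERVERS = {"web1": 5, "web2": 3, "web3": 1}                # name -> weight
CURRENT_LOAD = {"web1": 0.72, "web2": 0.35, "web3": 0.10}  # name -> CPU usage


def weighted_round_robin():
    """Yield server names in proportion to their configured weights."""
    expanded = [name for name, weight in SERVERS.items() for _ in range(weight)]
    return itertools.cycle(expanded)


def least_loaded():
    """Pick the server with the lowest reported resource usage."""
    return min(CURRENT_LOAD, key=CURRENT_LOAD.get)


picker = weighted_round_robin()
print([next(picker) for _ in range(9)])   # web1 appears 5x, web2 3x, web3 1x
print(least_loaded())                     # 'web3'
```

A real product combines strategies like these with health checks and session persistence, topics covered later in the series.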
Hardware load balancing: in many cases this is a device, combining hardware and purpose-built software, produced by a handful of major vendors. To us the device is a black box; we simply follow its instructions.
I have said a lot so far without getting to what you really care about, so let's get started.
First, it is very necessary to cover some background: network address translation, the TCP request process, and the server request processing flow.
Network Address Translation
Here we briefly introduce how the data-forwarding functions of layers 2 and 3 of the OSI model come into play in load balancing.
First, we know that a MAC address provides a unique identifier for the hardware throughout the network. At the same time, an IP address uniquely identifies a host.
In a switch, a port that receives data packets is called an "entry" port; likewise, a port that sends data packets is an "exit" port. When the switch receives a datagram, it decides which exit port to send it to, modifying some information in the datagram before it goes out.
Well, the groundwork is done and the concepts are in place. Next, let's tie them together:
When a layer 2 switch receives a packet, it determines the next destination based on the layer 2 header information (such as the MAC address) and forwards the packet accordingly. Similarly, a layer 3 switch examines the datagram's header information (for example, the IP address) and decides where to send the datagram next, rewriting the destination MAC address in the datagram before sending it on. A layer 3 switch is also called a "router", and the work it does is called "routing".
A load balancer sits at layer 3, or at layers 4, 5, and 6; it decides where to forward a request by inspecting the data packets that arrive at these layers.
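To tie this back to the layer 2/3 discussion, here is a conceptual Python sketch (an assumption on my part, not the article's own method) of how a device in the data path can steer a packet toward a chosen real server by rewriting its destination IP and destination MAC. The VIP, backend addresses, and MAC values are invented.

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Packet:
    src_ip: str
    dst_ip: str
    dst_mac: str


VIP = "10.0.0.100"                      # the address clients actually connect to
BACKENDS = {"10.0.1.11": "aa:bb:cc:00:00:11",
            "10.0.1.12": "aa:bb:cc:00:00:12"}


def forward(packet: Packet, chosen_backend_ip: str) -> Packet:
    """Rewrite the destination so the packet reaches the chosen real server."""
    if packet.dst_ip != VIP:
        return packet                    # not addressed to us, leave untouched
    return replace(packet,
                   dst_ip=chosen_backend_ip,               # layer 3 rewrite
                   dst_mac=BACKENDS[chosen_backend_ip])     # layer 2 rewrite


incoming = Packet(src_ip="203.0.113.7", dst_ip=VIP, dst_mac="aa:bb:cc:ff:ff:ff")
print(forward(incoming, "10.0.1.11"))
```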
TCP request process
To make the later articles easier to follow, let's walk through the TCP request and processing flow; see the figure below:
I believe this figure looks familiar. As we all know, every TCP connection begins with a three-way handshake. Let's take a closer look.
First, the client sends a SYN datagram to the server to request a data exchange. The important information in this SYN datagram is the source IP address (the source is the requesting party, in this case the client), the source port number, the destination IP address (the address of the party being requested, in this case the server), and the destination port number.
The following figure shows a clear explanation:
The SYN datagram also carries a sequence number, which increments with each new datagram the client sends to the server.
After the server receives the SYN datagram, it replies to the client with a SYN-ACK, which carries the server's own sequence number.
When the client receives this response, it sends an ACK back to the server. At this point the connection is established, and the client and server begin exchanging data over this connection channel.
Each established TCP connection is uniquely identified by four values: the source IP address, the source port number, the destination IP address, and the destination port number. Once the connection is established, every datagram carried over it contains these four values. Note also that the roles of "source" and "destination" keep switching between the client and the server: when a datagram travels from the client to the server, the client is the "source" and the server is the "destination"; when it travels from the server to the client, the server is the "source" and the client is the "destination". Everything is relative to the direction of the packet: the "source" is the end the data flows out of, and the "destination" is the end the datagram is sent to. Understanding this is one of the keys to understanding load balancing.
When the client and the server finish exchanging data, the client sends a FIN datagram, the server returns a FIN-ACK, and the TCP connection is torn down.
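From application code, all of this is hidden behind the socket API: connect() performs the three-way handshake, the local and peer addresses form the four-value identifier, and closing the socket triggers the FIN exchange. Here is a minimal Python sketch; the host and port come from the article's example and may not actually be reachable.

```python
import socket

# connect() completes SYN -> SYN-ACK -> ACK before returning.
with socket.create_connection(("www.agilesharp.com", 80), timeout=5) as conn:
    local = conn.getsockname()   # (source IP, source port): the client side
    peer = conn.getpeername()    # (destination IP, destination port): the server
    print("connection:", local, "->", peer)
# Leaving the 'with' block closes the socket, which triggers the FIN exchange.
```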
Server request processing flow
Here we mainly analyze the specific flow of requesting a web page.
Note: the deeper you go into a technology, the more you have to master. Building applications is no longer a matter of dragging and dropping a few controls and writing a pile of create, read, update, and delete code; there are far more things to consider and to master in depth.
Suppose a user opens the browser, types www.agilesharp.com into the address bar, and presses Enter, and the site's page appears in front of us. A great deal actually happens behind the scenes; let's take a look.
First, when Enter is pressed, the browser resolves the domain name www.agilesharp.com into an IP address. During resolution the browser first consults the local DNS server, which uses the relevant protocols and methods to obtain the IP address for the domain name and returns it to the browser. If the local DNS server does not have the IP address, the lookup has to go out to remote DNS servers (something worth keeping in mind when tuning performance). Once the IP address is found, the browser establishes a TCP connection using the three-way handshake described earlier.
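For reference, here is a minimal Python sketch of that resolution step as seen from application code: getaddrinfo() asks the operating system's resolver, which consults the local DNS server and, on a miss, remote DNS servers, as described above. The domain is the article's example and may not resolve today.

```python
import socket


def resolve(hostname: str) -> list[str]:
    """Return the IPv4 addresses the resolver finds for a hostname."""
    results = socket.getaddrinfo(hostname, 80, family=socket.AF_INET,
                                 type=socket.SOCK_STREAM)
    # Each result is (family, type, proto, canonname, sockaddr); keep the IPs.
    return sorted({sockaddr[0] for *_ignored, sockaddr in results})


print(resolve("www.agilesharp.com"))   # domain taken from the article's example
```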
Once the connection is established, the browser sends an HTTP request to the server asking for the www.agilesharp.com page.
The request sent to the server looks like the following:
After the server accepts the request, it processes it and sends the response back, as shown below:
Note: the response has two parts, the header and the content of the page, separated by a blank line; the sketch after this note illustrates the exchange.
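To make the request and response formats concrete, here is a rough Python sketch (my own illustration): a minimal GET request is written over the TCP connection, and the response is split into headers and body at the blank line. The host is the article's example domain and may not be reachable; error handling is omitted for brevity.

```python
import socket

HOST = "www.agilesharp.com"
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"                      # the blank line that ends the request headers
)

with socket.create_connection((HOST, 80), timeout=5) as conn:
    conn.sendall(request.encode("ascii"))
    response = b""
    while chunk := conn.recv(4096):   # read until the server closes the connection
        response += chunk

headers, _, body = response.partition(b"\r\n\r\n")   # blank line separates them
print(headers.decode("iso-8859-1"))
print(f"--- body: {len(body)} bytes ---")
```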
After the server sends the response back, the browser receives it, loads the HTML structure of the page, and then parses and renders it from top to bottom. If the page needs additional resources such as JS, CSS, or images, the browser first looks in its cache; if a resource is not there, it resolves the resource's domain name again, sends a request, and fetches the data, and so on until the entire page is fully parsed. We will not go into the detailed steps here, because that is not the focus of this article.