Load Balancing: Analysis and Practice, Part 1 - Introduction
Series article index:
Part 1: Introduction - the need for load balancing
Part 2: Basic concepts - network fundamentals
Part 3: Basic concepts - server farms behind a load balancer
Part 4: Basic concepts - packet flow through a load balancer
Part 5: Basic concepts - health checks
Part 6: Basic concepts - Network Address Translation (NAT)
Part 7: Basic concepts - direct server return
Part 8: Advanced techniques - session persistence (I)
Part 9: Advanced techniques - session persistence (II)
Part 10: Advanced techniques - session persistence (III)
I'm sure most of you are familiar with load balancing, especially if you work in operations! Many developers assume load balancing belongs to the ops team and has nothing to do with them. I used to think so too, but I later realized I was wrong. Developers need to understand performance, and load balancing is very much a performance topic. So if you want to keep improving and grow into a design or architecture role, you must have a comprehensive grasp of the technology and be able to evaluate it as a whole, so that your projects achieve real availability, scalability, and flexibility across both software and hardware.

I have worked on many projects, mostly outsourced ones, and they almost never involved performance issues, let alone load balancing. When I first came into contact with the Internet industry, I was often told that enterprise developers don't understand performance. So I started paying attention to performance, and that is how I learned about load balancing. It felt as if my mind was suddenly opened and I had found a different world! Unfortunately, good material on load balancing is scarce: many articles online offer only scraps of information, and the moment you look for depth, there is nothing!

Technology is easy to pick up and hard to master! If you don't go deep, you have no core competitiveness, because everyone knows the basics and anyone can replace you. So go deep!

In this series I will analyze the principles of load balancing thoroughly and show how to apply them in practice! By the end, you may even feel you could write a simple load balancer yourself, and you will be able to analyze and reason about these problems with ease!
....
Load balancing is no longer a new concept, and it means more than just putting a few servers together to share the load. "Load balancing" is an umbrella term: the kinds most often discussed are server load balancing, global load balancing, firewall load balancing, and cache load balancing. (We will touch on each of these later in the series.)

A load balancer distributes requests across multiple server resources. It can also perform precise health checks on those servers and forward requests accordingly. In addition, because the load balancer sits in front of the servers, it can shield them from malicious traffic and improve security. It can even inspect the information inside each IP packet and intelligently choose a different server to handle it!
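To make the two core jobs just mentioned concrete (distributing requests plus health checks), here is a minimal sketch of round-robin selection that skips servers a health check has marked down. The class, server addresses, and API are illustrative assumptions, not taken from any real product:

```python
# Sketch: round-robin server selection that honors health-check state.
# A real balancer would update `healthy` from periodic probes.
import itertools

class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = list(servers)
        self.cycle = itertools.cycle(self.servers)
        self.healthy = set(self.servers)

    def mark_down(self, server):
        # Called when a health check fails for this server.
        self.healthy.discard(server)

    def pick(self):
        # Walk the cycle, skipping servers currently marked unhealthy.
        for _ in range(len(self.servers)):
            server = next(self.cycle)
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers available")

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print(lb.pick())          # 10.0.0.1
print(lb.pick())          # 10.0.0.2
lb.mark_down("10.0.0.3")
print(lb.pick())          # skips 10.0.0.3, wraps to 10.0.0.1
```

Later parts of the series cover health checks and persistence properly; this only shows the shape of the decision loop.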
The Necessity of Load Balancing

As the Internet has spread, more and more people have come to depend on online services, and nobody tolerates a service that suddenly crashes or becomes painfully slow. For applications that handle online transactions in particular, any problem means real financial loss. To keep services fast and stable, we keep upgrading the servers and related equipment behind them.

According to Moore's Law, computer processing speed doubles roughly every 18 months. Even that pace cannot keep up with the growth of the Internet and users' demand for services, and buying ever-better equipment is not only expensive but poor value for money.

What we are really facing here is a scalability challenge. Let's look at some common approaches to scalability.

As mentioned above, hardware upgrades cannot catch up with user demand, and this is where cluster technology comes in. Clustering, provided mainly by the large computer manufacturers, eased the problem to some extent. Let's look at two typical cluster architectures: loosely coupled systems and symmetric multi-processor systems.
Loosely Coupled Systems
A loosely coupled system is composed of many identical computer modules connected through a system bus. Each module contains its own processors, memory, disk controllers, disk drives, and network interfaces. In effect, each module can be seen as an independent computer; they simply work together. The sketch below shows the relationship:

A loosely coupled cluster uses inter-processor communication to distribute data across its modules, and it scales well only when a task can be split. For example, suppose a query must return all the data in a table, and that table has been partitioned into several files placed on different disks. A loosely coupled cluster can split the query into parallel subtasks, each searching a single file, and then merge the results.
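The scatter-gather query above can be sketched in a few lines. This is only a single-machine simulation of the idea; on a real cluster each subtask would run on a separate module, and the partitions, data, and predicate here are made up for illustration:

```python
# Sketch: one query split into per-file subtasks whose results are merged.
def scan_partition(rows, predicate):
    # Subtask: search one file's worth of rows independently.
    return [row for row in rows if predicate(row)]

def parallel_query(partitions, predicate):
    # Scatter: run one subtask per partition. Gather: merge the results.
    subresults = [scan_partition(p, predicate) for p in partitions]
    return [row for sub in subresults for row in sub]

partitions = [[1, 5, 9], [2, 6, 10], [3, 7, 11]]    # three "files"
print(parallel_query(partitions, lambda x: x > 5))  # [9, 6, 10, 7, 11]
```

The merge step is what makes this pattern work: each subtask is self-contained, so adding modules adds capacity.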
However, not every task can be split. Suppose instead that a task must update one field in that table. The field lives in exactly one of the files, so even if we carve the update into small subtasks, only one subtask actually does any work while the others sit idle.

In addition, making a loosely coupled cluster scale well requires a great deal of supporting technology and a lot of staff to maintain it, so the cost is very high.
Symmetric Multi-Processor Systems
A symmetric multi-processor (SMP) system has multiple cores sharing one memory. (The shared memory itself can become a performance bottleneck; many computers, especially multi-core servers, now use non-uniform memory architectures instead, which is worth studying if you are interested.) To scale well on such a system, the applications we develop must use multithreading: a task is divided into subtasks, and different subtasks run on different threads. The threads share memory and communicate through it, while the operating system schedules them across the cores. This approach suffers from the same limitation as the previous one: it only helps when the task can be split.
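The multithreaded pattern just described can be sketched with a simple summing task, which I've invented for illustration. (Note that in CPython the GIL limits true parallelism for pure-Python work; the point here is only the structure: subtasks on threads sharing one memory, scheduled by the OS.)

```python
# Sketch: one task split into subtasks that run on threads sharing memory.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each thread sums its own slice of the shared list.
    return sum(chunk)

data = list(range(1000))                                  # shared memory
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))            # merge step

print(total)  # 499500, the same as sum(data)
```

As with the loosely coupled case, a task like "update one field" cannot be split this way, and only one thread would do real work.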
OK, that is a brief introduction to these two approaches. I haven't gone into depth, since they are not the focus here.

Next, let's look at some of the foundations of load balancing technology.
We know that traditional switches and routers decide where to send a packet based on its IP address and MAC address. But such simple forwarding cannot meet the needs of a modern web farm. For example, a traditional router or switch cannot intelligently direct packets to a specific application or server, and it will keep sending packets to a server even after that server has gone down.

So how can we achieve load balancing?

First, let's review an important piece of theory: the OSI network model.
This diagram should look very familiar, so I will only introduce it briefly! OSI is the standard reference model for open network protocols. As you can see, it defines seven layers, from the physical layer at the bottom up to the application layer at the top. Network protocols such as TCP, UDP, IP, and HTTP map onto different layers of the model: IP sits at layer 3 (the network layer), TCP and UDP sit at layer 4 (the transport layer), and HTTP sits at layer 7 (the application layer).

You may be asking: how is this useful?
We also know that traditional routers and switches operate at layer 2 or layer 3 of the OSI model; that is where they decide how a packet is processed and where it is sent. Layer 2 and layer 3 switches do their job well, but much of the valuable information in the packet header goes unused. If we capture the packet, analyze its header, and then forward it according to our own rules, we can implement load balancing and related techniques at layers 2 and 3. This is what we often see or hear called layer 2/3 switching.

Likewise, if we capture a packet, analyze its headers from layer 4 up through layer 7, and forward it as needed, we get what is called "layer 4 through 7 switching". At layer 4, the TCP and UDP headers contain a great deal of information, which lets us forward requests much more intelligently. For example, when a user sends an HTTP request to a site listening on TCP port 80, we can discover this by parsing the header information, and then forward the request to an appropriate server.
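As a taste of layer 4-7 switching, here is a sketch of the decision a layer-7 balancer makes: parse the HTTP headers of an incoming request and choose a backend based on the Host header. The backend addresses and routing table are invented for illustration, and a real balancer would of course do this on live sockets rather than on a byte string:

```python
# Sketch: a layer-7 forwarding decision based on the HTTP Host header.
BACKENDS = {
    "images.example.com": "10.0.0.11",  # hypothetical static-content server
    "default": "10.0.0.10",             # everything else
}

def pick_backend(raw_request: bytes) -> str:
    # HTTP/1.1 headers are CRLF-separated lines after the request line.
    lines = raw_request.decode("ascii", errors="replace").split("\r\n")
    for line in lines[1:]:              # skip "GET /path HTTP/1.1"
        if line.lower().startswith("host:"):
            host = line.split(":", 1)[1].strip()
            return BACKENDS.get(host, BACKENDS["default"])
    return BACKENDS["default"]

request = b"GET /logo.png HTTP/1.1\r\nHost: images.example.com\r\n\r\n"
print(pick_backend(request))  # 10.0.0.11
```

A layer-4 device would stop at the TCP port; being able to read the Host header (or URL, or cookies) is exactly what the extra layers buy us, and later parts of the series build on this.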
That's all for today. It wasn't much, and there may have been a fair amount of rambling. :)