What are the differences between clustering, distribution, and load balancing?

There are big differences between clusters, distributed systems, and load balancing in PHP web development. This article spells out the specific differences between them, so without further ado, let's take a look.

The concept of clustering

A computer cluster is a loosely integrated set of computer software and/or hardware that works closely together to perform computation; in a sense, a cluster can be viewed as a single computer. An individual computer in a cluster is usually called a node and is typically connected over a local area network, although other arrangements are possible. Clusters are commonly used to improve the computing speed and/or reliability beyond what a single computer provides, and they are generally far more cost-effective than a single computer of comparable speed or reliability, such as a workstation or a supercomputer.
For example, a single heavy workload can be split across multiple nodes for parallel processing; when each node finishes, the partial results are aggregated and returned to the user, greatly improving the system's processing capacity (a minimal sketch of this split-and-merge pattern follows the list below). Clusters are generally divided into several kinds:

    • High-availability clustering: typically, when a node in the cluster fails, the tasks running on it are automatically transferred to other healthy nodes. It also means that a node can be taken offline for maintenance and brought back online without affecting the operation of the cluster as a whole.

    • Load-balancing clustering: when a load-balanced cluster runs, the workload is distributed through one or more front-end load balancers to a set of backend servers, achieving high performance and high availability for the system as a whole.

    • High-performance computing clustering: high-performance computing clusters are used in scientific computing; they increase computing power by distributing computing tasks across the compute nodes in the cluster.
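
As a minimal sketch of the split-and-merge pattern above, the snippet below uses Python's multiprocessing as a stand-in for cluster nodes; the function names and the toy workload are illustrative assumptions, not anything from the article:

```python
# A minimal sketch of splitting one heavy job across worker "nodes" and
# merging the partial results. multiprocessing.Pool stands in for cluster
# nodes; in a real cluster each chunk would go to a separate machine.
from multiprocessing import Pool

def process_chunk(chunk):
    """Work done by one 'node': here, just a sum of squares over its slice."""
    return sum(x * x for x in chunk)

def run_on_cluster(data, nodes=4):
    # Split the single heavy operation into one chunk per node.
    size = (len(data) + nodes - 1) // nodes
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(nodes) as pool:
        partials = pool.map(process_chunk, chunks)  # parallel processing
    return sum(partials)                            # summarize and return

if __name__ == "__main__":
    print(run_on_cluster(list(range(1_000_000))))
```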

Distributed

Cluster: the same business deployed on multiple servers. Distributed: one business split into multiple sub-businesses, or several genuinely different businesses, deployed on different servers.
To put it simply, a distributed system improves efficiency by shortening the execution time of a single task, while a cluster improves efficiency by increasing the number of tasks executed per unit of time. For example, a high-traffic site such as Sina can use a cluster: a balancing server in front, with several servers behind it all performing the same business. When a request arrives, the balancer checks which server is lightly loaded and hands the request to it; if one server crashes, the other servers can take over its work. In a distributed system, by contrast, each node handles a different business, so if one node crashes, that business may fail.

Load Balancing

Concept

As traffic grows, the volume of data flowing through the core of the network increases rapidly, and the processing power and computing strength required grow with it, until a single server simply cannot bear the load. In that situation, throwing away the existing equipment for a massive hardware upgrade wastes the existing resources, and the next increase in business volume forces yet another expensive upgrade; eventually, even the highest-performing single device cannot keep up with the growth in traffic.
Load balancing technology works by configuring a virtual server IP (VIP) that presents the application resources of multiple real backend servers as one high-performance application server. A load-balancing algorithm forwards each user request to a backend (intranet) server; the server returns its response to the load balancer, which then sends it on to the user. This hides the intranet structure from Internet users, prevents them from accessing the backend servers directly, makes the servers more secure, and blocks attacks on the core network stack and on services running on other ports. The load-balancing device (software or hardware) also continuously checks the application status on each server and automatically isolates any server whose application has failed. The result is a simple, scalable, and highly reliable application architecture that solves the problems of insufficient processing power, poor scalability, and low reliability on a single server.
System expansion can be divided into vertical (scale-up) and horizontal (scale-out) expansion. Vertical expansion increases the hardware capacity of a single machine, such as CPU power, memory, and disk, but it cannot meet the needs of a large distributed system (website) facing heavy traffic, high concurrency, and massive data. Horizontal expansion is therefore required, adding machines to match the processing demands of a large web service: if one machine is not enough, add two or more and let them share the access load.

One of the most important applications of load balancing is using multiple servers to provide a single service, an arrangement sometimes referred to as a server farm. Load balancing is typically applied to web sites, large Internet Relay Chat networks, high-traffic file-download sites, NNTP (Network News Transfer Protocol) services, and DNS services. Load balancers are now also beginning to support database services, where they are called database load balancers.
Server load balancing has three basic features: the load-balancing algorithm, health checks, and session persistence. These three are the essential elements for load balancing to work correctly; most other features are refinements built on top of them. Let's look at each feature's function and principle in detail below.
Before a load-balancing device is deployed, users access the server address directly (there may be a firewall in between mapping the server address to a different one, but essentially it is still one-to-one access). When a single server can no longer handle the number of users because its performance is insufficient, multiple servers must provide the service, and load balancing is how that is achieved. A load-balancing device maps the addresses of multiple servers to a single external service IP (usually called the VIP); the mapping can map a server IP directly to the VIP address, or map server IP:port to VIP:port, and different mapping methods use corresponding health checks. With port mapping, the server port and the VIP port need not be the same. The whole process is invisible to the client: users do not even know that the servers are load balanced, because they only ever access one destination IP. Once a user's request reaches the load-balancing device, distributing it to the right server is the device's job, which is where the three features above come in.
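
As a minimal sketch of the mapping just described, the table below maps one VIP:port to several real server IP:port pairs; the addresses and the pick callback are illustrative assumptions:

```python
# Illustrative VIP mapping: one external VIP:port fronting several real
# (intranet) servers. All addresses here are made up for the example.
VIP = ("199.237.202.124", 80)

# The real servers behind the VIP; their ports need not match the VIP port.
REAL_SERVERS = [
    ("172.16.20.1", 8080),
    ("172.16.20.2", 8080),
    ("172.16.20.3", 8080),
]

def resolve(vip, real_servers, pick):
    """Map a request for the VIP to one real server, chosen by `pick`."""
    assert vip == VIP, "only one VIP is configured in this sketch"
    return pick(real_servers)

# e.g. resolve(VIP, REAL_SERVERS, pick=lambda servers: servers[0])
```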
Let's do a detailed analysis of the access process:

A user (IP 207.17.117.20) wants to visit the domain www.a10networks.com. First, a DNS query resolves the domain to its public address, 199.237.202.124. The user at 207.17.117.20 then accesses 199.237.202.124, so the packet arrives at the load-balancing device, which distributes it to an appropriate server.

When the load-balancing device sends the packet on to the server, the packet is modified. Before it reaches the device, the source address is 207.17.117.20 and the destination address is 199.237.202.124. When the device forwards the packet to the selected server, the source address is still 207.17.117.20, but the destination address becomes 172.16.20.1. This is called destination-address NAT (DNAT, destination network address translation). In server load balancing, DNAT is generally required (there is another mode, Direct Server Return or DSR, which does not use DNAT; that is discussed separately). Depending on the deployment mode, the source address sometimes also needs to be translated, which is called source-address NAT (SNAT): in general, one-arm (bypass) mode needs SNAT, while inline (serial) mode does not. The example here is inline mode, so the source address is not translated.
Now look at the server's return packet. It also goes through an address-translation process, but the source/destination addresses in the reply are simply the reverse of those in the request: the packet leaves the server with source address 172.16.20.1 and destination address 207.17.117.20. When it reaches the load-balancing device, the device changes the source address to 199.237.202.124 and forwards it to the user, keeping the access consistent end to end.
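
A minimal sketch of the DNAT rewrite on the forward path and the reverse rewrite on the reply, reusing the addresses from the example above; the Packet type and function names are illustrative assumptions:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    src: str
    dst: str
    payload: bytes = b""

VIP = "199.237.202.124"
REAL_SERVER = "172.16.20.1"

def dnat_forward(pkt: Packet) -> Packet:
    """Client -> LB -> server: rewrite the destination from VIP to the real server."""
    assert pkt.dst == VIP
    return replace(pkt, dst=REAL_SERVER)

def dnat_reply(pkt: Packet) -> Packet:
    """Server -> LB -> client: rewrite the source from the real server back to the VIP."""
    assert pkt.src == REAL_SERVER
    return replace(pkt, src=VIP)

request = Packet(src="207.17.117.20", dst=VIP)
to_server = dnat_forward(request)   # dst becomes 172.16.20.1
reply = Packet(src=REAL_SERVER, dst="207.17.117.20")
to_client = dnat_reply(reply)       # src becomes the VIP again
```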

Load Balancing algorithm

In general, load-balancing devices support multiple load-distribution policies by default, such as the following (three of them are sketched in code after this list):

    • Round robin (RoundRobin): requests are sent to each server in turn in a sequential loop. When one of the servers fails, AX (the load-balancing device in this vendor's examples) takes it out of the rotation and does not include it in subsequent polling until it returns to normal.

    • Ratio: each server is assigned a weight, and user requests are distributed to the servers in proportion to those weights. When one of the servers fails, AX takes it out of the server queue and does not assign it user requests until it returns to normal.

    • Priority: all servers are grouped and each group is given a priority. User requests are assigned to the highest-priority server group (within a group, a pre-set round-robin or ratio algorithm distributes the requests). When all the servers in the highest-priority group, or a specified number of them, fail, AX sends requests to the next-priority group. This effectively gives the user a hot-standby arrangement.

    • Least connections (LeastConnection): AX records the current number of connections on each server or service port and passes each new connection to the server with the fewest connections. When one of the servers fails, AX takes it out of the server queue and does not assign it user requests until it returns to normal.

    • Fastest response time (FastResponseTime): new connections are passed to the server that responds fastest. When one of the servers fails, AX takes it out of the server queue and does not assign it user requests until it returns to normal.

    • Hashing (Hash): the client's source address and port are hashed, and the result of the computation determines which server the request is forwarded to. When one of the servers fails, it is taken out of the server queue and is not assigned user requests until it returns to normal.

    • Packet-content-based distribution: for example, the device examines the HTTP URL, and if the URL has a .jpg extension, it forwards the packet to a designated server.
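
Here is a minimal sketch of three of the policies above, round robin, ratio, and source-address hashing; the server addresses, weights, and function names are illustrative assumptions:

```python
import hashlib
import itertools

SERVERS = ["172.16.20.1", "172.16.20.2", "172.16.20.3"]
WEIGHTS = {"172.16.20.1": 3, "172.16.20.2": 1, "172.16.20.3": 1}

# Round robin: cycle through the servers in order.
_rr = itertools.cycle(SERVERS)
def round_robin():
    return next(_rr)

# Ratio: expand each server by its weight, then cycle; a server with
# weight 3 receives three requests for every one the others get.
_weighted = itertools.cycle([s for s in SERVERS for _ in range(WEIGHTS[s])])
def ratio():
    return next(_weighted)

# Hash: the same client (source IP and port) always lands on the same server.
def source_hash(client_ip: str, client_port: int):
    digest = hashlib.md5(f"{client_ip}:{client_port}".encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

# e.g. source_hash("207.17.117.20", 54321) is stable across calls
```

A real device would also consult the health-check state (next section) and skip servers currently marked down.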

Health Check

Health checks determine the availability of each service the server exposes. Load-balancing devices typically support several health-check methods, such as Ping, TCP, UDP, HTTP, FTP, and DNS. Ping is a layer-3 check that verifies the server IP is reachable, while TCP/UDP are layer-4 checks that verify whether a service port is up or down. For more accurate checking, a layer-7 check can be used: for example, an HTTP health check fetches a page and looks for a specified string in its content; if the string is present, the service is up, and if the page does not contain it or cannot be fetched, the server's web service is considered unavailable (down). For example, if the load-balancing device finds that port 80 on server 172.16.20.3 is down, it stops forwarding subsequent connections to that server and instead distributes packets to the other servers according to the algorithm.
When creating a health check, you can set the check interval and the number of attempts. With an interval of 5 seconds and 3 attempts, the device initiates a health check every 5 seconds; if a check fails, it retries, and after 3 consecutive failures the service is marked down. The device then keeps probing the down server every 5 seconds and re-marks it as up as soon as a check succeeds. The interval and number of attempts should be set according to the situation; the principle is to detect failures without impacting the business and without putting a heavy burden on the load-balancing device.
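
A minimal sketch of the interval-and-retries logic for an HTTP health check as described above; the probe URL, expected string, and function names are illustrative assumptions:

```python
import time
import urllib.request

def http_probe(url: str, expect: str, timeout: float = 2.0) -> bool:
    """Layer-7 check: fetch the page and look for an expected string."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return expect in resp.read().decode(errors="replace")
    except OSError:
        return False

def monitor(url: str, expect: str, interval: float = 5.0, retries: int = 3):
    """Mark the server down after `retries` consecutive failures, up on success."""
    failures, up = 0, True
    while True:
        if http_probe(url, expect):
            failures, up = 0, True
        else:
            failures += 1
            if failures >= retries:
                up = False   # stop forwarding new connections to this server
        print(f"{url} is {'up' if up else 'down'}")
        time.sleep(interval)

# e.g. monitor("http://172.16.20.3/health", expect="OK")
```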

Session Persistence

How do you ensure that two HTTP requests from the same user are forwarded to the same server? This requires configuring session persistence on the load-balancing device.
Session persistence maintains the continuity and consistency of a session. Because it is difficult for servers to synchronize user session state in real time, a user's successive requests must be kept on a single server. For example, suppose a user visits an e-commerce site: if the login is handled by the first server but the purchase action is handled by the second, the second server knows nothing about the user's session and the purchase fails. This situation requires session persistence, so that the user's actions are all handled by the first server and succeed. Of course, not all access needs session persistence: for static pages such as a site's news channel, every server has the same content, and such requests do not need to stick to one server.
The vast majority of load-balancing products support two basic types of session persistence: source/destination-address persistence and cookie persistence. Other common methods, such as hash and URL persistence, are not supported by every device. Session persistence must be configured to match the application, otherwise it can cause unbalanced load or even access failures. Below we mainly analyze session persistence for applications with a B/S (browser/server) structure.

Applications with a B/S structure:

For ordinary B/S content, such as a site's static pages, no session persistence needs to be configured. But for B/S business systems, especially those built on middleware platforms, session persistence must be configured. In general, source-address persistence meets the need; however, given that clients may sit in environments that defeat source-address persistence (see the NAT discussion below), cookie persistence is the better approach. With cookie persistence, the load-balancing device stores the identity of the server it selected in a cookie sent to the client; the client presents the cookie on subsequent visits, and the device reads it to keep the session on the previously selected server. Cookies come in two kinds: file cookies and memory cookies. A file cookie is stored on the client's hard disk, so as long as it has not expired, the client returns to the same server even if the browser is closed for a while. A memory cookie lives only in the browser's memory, so its lifetime runs from when the browser is opened until it is closed. Because browsers apply default security settings to cookies and some clients may forbid file cookies, application development now tends to use memory cookies.
However, memory cookies are not a cure-all either: a browser may disable cookies entirely for security, in which case cookie persistence stops working. In that case a session ID can be used to implement persistence: the session ID is carried as a URL parameter or in a hidden form field (<input type="hidden">), and the load balancer parses the session ID to make the distribution decision (both approaches are sketched below).
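
A minimal sketch of cookie persistence with a session-ID fallback; the cookie name, server list, and helper functions are illustrative assumptions:

```python
import hashlib

SERVERS = ["172.16.20.1", "172.16.20.2", "172.16.20.3"]
COOKIE = "LB_SERVER"   # illustrative cookie name

def pick_server(cookies: dict, session_id: str | None, default_pick) -> str:
    # 1. Cookie persistence: honor the server the device chose last time.
    if COOKIE in cookies and cookies[COOKIE] in SERVERS:
        return cookies[COOKIE]
    # 2. Session-ID fallback: hash the ID so the same session always maps
    #    to the same server, even when cookies are disabled.
    if session_id:
        h = int(hashlib.md5(session_id.encode()).hexdigest(), 16)
        return SERVERS[h % len(SERVERS)]
    # 3. First contact: fall back to the normal balancing algorithm.
    return default_pick(SERVERS)

def with_persistence_cookie(response_headers: dict, server: str) -> dict:
    """Attach the chosen server to the response so the client brings it back."""
    response_headers["Set-Cookie"] = f"{COOKIE}={server}; Path=/"
    return response_headers
```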
Another approach is to save each session in a database. Because this increases the load on the database, it is not great for performance; a database is best reserved for session data that needs to live a long time. To avoid a single point of failure in the database and to improve its scalability, the session database is typically replicated across multiple servers, with a load balancer distributing requests among the database servers.
Source/destination-address persistence is, in practice, not very useful, because clients may connect to the Internet through DHCP, NAT, or a web proxy, so their IP addresses can change frequently and the scheme's quality of service cannot be guaranteed.
NAT (Network Address Translation): when hosts inside a private network have been assigned local IP addresses (private addresses valid only within that network) but need to communicate with hosts on the Internet (without encryption), NAT can be used. This requires installing NAT software on the router that connects the private network to the Internet. Such a router is called a NAT router, and it has at least one valid global IP address. All hosts using local addresses can then communicate with the outside world by having the NAT router translate their local addresses into global IP addresses.
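
A minimal sketch of the translation table a NAT router maintains; the public address and the port-allocation scheme are illustrative assumptions:

```python
import itertools

PUBLIC_IP = "199.237.202.130"        # the router's global address (made up)
_next_port = itertools.count(40000)  # simplistic public-port allocator
_table: dict[tuple, tuple] = {}      # (private_ip, port) -> (public_ip, port)
_reverse: dict[tuple, tuple] = {}

def translate_out(private_ip: str, private_port: int) -> tuple:
    """Outbound: map a private endpoint to a public one, creating it on first use."""
    key = (private_ip, private_port)
    if key not in _table:
        mapped = (PUBLIC_IP, next(_next_port))
        _table[key] = mapped
        _reverse[mapped] = key
    return _table[key]

def translate_in(public_ip: str, public_port: int) -> tuple:
    """Inbound reply: map the public endpoint back to the private host."""
    return _reverse[(public_ip, public_port)]
```

This is exactly why source-address persistence breaks down: many private hosts appear to the load balancer as the same public IP.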

Additional Benefits of Load Balancing

High scalability

Higher volumes of concurrent requests can be handled simply by adding or removing servers.

(Server) Health Check

The load balancer checks the health of the backend servers at the application layer and removes failed servers from the server pool, improving reliability.

TCP connection multiplexing (TCP Connection reuse)

TCP connection multiplexing carries HTTP requests from multiple front-end clients over TCP connections that the load balancer has already established to the backend server. This technology greatly reduces the server's performance load, removes the latency of setting up new TCP connections with the server, minimizes the number of concurrent connections to the backend server, and reduces the server's resource consumption.
Normally, before sending an HTTP request, a client completes a TCP three-way handshake with the server to establish the connection, then sends the request. The server receives and processes the request and sends the result back; then client and server send each other FIN packets and close the connection once each FIN is ACKed. Handled this way, even a simple HTTP request takes a dozen or so TCP packets to complete.
With TCP connection multiplexing, a client (say, ClientA) performs the three-way handshake with the load-balancing device and sends its HTTP request. After receiving the request, the device checks whether it already holds an idle persistent connection to the server; if not, it opens a new one. When the HTTP response is complete, the client negotiates connection close with the device, but the device keeps its connection to the server open. When another client (say, ClientB) sends an HTTP request, the device forwards it over that idle held connection, avoiding the latency and server resource cost of a new TCP connection.
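
A minimal sketch of the idle-connection pool that the device keeps toward the backend; the class shape and addresses are illustrative assumptions, and error handling is omitted:

```python
import socket
from collections import deque

class BackendPool:
    """Keep idle TCP connections to one backend and reuse them across clients."""
    def __init__(self, host: str, port: int):
        self.addr = (host, port)
        self.idle: deque[socket.socket] = deque()

    def acquire(self) -> socket.socket:
        # Reuse an idle connection if one exists; otherwise dial a new one.
        if self.idle:
            return self.idle.popleft()
        return socket.create_connection(self.addr)

    def release(self, conn: socket.socket) -> None:
        # The client-side connection may close, but we hold this one open.
        self.idle.append(conn)

# pool = BackendPool("172.16.20.1", 8080)
# conn = pool.acquire()   # ClientA's request goes over this connection
# pool.release(conn)      # kept idle, ready for ClientB's request
```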

In HTTP/1.1, a client can send multiple HTTP requests over a single TCP connection, a technique called HTTP multiplexing. The most fundamental difference from TCP connection multiplexing is direction: TCP connection multiplexing funnels HTTP requests from multiple clients onto a single server-side TCP connection and is a feature unique to load-balancing devices, whereas HTTP multiplexing carries multiple HTTP requests from a single client over one TCP connection and is a feature of the HTTP/1.1 protocol, currently supported by most browsers.
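
A minimal client-side demonstration of the HTTP/1.1 behavior just described, sending two requests over one TCP connection with Python's standard http.client; the host and paths are illustrative:

```python
import http.client

# One TCP connection, two HTTP/1.1 requests: http.client keeps the
# connection open between requests as long as the server allows it.
conn = http.client.HTTPConnection("example.com", 80, timeout=5)
for path in ("/", "/about"):
    conn.request("GET", path)
    resp = conn.getresponse()
    body = resp.read()   # must drain the body before reusing the socket
    print(path, resp.status, len(body))
conn.close()
```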

HTTP caching

A load balancer can cache static content and serve it to users directly on request, without forwarding those requests back to the servers.
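
A minimal sketch of that idea, serving static paths from an in-memory cache in front of the backend; the suffix list and fetch callback are illustrative assumptions, and a real device would also honor Cache-Control headers and expiry:

```python
_cache: dict[str, bytes] = {}

STATIC_SUFFIXES = (".jpg", ".png", ".css", ".js")

def handle(path: str, fetch_from_backend) -> bytes:
    """Serve static paths from the cache; everything else goes to the backend."""
    if path.endswith(STATIC_SUFFIXES):
        if path not in _cache:
            _cache[path] = fetch_from_backend(path)  # first miss fills the cache
        return _cache[path]                          # later hits skip the backend
    return fetch_from_backend(path)
```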

TCP buffering

TCP buffering addresses the waste of server resources caused by the mismatch between the backend server's speed and the client's slower front-end network. The link between the client and the load balancer typically has high latency and low bandwidth, while the load balancer and the servers share a low-latency, high-bandwidth LAN. The load balancer can therefore buffer the backend server's response data and forward it to slow clients itself, letting the backend web server release the corresponding thread immediately to handle other tasks.
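
A minimal sketch of the buffering idea: drain the fast LAN-side backend completely, release it, and then spend as long as needed feeding the slow client; socket handling is simplified and the names are illustrative:

```python
import socket

def relay_with_buffering(backend: socket.socket, client: socket.socket) -> None:
    # 1. Drain the whole response from the fast LAN-side backend into memory.
    chunks = []
    while True:
        data = backend.recv(65536)
        if not data:
            break
        chunks.append(data)
    backend.close()   # the backend worker is free as soon as this returns

    # 2. Now take as long as needed to feed the slow WAN-side client.
    client.sendall(b"".join(chunks))
    client.close()
```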

SSL acceleration

In general, HTTP travels over the network in cleartext and can be illegally eavesdropped on, which is especially dangerous for authentication credentials. To avoid such security problems, the HTTP protocol is usually encrypted with SSL (i.e., HTTPS) to secure the whole transfer. In SSL communication, asymmetric-key cryptography is first used to exchange authentication information and to agree on a session key between the server and the browser; that key is then used to encrypt and decrypt the information exchanged during the session.
SSL is a security technology that consumes a lot of CPU resources. Currently, most load-balancing devices use dedicated SSL-acceleration chips (hardware load balancers) for SSL processing. This offers higher SSL performance than traditional SSL encryption on the servers, saves significant server resources, and lets the servers concentrate on processing business requests. In addition, centralized SSL processing simplifies certificate management and reduces day-to-day administration work.
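
A minimal sketch of SSL/TLS termination in front of a plain-HTTP backend using Python's ssl module; the certificate paths and addresses are illustrative assumptions (a hardware device would do this in an acceleration chip rather than in software):

```python
import socket
import ssl

# TLS context for the client-facing side; cert/key paths are illustrative.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="vip.crt", keyfile="vip.key")

listener = socket.create_server(("0.0.0.0", 443))
with ctx.wrap_socket(listener, server_side=True) as tls_listener:
    conn, addr = tls_listener.accept()   # TLS handshake happens here
    request = conn.recv(65536)           # already-decrypted HTTP request

    # Forward in cleartext over the fast internal LAN to the chosen server.
    backend = socket.create_connection(("172.16.20.1", 8080))
    backend.sendall(request)
    conn.sendall(backend.recv(65536))    # reply is re-encrypted by the TLS layer
```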

Content filtering

Some load balancers can modify the data passing through them as required.

Intrusion Prevention Features

The firewall ensures security at the network and transport layers, while this feature provides security at the application layer.

Classification

The following discusses how load balancing can be implemented at different layers:

DNS Load Balancing

DNS provides the domain-name resolution service. When visiting a site, the first step is a query to the domain's DNS server for the IP address the name points to; in this step the DNS server maps the domain name to an IP address. That mapping can also be one-to-many, in which case the DNS server acts as a load-balancing scheduler, spreading user requests across multiple servers. You can use the dig command to inspect the DNS records for a domain such as Baidu's.

The dig output (omitted here) showed that Baidu had three A records.
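
You can reproduce the same observation with Python's standard resolver instead of dig; the number of A records returned depends on when and where you query:

```python
import socket

# gethostbyname_ex returns (canonical_name, aliases, list_of_A_records).
name, aliases, addrs = socket.gethostbyname_ex("www.baidu.com")
print(name)
for ip in addrs:
    print("A record:", ip)
```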

The advantages of this technique are that it is simple and cheap to implement, works for most TCP/IP applications, and the DNS server can pick, from all the available A records, the server closest to the user. The disadvantages are just as obvious. First, this is not true load balancing: the DNS server distributes requests across the backend web servers evenly (or geographically) without regard to each server's current load; if the servers differ in configuration and processing capacity, the slowest web server becomes the system's bottleneck, and the more capable servers cannot play their full part. Second, it does not consider fault tolerance: if a backend web server fails, the DNS server will keep handing that failed server's address to clients, so those clients get no response. This last point is the fatal one, because DNS results are cached for a considerable time (a typical refresh period is around 24 hours), so a significant portion of users may be unable to reach the web service at all. For these reasons, recently built web sites rarely rely on this scheme alone.

Link Layer (OSI layer 2) load balancing

Load balancing at the data link layer of the communication stack works by modifying MAC addresses.
When distributing data, the balancer does not modify IP addresses (at this layer it cannot see them); it rewrites only the destination MAC address. All backend servers are configured with a virtual IP identical to the load balancer's IP address, so packets are distributed without changing their source or destination addresses.
Because the real server's IP matches the destination IP of the request, responses do not need to pass back through the load-balancing server for address translation; the response packet can return directly to the user's browser, which prevents the load balancer's network card bandwidth from becoming a bottleneck. This is also known as direct-routing mode (DR mode).

Performance is very good, but the configuration is complex; this mode is currently in wide use.

Transport Layer (OSI layer 4) load balancing

The transport layer is the fourth OSI layer and includes TCP and UDP. Popular transport-layer load balancers include HAProxy (which is also used for application-layer load balancing) and IPVS.
The choice of internal server is determined mainly by the destination address and port in the packet, plus the server-selection method configured on the load-balancing device.
Taking common TCP as the example: when the load-balancing device receives the client's first SYN, it selects the best server as described above, rewrites the destination IP in the packet to that backend server's IP, and forwards it directly. The TCP connection, that is, the three-way handshake, is established directly between the client and the server; the load-balancing device performs only a router-like forwarding action. In some deployments, to guarantee that the server's replies return through the load-balancing device correctly, the packet's original source address may also be rewritten during forwarding (the selection-on-SYN logic is sketched below).
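
A minimal sketch of that layer-4 decision: pick a backend when the first SYN arrives, remember the flow, and from then on only rewrite addresses; the Packet type, flow table, and backend list are illustrative assumptions:

```python
from dataclasses import dataclass, replace
import itertools

@dataclass(frozen=True)
class Packet:
    src: str           # "ip:port"
    dst: str
    syn: bool = False

BACKENDS = itertools.cycle(["172.16.20.1:80", "172.16.20.2:80"])
_flows: dict[str, str] = {}   # client endpoint -> chosen backend

def forward(pkt: Packet) -> Packet:
    """Choose the backend on the first SYN; later packets follow the flow table."""
    if pkt.src not in _flows:
        assert pkt.syn, "a flow must start with a SYN"
        _flows[pkt.src] = next(BACKENDS)   # selection happens once, at SYN time
    # Rewrite only the destination; the three-way handshake itself runs
    # directly between client and server.
    return replace(pkt, dst=_flows[pkt.src])

# forward(Packet(src="207.17.117.20:54321", dst="199.237.202.124:80", syn=True))
```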

Application Layer (OSI layer 7) load balancing

The application layer is the seventh OSI layer and includes HTTP, HTTPS, and WebSockets. A very popular and battle-tested application-layer load balancer is nginx ("engine X").
So-called seven-layer load balancing, also known as "content exchange," selects the internal server mainly on the basis of genuinely meaningful application-layer content in the message, combined with the server-selection method configured on the load-balancing device. Note that at this layer the device can see the full URL of each HTTP request, so it can distribute requests based on their content.

Taking TCP as the example again: if the load-balancing device is to select the server based on real application-layer content, it can only see that content after a connection exists, so it must first complete the three-way handshake with the client itself, receive the application-layer message the client sends, and then decide on the internal server based on specific fields in the message plus the configured server-selection method. In this case the load-balancing device behaves more like a proxy server, maintaining separate TCP connections with the front-end client and with the backend server. From this point of view, seven-layer balancing places significantly higher demands on the device, and its seven-layer throughput is bound to be lower than a four-layer deployment. So why is a seven-layer load balancer needed at all? (A sketch of the content-based routing it enables follows.)
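
A minimal sketch of content-based routing at layer 7: parse the HTTP request line and pick a server pool by URL, for example sending images to image servers; the pools, suffixes, and parsing are simplified, illustrative assumptions:

```python
IMAGE_SERVERS = ["172.16.20.10:80", "172.16.20.11:80"]  # can use caching
TEXT_SERVERS  = ["172.16.20.20:80"]                     # can use compression
DEFAULT_POOL  = ["172.16.20.1:80", "172.16.20.2:80"]

def route(request_line: str) -> list[str]:
    """Pick a pool from the full URL in the HTTP request line."""
    # e.g. request_line = "GET /photos/cat.jpg HTTP/1.1"
    method, path, _version = request_line.split()
    if path.endswith((".jpg", ".png", ".gif")):
        return IMAGE_SERVERS
    if path.endswith((".html", ".txt", ".css")):
        return TEXT_SERVERS
    return DEFAULT_POOL

assert route("GET /photos/cat.jpg HTTP/1.1") == IMAGE_SERVERS
```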

The benefits of seven-layer load balancing

Seven-layer load balancing makes the whole network more "intelligent"; most of the load-balancing benefits listed earlier rest on it. For example, for the user traffic visiting a web site, image requests can be routed at layer 7 to dedicated image servers that can use caching, while text requests can be forwarded to dedicated text servers that can use compression. This is only a small example of layer-7 use; in terms of the underlying technique, this approach can modify the client's requests and the server's responses in any meaningful way, greatly improving the application system's flexibility at the network layer.
Another frequently mentioned feature is security. In the most common network attack, the SYN flood, hackers control many source clients and use spoofed IP addresses to send SYN packets at a single target, usually in huge volume, exhausting the server's resources to achieve denial of service (DoS). The technical principle also shows that in four-layer mode these SYN attacks are forwarded straight to the backend servers, whereas in seven-layer mode they are naturally cut off at the load-balancing device, which must complete the handshake itself, so the backend servers' normal operation is unaffected. In addition, the load-balancing device can apply policies at layer 7 to filter specific traffic, such as SQL injection and other application-level attacks, further improving the overall security of the system at the application level.
Today, seven-layer load balancing focuses mainly on the widely used HTTP protocol, so it is applied mostly to web sites and internal information platforms, that is, systems developed in the B/S style. Four-layer load balancing corresponds to other TCP applications, such as ERP systems developed in the C/S style.
