The Difference between Cluster, Distributed, and Load Balancing

Source: Internet
Author: User
Keywords: cluster, distributed, load balancing
Cluster
The concept of cluster
  A computer cluster is a group of loosely coupled computers, connected by software and/or hardware, that cooperate closely to carry out computing work; in a sense, the group can be viewed as a single computer. An individual computer in a cluster is usually called a node, and nodes are typically connected over a local area network, although other connection methods are possible. Clusters are generally used to improve computing speed and/or reliability beyond what a single computer offers, and they usually deliver far better cost-performance than a comparable single machine such as a workstation or supercomputer.
  For example, a single heavy-load operation can be shared across multiple node devices for parallel processing; after each node finishes, the results are aggregated and returned to the user, greatly improving the system's processing capacity. Clusters generally fall into several types:

High-availability cluster: when a node in the cluster fails, its tasks are automatically transferred to other healthy nodes. This also means a node can be taken offline for maintenance and then brought back online without affecting the operation of the cluster as a whole.
Load-balancing cluster: one or more front-end load balancers distribute the workload across a group of back-end servers, achieving high performance and high availability for the system as a whole.
High-performance computing cluster: computing tasks are distributed to different compute nodes in the cluster to increase computing capability; such clusters are therefore used mainly in scientific computing.
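The distribution step in a load-balancing cluster can be sketched in a few lines. This is a minimal illustration, not a real balancer; the back-end addresses are made up, and a simple round-robin rotation stands in for the balancer's scheduling algorithm.

```python
import itertools

# Hypothetical back-end pool; the addresses are illustrative only.
BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

def round_robin(backends):
    """Yield back-end servers in rotation, one per incoming request."""
    return itertools.cycle(backends)

dispatcher = round_robin(BACKENDS)
# The first six requests are spread evenly across the three nodes.
assignments = [next(dispatcher) for _ in range(6)]
print(assignments)
```

Each request is handed to the next server in the cycle, so the workload spreads evenly across the group, which is exactly the behavior the load-balancing cluster type describes.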
Distributed
  Cluster: the same service is deployed on multiple servers. Distributed: a business is split into multiple sub-businesses (or consists of genuinely different businesses) that are deployed on different servers.
   Simply put, a distributed system shortens the execution time of a single task to improve efficiency, while a cluster improves efficiency by increasing the number of tasks executed per unit of time. Take a large site such as Sina.com: as traffic grows, it can set up a cluster with a load balancer in front and several servers behind it, all providing the same service. Each incoming request is routed to whichever server is least loaded, and if one server goes down, the others take over. In a distributed system, by contrast, each node handles a different sub-business, so if one node goes down, that business may fail.
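The distributed idea of splitting one task into sub-tasks can be sketched with a toy example: summing a large list by dividing it into chunks, processing the chunks concurrently, and then summarizing the partial results. The chunking scheme and worker count are illustrative choices, not prescribed by the text.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for one sub-business: each "node" handles its own slice.
    return sum(chunk)

data = list(range(1, 101))                                   # one large task: sum 1..100
chunks = [data[i:i + 25] for i in range(0, len(data), 25)]   # split into 4 sub-tasks

# Each sub-task runs concurrently; results are then summarized, as the text describes.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process_chunk, chunks))

total = sum(partials)
print(total)  # 5050
```

Note the contrast with the cluster example: here each worker does a *different* piece of the work, so losing one worker loses that piece, whereas in a cluster every server can do the whole job.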

Load balancing
Concept
  As business volume grows, the access volume and data traffic at every core part of the network rise rapidly, and the required processing power and computing strength grow with them, until a single server can no longer cope. Discarding the existing equipment for a round of expensive hardware upgrades wastes the resources already in place, and the next increase in business volume then forces yet another costly upgrade; even a machine with excellent performance cannot keep up with sustained growth in demand.
Load-balancing technology presents the application resources of multiple real back-end servers as a single high-performance application server by configuring a virtual server IP (VIP). User requests are forwarded to the intranet servers according to a load-balancing algorithm; the servers return their responses to the load balancer, which passes them on to the users. This hides the internal network structure from Internet users and prevents them from accessing the back-end (intranet) servers directly, which makes the servers more secure and blocks attacks on the core network stack and on services running on other ports. The load-balancing equipment (software or hardware) also continuously checks the application state on each server and automatically isolates servers whose applications have failed. The result is a simple, scalable, and highly reliable application solution that addresses the insufficient performance, poor scalability, and low reliability of a single server.
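The "continuously check and automatically isolate" behavior above can be sketched with a basic TCP health check: a back-end counts as alive if its port accepts a connection. This is a minimal sketch under that assumption; real devices also support HTTP and application-level probes, and the addresses below are hypothetical.

```python
import socket

def is_healthy(host, port, timeout=1.0):
    """TCP health check: the server is considered alive if the port accepts a connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def healthy_backends(backends):
    """Keep only back-ends that pass the check, isolating invalid servers as the text describes."""
    return [(host, port) for host, port in backends if is_healthy(host, port)]

# Hypothetical intranet pool behind the VIP.
pool = [("192.168.1.10", 8080), ("192.168.1.11", 8080)]
# healthy_backends(pool) would return only the servers still answering.
```

A balancer would run such a check on a timer and route new requests only to the servers the check returns.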
  System expansion can be vertical (scale-up) or horizontal (scale-out). Vertical expansion increases the processing capacity of a single server by upgrading its hardware, such as CPU power, memory capacity, or disks; it cannot satisfy the demands of large-scale distributed systems (websites) with heavy traffic, high concurrency, and massive data. Horizontal expansion therefore adds machines to meet the processing requirements of large-scale services: if one machine is not enough, two or more are added to share the access load.

   One of the most important applications of load balancing is using multiple servers to provide a single service, a setup sometimes called a server farm. Load balancing is applied mainly to Web sites, large Internet Relay Chat networks, high-traffic file download sites, NNTP (Network News Transfer Protocol) services, and DNS services. Load balancers are now also beginning to support database services, in which case they are called database load balancers.
   Server load balancing has three basic features: the load-balancing algorithm, health checks, and session persistence. These three are the essential elements that keep load balancing working correctly; most other functions are refinements built on top of them. Each is described in detail below.
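The three features can be seen working together in one small sketch. This is an illustrative toy, not a real device: the server names are made up, health state is set from outside rather than probed, source-IP hashing stands in for session persistence, and round-robin stands in for the scheduling algorithm.

```python
import hashlib

class LoadBalancer:
    """Toy combining the three basic features: algorithm, health check result,
    and session persistence. All names here are illustrative."""

    def __init__(self, servers):
        self.servers = servers           # back-end pool
        self.healthy = set(servers)      # health-check state, updated externally
        self._rr = 0

    def mark_down(self, server):
        # A failed health check isolates the invalid application server.
        self.healthy.discard(server)

    def pick(self, client_ip=None):
        alive = [s for s in self.servers if s in self.healthy]
        if not alive:
            raise RuntimeError("no healthy back-end servers")
        if client_ip:
            # Session persistence: the same client always reaches the same server.
            idx = int(hashlib.md5(client_ip.encode()).hexdigest(), 16) % len(alive)
            return alive[idx]
        # Otherwise fall back to a round-robin algorithm.
        self._rr = (self._rr + 1) % len(alive)
        return alive[self._rr]

lb = LoadBalancer(["web1", "web2", "web3"])
sticky = lb.pick(client_ip="203.0.113.7")
assert lb.pick(client_ip="203.0.113.7") == sticky  # persistence holds
lb.mark_down(sticky)
assert lb.pick(client_ip="203.0.113.7") != sticky  # the failed node is isolated
```

The final two assertions show the interplay: persistence keeps a client on one server only as long as the health check says that server is alive.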
   Before load-balancing equipment is deployed, users access the server address directly (a firewall may map the server address to a different one, but access is still essentially one-to-one). When a single server can no longer handle the volume of users because of insufficient performance, multiple servers must be used to provide the service, and load balancing is the way to achieve this. The load-balancing device works by mapping the addresses of multiple servers to one external service IP, usually called the VIP. The mapping can translate a server IP directly to the VIP address, or map server IP:port pairs; different mapping methods use correspondingly different health checks, and with port mapping the server port and the VIP port may differ. The process is transparent to users, who do not know that the servers are load balanced, because they still access a single destination IP. Once a user's request reaches the load-balancing device, distributing it to an appropriate server is the device's job, carried out using the three features described above.
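The VIP-to-server mapping described above can be pictured as a small table: one external VIP:port fronting several intranet server:port pairs, with the VIP port and the server ports allowed to differ. The addresses are hypothetical, and a simple rotation stands in for the device's distribution logic.

```python
import itertools

# Hypothetical mapping table: one external VIP fronting several intranet servers.
# Note the port mapping: users hit port 80, the servers listen on 8080.
vip_map = {
    ("203.0.113.100", 80): itertools.cycle([   # VIP:port that users actually access
        ("192.168.1.10", 8080),                # intranet server:port
        ("192.168.1.11", 8080),
    ]),
}

def resolve(vip, port):
    """Forward a request arriving on VIP:port to the next intranet server."""
    return next(vip_map[(vip, port)])

target = resolve("203.0.113.100", 80)
print(target)  # the first request goes to 192.168.1.10:8080
```

From the user's side only 203.0.113.100:80 is ever visible, which is exactly why the internal structure stays hidden.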
