"Linux" three major software load balancer comparison (LVS, Nginx, HAproxy)

Source: Internet
Author: User


(Information collected from the Internet, with some supplementary notes added.)

LVS:
1. Strong load resistance and high performance: it can reach roughly 60% of an F5 hardware load balancer, with low memory and CPU consumption.
2. Works at layer 4 of the network stack. Failover is handled via the VRRP protocol, and the actual forwarding is done by the Linux kernel, so LVS itself generates almost no extra traffic.
3. Stable and highly reliable, with a mature hot-standby solution (Keepalived + LVS).
4. Does not support regular-expression handling, so it cannot separate static from dynamic content.
5. Supports multiple load-balancing algorithms: RR (round robin), WRR (weighted round robin), LC (least connections), WLC (weighted least connections).
6. Configuration is relatively complex and depends heavily on the network, but stability is high.
7. LVS has four operating modes:
(1) NAT (network address translation)
(2) DR (direct routing)
(3) TUN (tunneling)
(4) FULLNAT
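As a rough sketch of how points 5 and 7 fit together, an LVS service in DR mode with the WRR scheduler could be set up with ipvsadm roughly as follows (the VIP and real-server addresses below are made up for illustration):

```shell
# Create a virtual service on the VIP, scheduled with weighted round robin (wrr)
ipvsadm -A -t 192.168.0.100:80 -s wrr
# Add two real servers in DR mode (-g = gatewaying/direct routing), with weights
ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.101:80 -g -w 2
ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.102:80 -g -w 1
```

In DR mode the real servers must also configure the VIP on a loopback interface and suppress ARP for it; in practice this whole setup is usually generated and managed by Keepalived rather than typed by hand.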

Nginx:
1. Works at layer 7 of the network stack, so it can apply traffic-splitting strategies to HTTP applications, for example by domain name or directory structure.
2. Nginx depends little on the network; in theory, any backend that can be pinged can be load-balanced.
3. Installation and configuration are relatively simple, and it is easy to test.
4. It can also withstand high load with good stability; Nginx was created to solve the C10K problem.
5. Health checks on back-end servers work only at the port level; URL-based detection is not supported.
6. Nginx's asynchronous handling of requests helps reduce the load on the back-end (node) servers.
7. Nginx supports only the HTTP, HTTPS, and mail protocols, so its range of application is narrower.
8. Direct session persistence is not supported, though ip_hash can work around this. Support for very large request headers is not great.
9. Nginx can also act as a web server and a cache.
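The layer-7 traffic splitting in point 1 can be sketched in nginx.conf as below; the upstream names, addresses, and paths are hypothetical:

```nginx
http {
    upstream static_servers  { server 10.0.0.11:80; }
    upstream dynamic_servers { server 10.0.0.21:8080; }

    server {
        listen 80;
        server_name www.example.com;        # split by domain name

        location /static/ {                 # split by directory structure
            proxy_pass http://static_servers;
        }
        location / {
            proxy_pass http://dynamic_servers;
        }
    }
}
```

This kind of routing on host names and URL paths is exactly what a layer-4 balancer like LVS cannot do.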

Supplement to point 6:
What Nginx asynchronous processing means:
Squid synchronous processing: the browser initiates a request, which is immediately forwarded to the backend, establishing a channel between browser and backend. This channel exists from the moment the request is initiated until the request completes.
Nginx asynchronous processing: the browser initiates a request, but the request does not go to the backend immediately. Nginx first receives the request data (headers), then forwards the request to the backend; after the backend finishes processing and returns the data to Nginx, Nginx streams the data back to the browser.

Benefits of asynchronous processing:
1. Suppose a user uploads a file, and because the user's network is slow, it takes half an hour to transfer the file to the server. With Squid's synchronous proxying, a connection to the backend is established as soon as the upload starts and the upload finishes half an hour later, so the back-end server holds that connection for half an hour. With Nginx's asynchronous proxying, Nginx receives the file first, so only Nginx and the user hold a half-hour connection; the back-end server opens no connection for this request during that time. When the upload finishes, Nginx sends the content to the backend, and since the bandwidth between Nginx and the backend is ample, this takes only about a second, so the back-end connection is held for just one second. Synchronous transfer ties up the back-end server for half an hour; asynchronous transfer ties it up for one second. The optimization is obvious.
2. In the example above, if the back-end server restarts for any reason, the upload is naturally interrupted, which is a very annoying experience for the user (you have probably had an upload die halfway). With an Nginx proxy in front, a back-end restart has minimal impact on user uploads, and Nginx itself is very stable and rarely needs restarting; even when it does, `kill -HUP` can restart Nginx without dropping connections.
3. Asynchronous transfer also makes load balancing more reliable. Why? With other balancers (LVS, HAProxy, Apache, etc.), each request gets only one chance: if the back-end server a request is sent to happens to go down at that moment, the request fails. Because Nginx is asynchronous, it can resend the request to the next backend, which returns normal data, and the request succeeds. Continuing the file-upload example: if a load balancer is used behind the Nginx proxy, and Nginx uploads the file to one backend which suddenly restarts, Nginx receives an error and uploads the file to another backend instead, so the user does not have to spend another half hour re-uploading.
4. If a user uploads a 10 GB file and the back-end servers were not designed for this, they may well crash. With Nginx, such cases can be blocked at the proxy using Nginx's upload-size limit; moreover, Nginx's own robustness is solid, so it can be trusted to stand up to the odd hostile user on the Internet.
Problems that asynchronous transfer can cause:
If the back-end server is able to report upload progress, that progress is lost behind an Nginx proxy; a third-party Nginx module is needed to provide it.

Supplement to point 8:
Allocation strategies supported by Nginx's upstream module, and how they work:
1. Round robin (default): each request is assigned to a different back-end server in turn. If a back-end server goes down, it is skipped and removed from rotation.
2. weight: specifies the polling probability; the access ratio is proportional to the weight. Useful when back-end servers have uneven performance.
3. ip_hash: each request is assigned according to a hash of the client IP, so requests from the same IP always reach the same back-end server. This can solve the session-persistence problem.
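The three strategies above can be sketched in a single upstream block (the back-end addresses are made up for illustration):

```nginx
upstream backend {
    # 1. Default: round robin, no directive needed.
    # 2. weight: server 10.0.0.1 receives ~3x the requests of 10.0.0.2.
    server 10.0.0.1:8080 weight=3;
    server 10.0.0.2:8080 weight=1;
    # 3. Uncomment to pin each client IP to one back-end server instead:
    # ip_hash;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```

Note that ip_hash and explicit weights serve different goals (persistence vs. capacity), so they are not normally combined.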

HAProxy:
1. Supports two proxy modes, TCP (layer 4) and HTTP (layer 7), and supports virtual hosts.
2. Makes up for some of Nginx's shortcomings, such as session persistence and cookie-based routing.
3. Supports URL-based detection, which helps a lot with checking the health of back-end servers.
4. Implements more load-balancing strategies, such as dynamic weighted round robin, weighted source-address hashing, weighted URL hashing, and weighted parameter hashing.
5. In terms of efficiency, HAProxy outperforms Nginx and balances load faster.
6. HAProxy can load-balance MySQL, detecting and balancing across back-end DB nodes.
7. Supported load-balancing algorithms include: roundrobin (round robin), static-rr (weighted round robin), source (source-address persistence), uri (request-URL hash), and rdp-cookie (cookie-based).
8. Cannot act as a web server or a cache.
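Points 6 and 7 can be illustrated with a minimal haproxy.cfg fragment for a MySQL backend; the listener name, check user, and server addresses are assumptions:

```haproxy
listen mysql-cluster
    bind *:3306
    mode tcp                                # layer-4 proxying, suitable for MySQL
    balance roundrobin                      # weighted round robin across DB nodes
    option mysql-check user haproxy_check   # MySQL-aware health check
    server db1 10.0.0.31:3306 check weight 2
    server db2 10.0.0.32:3306 check weight 1
```

The `mysql-check` option requires a corresponding MySQL account (here `haproxy_check`) to exist on the DB nodes; without it, a plain TCP `check` on the port still works, just less accurately.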

Business scenarios for the three software load balancers:
1. In a website's early stage, Nginx or HAProxy can be chosen as the reverse-proxy load balancer (with little traffic, you may not need load balancing at all), because their configuration is simple and their performance meets ordinary business needs. If the load balancer itself becoming a single point of failure is a concern, Nginx + Keepalived or HAProxy + Keepalived can be used to avoid it.
2. Once the site grows to a certain scale, LVS can be used to improve stability and forwarding efficiency; after all, LVS is more stable than Nginx/HAProxy and forwards more efficiently.
Note on comparing Nginx and HAProxy: Nginx supports only layer 7, has the largest user base, and is reliably stable. HAProxy supports both layer 4 and layer 7, supports more load-balancing algorithms, and supports session persistence.
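The Nginx/HAProxy + Keepalived setup mentioned in point 1 boils down to a VRRP instance that floats a VIP between two balancer nodes. A minimal sketch of keepalived.conf on the master node; the interface name, router id, and VIP are assumptions:

```keepalived
vrrp_instance VI_1 {
    state MASTER            # set to BACKUP on the standby node
    interface eth0
    virtual_router_id 51
    priority 100            # give the standby node a lower priority
    advert_int 1
    virtual_ipaddress {
        192.168.0.100       # the VIP that clients connect to
    }
}
```

If the master fails, the standby stops seeing VRRP advertisements and takes over the VIP, so clients keep hitting the same address.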

Several important factors for judging a load balancer's quality:
1. Session rate: number of requests processed per unit of time
2. Session concurrency: ability to handle concurrent sessions
3. Data rate: throughput of data processing

"Linux" three major software load balancer comparison (LVS, Nginx, HAproxy)
