Comparing HAProxy and nginx load balancing behavior

Source: Internet
Author: User
Tags: nginx, reverse proxy, haproxy, load balancing

To compare the behavior of HAProxy and nginx load balancing, I built both load-balancing environments on a single test machine (in the experiments below, the load balancer and the backend servers all run on one computer) and analyzed packet captures for each.
The following are the results of the packet capture analysis under these two load balancing environments:

1) Experimental records in the HAProxy load-balancing environment. When a backend machine goes down, requests are still forwarded to the dead machine until the next health check detects the failure, so those requests are lost.
The HAProxy load-balancing experiment is recorded below.
First, look at the HAProxy configuration.
It is configured with inter 20000, i.e. a 20 s check interval; this is deliberately long so that HAProxy's health-check mechanism stands out clearly in the packet capture.
# cat /usr/local/haproxy/conf/haproxy.cfg

.......
listen test9090
    bind 127.0.0.1:9090
    mode tcp
    server localhost90 127.0.0.1:90 check inter 20000
    server localhost91 127.0.0.1:91 check inter 20000

2--Backend machine nginx configuration
# cat /usr/local/nginx/conf/vhost/test.conf

server {
    listen 90;
    listen 91;
    location / {
        root /var/www/html;
    }
}

First create a home page index.html file in /var/www/html:
# echo "This is Test" > /var/www/html/index.html
Then test access. Open two more windows on the machine to check whether requests are balanced properly, running the following commands in each:
# curl 127.0.0.1:9090
# tcpdump -i lo -nn 'port 90'
# tcpdump -i lo -nn 'port 91'

The captures above prove that nginx is serving on both ports 90 and 91. Packet captures show finer detail than the logs do, so captures are used for the analysis throughout.

3--Packet capture of HAProxy's health-check mechanism
Because HAProxy was configured above with inter 20000, i.e. one check every 20 s, the capture also shows a probe every 20 s.
Note: these probes are sent even when no client requests are in flight; request handling and health probing are independent.
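The probing loop described above can be sketched as follows. This is an illustrative model only, not HAProxy's actual code; the probe function and names are hypothetical stand-ins for HAProxy's TCP check:

```python
import time

# Toy model of active health checks: a background loop probes every
# backend on a fixed interval, independent of client traffic.
# `probe` is a hypothetical stand-in for haproxy's TCP connect check.
def run_checks(backends, probe, inter=20.0, rounds=1):
    state = {b: "UP" for b in backends}
    for _ in range(rounds):
        for b in backends:
            state[b] = "UP" if probe(b) else "DOWN"
        time.sleep(inter)  # "check inter 20000" (ms) in the config => 20 s

    return state

# Example: port 91 is dead, port 90 answers.
state = run_checks(["127.0.0.1:90", "127.0.0.1:91"],
                   probe=lambda b: b.endswith(":90"), inter=0.0)
print(state)  # {'127.0.0.1:90': 'UP', '127.0.0.1:91': 'DOWN'}
```

The point of the sketch is that the loop runs whether or not any client request is in flight, which is exactly what the capture shows.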

4--Simulating an online failure: kill nginx on port 91
Remove the listen 91; line from the nginx configuration and reload nginx. Front-end requests distributed to port 91 now fail. The packet capture shows that HAProxy needs three consecutive failed probes before it takes the faulty backend out of rotation. With a 20 s check interval, it can therefore take up to 60 s to remove the failed server. If 10,000 requests arrive within those 60 s, roughly 5,000 of them (the half routed to port 91) are lost. In production the check interval would of course not be 20 s; typically it is around 3 s, so the failed server is cut off much sooner.
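The failure-window arithmetic above can be checked directly. The traffic figure is the hypothetical one from the text; the fall count of 3 matches what the capture showed:

```python
# Back-of-the-envelope check of the failure window described above.
inter_ms = 20000                 # "check inter 20000" from the haproxy config
fall = 3                         # consecutive failed probes before the server is cut off
window_s = inter_ms / 1000 * fall
requests_in_window = 10000       # hypothetical traffic arriving during the window
backends = 2                     # traffic split evenly across two backends
lost = requests_in_window // backends
print(window_s, lost)            # 60.0 5000
```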

2) Experimental records in the nginx load-balancing environment

1--nginx's reverse-proxy load-balancing configuration is as follows:
# cat /usr/local/nginx/conf/vhost/lb.conf

upstream backend {
    server 127.0.0.1:90 weight=1 max_fails=3 fail_timeout=30;
    server 127.0.0.1:91 weight=5 max_fails=3 fail_timeout=30;
}

server {
    listen 9090;
    location / {
        proxy_pass http://backend;
    }
}
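The weight=1 vs weight=5 setting above means that out of every 6 requests, one goes to port 90 and five to port 91. A minimal sketch of smooth weighted round-robin (the selection algorithm modern nginx uses for upstreams) shows this ratio; it is a simplification, not nginx's source:

```python
# Minimal sketch of smooth weighted round-robin, simplified from the
# algorithm modern nginx uses to pick an upstream server.
def smooth_wrr(weights, n):
    current = {s: 0 for s in weights}
    total = sum(weights.values())
    picks = []
    for _ in range(n):
        for s in current:             # each round, every server's current
            current[s] += weights[s]  # weight grows by its configured weight
        best = max(current, key=current.get)
        current[best] -= total        # the chosen server "pays back" the total
        picks.append(best)
    return picks

order = smooth_wrr({"127.0.0.1:90": 1, "127.0.0.1:91": 5}, 6)
print(order.count("127.0.0.1:90"), order.count("127.0.0.1:91"))  # 1 5
```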

The front end still listens on 9090 and forwards requests to ports 90 and 91.
2--The backend nginx configuration is as follows. You can put an index.html for testing in /var/www/html/.
# cat /usr/local/nginx/conf/vhost/test.conf

server {
    listen 90;
    listen 91;
    location / {
        root /var/www/html;
    }
}

Create a home page index.html file in /var/www/html:
# echo "This is Test2" > /var/www/html/index.html

3--Packet capture of nginx's reverse-proxy health-check behavior. The capture shows request packets arriving on both ports 90 and 91.
The capture also shows that when there are no client requests, nginx sends no traffic at all to ports 90 and 91. In other words, nginx does not probe the backend proxy servers when there are no requests.

4--Simulating an online failure: kill nginx on port 91
Remove the listen 91; line and reload nginx; front-end access is unaffected. The capture shows that requests are still sent to port 91, but since port 91 returns no data, nginx retries the request against port 90. In other words, if the backend on port 91 dies, front-end requests are unaffected, as long as the remaining capacity can absorb the load.
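The retry behavior seen in that capture can be modeled as below. The names (Peer, proxy, send) are illustrative, not nginx internals; the max_fails/fail_timeout semantics follow the upstream configuration shown earlier:

```python
# Toy model of nginx's passive checks and retry: a peer that fails
# max_fails times within fail_timeout seconds is skipped for a while,
# and a failed request falls through to the next usable peer, so the
# client still gets an answer if any backend is alive.
class Peer:
    def __init__(self, addr, max_fails=3, fail_timeout=30.0):
        self.addr = addr
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.last_fail = 0.0

    def usable(self, now):
        return not (self.fails >= self.max_fails
                    and now - self.last_fail < self.fail_timeout)

    def mark_failed(self, now):
        self.fails += 1
        self.last_fail = now

def proxy(peers, send, now=0.0):
    for p in peers:
        if not p.usable(now):
            continue
        ok, body = send(p.addr)
        if ok:
            return body
        p.mark_failed(now)  # passive check: failures are observed on real traffic
    return None

# Port 91 is dead; the request is retried and succeeds via port 90.
peers = [Peer("127.0.0.1:91"), Peer("127.0.0.1:90")]
dead_91 = lambda addr: (True, "This is Test2") if addr.endswith(":90") else (False, "")
print(proxy(peers, dead_91))  # This is Test2
```

Note the contrast with the HAProxy sketch: failures are only noticed when real traffic hits the dead peer, but the client never sees the failure because the request is retried in-line.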

The above experiment shows that:
1) HAProxy actively health-checks the backend servers at all times (it probes even when no requests are coming in):
If a backend fails before any request arrives, HAProxy will have cut it off by the time traffic comes. But if a backend fails between probes, HAProxy keeps forwarding requests to the dead machine until several consecutive probes fail; only then does it remove the machine and route traffic to the healthy backends. This inevitably causes front-end failures for a short window.
2) nginx does not continuously health-check the backend servers:
When a backend fails, nginx still distributes requests to it as usual; when a request gets no data back, nginx retries it against a healthy backend until it succeeds. That is, even if nginx sends a request to a dead backend, it transparently retries on another server, so front-end access is unaffected.
3) Therefore, with HAProxy as the front-end load balancer, taking a backend server down for maintenance under high concurrency will certainly affect some users. With nginx as the front-end load balancer, cutting off a few backends does not affect users, as long as the remaining servers can handle the concurrency.

