HAProxy Configuration for Server Load Balancing


Common open-source software load balancers include Nginx, LVS, and HAProxy. Below is a comparison of the three (LVS vs. Nginx vs. HAProxy).

I. LVS:
1. Strong load resistance and high performance, reaching roughly 60% of F5 hardware, with low memory and CPU consumption.
2. Works at Layer 4 of the network and only distributes requests (commonly combined with the VRRP protocol for failover); the actual traffic is handled by the Linux kernel, so LVS itself generates almost no traffic.
3. Good stability and reliability, with a complete hot-standby solution (for example, LVS + Keepalived).
4. A wide range of applications; it can load balance almost any application.
5. Regular-expression processing is not supported, and dynamic/static separation is not supported.
6. Supported load balancing algorithms: rr (round robin), wrr (weighted round robin), lc (least connections), and wlc (weighted least connections); a short ipvsadm sketch follows this list.
7. Relatively complicated configuration and a strong dependency on the network, but very high stability.
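
As a concrete illustration of the algorithms above, the following ipvsadm commands are a minimal sketch of a round-robin virtual service with two real servers. The VIP 192.168.93.100 and the direct-routing mode are assumptions for illustration, not taken from this article:

    # create a virtual TCP service on the VIP with the rr (round robin) scheduler
    ipvsadm -A -t 192.168.93.100:80 -s rr
    # add the two real servers in direct-routing (gateway) mode
    ipvsadm -a -t 192.168.93.100:80 -r 192.168.93.5:80 -g
    ipvsadm -a -t 192.168.93.100:80 -r 192.168.93.7:80 -g
    # switch to wrr and give one server more weight, if needed
    # ipvsadm -E -t 192.168.93.100:80 -s wrr
    # ipvsadm -e -t 192.168.93.100:80 -r 192.168.93.5:80 -g -w 2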

II. Nginx:
1. Works at Layer 7 of the network and can implement traffic-splitting policies for HTTP applications, for example based on domain names or directory structures.
2. Nginx has little dependence on the network; in theory, as long as the backend can be pinged, load balancing works.
3. Installation and configuration are relatively simple, and it is easy to test.
4. It can also withstand high load and remains stable; it can generally support more than 10,000 concurrent connections.
5. Backend server health checks only support port detection, not URL detection.
6. Nginx handles requests asynchronously, which helps reduce the load on the backend node servers.
7. Nginx only supports HTTP, HTTPS, and email protocols, so its scope of application is narrower.
8. Session persistence is not supported directly, but it can be worked around with ip_hash (see the sketch after this list). Support for big request headers is not very good.
9. Supported load balancing algorithms: round-robin, weighted round-robin, and ip-hash.
10. Nginx can also be used as a cache for web servers.
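
To make item 8 concrete, here is a minimal Nginx upstream sketch using ip_hash for session persistence. The upstream name and the listen port are assumptions; the backend addresses are the ones used later in this article:

    upstream web_backend {
        ip_hash;                         # stick each client IP to the same backend
        server 192.168.93.5:80;
        server 192.168.93.7:80;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://web_backend;   # forward requests to the upstream group
        }
    }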

III. Features of HAProxy:
1. Two proxy modes are supported, TCP (Layer 4) and HTTP (Layer 7), and virtual hosts are supported.
2. It makes up for some of Nginx's shortcomings, such as session persistence and cookie-based guidance.
3. It supports URL-based health checks to detect problems on the backend servers.
4. More load balancing policies are available: dynamic weighted round robin, weighted source hash, weighted URL hash, and weighted parameter hash have all been implemented.
5. In terms of efficiency, HAProxy offers better load balancing speed than Nginx.
6. HAProxy can load balance MySQL and run health checks against the backend DB nodes.
7. Supported load balancing algorithms: roundrobin (round robin), weighted round robin, source (source address persistence), uri (request URL), and rdp-cookie (cookie-based); a sketch combining several of these features follows this list.
8. It cannot be used as a cache for web servers.
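
A minimal HAProxy sketch combining several of the features above: a URL-based health check, cookie-based session persistence, and weighted round robin. The section name, cookie name, and check URL are assumptions for illustration:

    listen web_front
        bind 0.0.0.0:80
        mode http
        balance roundrobin                         # other options: source, uri, url_param, rdp-cookie
        option httpchk GET /index.html             # URL-based health check instead of a bare port probe
        cookie SERVERID insert indirect nocache    # cookie-based session persistence
        server s1 192.168.93.5:80 weight 1 cookie s1 check
        server s2 192.168.93.7:80 weight 1 cookie s2 check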

Application scenarios of the three software load balancers:
1. At the initial stage of building a website, Nginx or HAProxy can be used as the reverse-proxy load balancer (or no load balancer at all if there is not much traffic), because the configuration is simple and the performance is sufficient for general business scenarios. If the single point of failure of the load balancer itself is a concern, Nginx + Keepalived or HAProxy + Keepalived can be used to avoid it (a minimal Keepalived sketch follows this list).
2. After the website's concurrency reaches a certain level, LVS can be used to improve stability and forwarding efficiency; after all, LVS is more stable and efficient than Nginx/HAProxy. However, LVS places higher demands on maintenance staff and requires a larger investment.
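
For the Nginx + Keepalived / HAProxy + Keepalived combinations mentioned above, a minimal Keepalived VRRP sketch for the master load balancer might look like the following. The interface name, router ID, and VIP 192.168.93.100 are assumptions; the standby node would use state BACKUP and a lower priority:

    vrrp_instance VI_1 {
        state MASTER                 # BACKUP on the standby load balancer
        interface eth0
        virtual_router_id 51
        priority 100                 # lower value on the standby node
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            192.168.93.100           # the VIP that clients actually connect to
        }
    }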

Note: Compared with HAProxy, Nginx supports Layer 7 protocols, has the largest user base, and is relatively reliable in terms of stability. HAProxy supports both Layer 4 and Layer 7 load balancing algorithms as well as session persistence. The specific choice depends on the application scenario. HAProxy's user base is currently growing because it makes up for some of Nginx's shortcomings.

Several important factors measure the performance of a load balancer:
1. Session rate: the number of requests processed per unit of time
2. Session concurrency: the concurrent processing capability
3. Data rate: the data processing capability
According to official test statistics, HAProxy can process up to about 20,000 requests per unit of time while maintaining a large number of concurrent connections, and its maximum data processing capability is 10 Gbps. Based on the above, HAProxy is an excellent load balancer and reverse proxy server.

To summarize, the main advantages of HAProxy are:

1. It is free and open source, and its stability is also very good. This can be seen from some of my small projects: a single HAProxy instance runs well, and its stability is comparable to that of LVS;

2. According to the official documentation, HAProxy can saturate a 10 Gbps link ("New benchmark of HAProxy at 10 Gbps using Myricom's 10GbE NICs (Myri-10G PCI-Express)"), which is amazing for a software-level load balancer;

3. HAProxy can load balance MySQL, mail, and other non-web services; we often use HAProxy for MySQL (read) load balancing (a sketch follows this list);

4. It provides a powerful page for monitoring server status. In our real environment, we combine it with Nagios for email or SMS alerts; this is one of the reasons I like it very much;

5. HAProxy supports virtual hosts.
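
As a sketch of the MySQL (read) load balancing mentioned in point 3: the listen name, port, backend addresses, and the haproxy_check user below are assumptions, and option mysql-check requires a matching user to exist on the MySQL servers:

    listen mysql_read
        bind 0.0.0.0:3306
        mode tcp
        balance roundrobin
        option mysql-check user haproxy_check    # MySQL-level health check, not just a TCP probe
        server db1 192.168.93.5:3306 check
        server db2 192.168.93.7:3306 check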

The following describes how to use HAProxy as a load balancer:

Current Environment:

Ubuntu 16.04 + HAProxy: 192.168.93.21
CentOS 6 + httpd: 192.168.93.5
CentOS 6 + httpd: 192.168.93.7

HAProxy configuration file:

# vim /etc/haproxy/haproxy.cfg

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log global
    mode http                       # default mode {tcp|http|health}; tcp is Layer 4, http is Layer 7, health only returns OK
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

######## statistics page configuration ########
listen admin_stats
    bind 0.0.0.0:1080               # listening port
    mode http                       # HTTP (Layer 7) mode
    option httplog                  # use the HTTP log format
    # log 127.0.0.1 local0 err
    maxconn 10
    stats refresh 30s               # automatic refresh interval of the statistics page
    stats uri /stats                # URL of the statistics page
    stats realm XingCloud\ Haproxy  # text shown in the statistics page's password prompt
    stats auth admin:admin          # user name and password for the statistics page
    stats hide-version              # hide the HAProxy version on the statistics page

######## test configuration ########
listen test
    bind 0.0.0.0:8080               # note: the port number should not be lower than 1024
    mode tcp
    # maxconn 4086
    # log 127.0.0.1 local0 debug
    server s1 192.168.93.5:80
    server s2 192.168.93.7:80
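
Before restarting, the syntax of the file can be checked (assuming the standard package path used above):

    # haproxy -c -f /etc/haproxy/haproxy.cfg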

To access the monitoring page, configure the stats uri (set to /stats in the configuration above) and restart the service:

# service haproxy restart
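
With the statistics section above, the monitoring page should then be reachable at the listen address used in this article, using the credentials from the stats auth line:

    http://192.168.93.21:1080/stats     (user name: admin, password: admin)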

Next, we will explain how to use HAProxy to load balance web servers.
Configure two web servers: 192.168.93.5 and 192.168.93.7.
Perform the same operations on both:

1. Experiment environment

CentOS release 6.8 (Final)

2. Configure the web servers (node5/node7):

To make testing easier, disable SELinux and iptables (commands shown below).
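
On CentOS 6 this can be done roughly as follows (a sketch; adjust to your own security policy):

    # setenforce 0                                                          # disable SELinux enforcement immediately
    # sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # keep it disabled after a reboot
    # service iptables stop                                                 # stop the firewall
    # chkconfig iptables off                                                # do not start it at boot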

The default httpd settings are used, so no further configuration is required.

# yum install httpd -y
# vim /etc/httpd/conf/httpd.conf

httpd listening port (the Listen directive):

DocumentRoot: the document root directory, i.e. the path where the web pages are stored.
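
For reference, the relevant defaults in /etc/httpd/conf/httpd.conf on CentOS 6 look roughly like this (shown for orientation only, since the defaults are kept):

    Listen 80
    DocumentRoot "/var/www/html"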

Restart httpd:

# service httpd restart

Modify the display content:

# vim /var/www/html/index.html
I'm node5!!! My IP is 192.168.93.5...
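
The second node would get the analogous content (an assumption mirroring node5; the article only shows node5 explicitly):

    # vim /var/www/html/index.html
    I'm node7!!! My IP is 192.168.93.7...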

Access the page again to check the content:

In this way, the two web services are ready.

Configure the load balancer (this experiment uses only one HAProxy instance: 192.168.93.21):
# vim /etc/haproxy/haproxy.cfg

######## test configuration ########
listen test
    bind 0.0.0.0:8080               # note: the port number should not be lower than 1024
    mode tcp
    # maxconn 4086
    # log 127.0.0.1 local0 debug
    server s1 192.168.93.5:80
    server s2 192.168.93.7:80

Request 192.168.93.21:8080 in a browser.
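
Alternatively, repeated curl requests from any host should alternate between the two backends; the output below is illustrative:

    # curl http://192.168.93.21:8080
    I'm node5!!! My IP is 192.168.93.5...
    # curl http://192.168.93.21:8080
    I'm node7!!! My IP is 192.168.93.7...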

The result above shows that requests to the frontend 192.168.93.21 are distributed by HAProxy across the two backend web servers, 192.168.93.5 and 192.168.93.7.
In this way, when one of the two fails, traffic can still be directed to the remaining healthy web server, which greatly improves the reliability of the system.

Copyright notice: This is an original article by the blogger and may not be reproduced without the blogger's permission. Source: http://blog.csdn.net/qq_36357820/article/details/79084898
