The difference between Layer 4 and Layer 7 load balancing
Layer 4
"Layer 4" refers to the fourth layer (the transport layer) of the OSI reference model. A Layer 4 load balancer, also called a Layer 4 switch, implements load balancing based on IP address plus port, mainly by analyzing traffic at the IP and TCP/UDP layers. Common Layer 4 load balancers include LVS and F5.
Take a typical TCP application: when the load balancer receives the first SYN request from the client, it selects the best backend server according to the configured load-balancing algorithm, rewrites the destination IP address in the packet to that backend server's IP, and forwards the packet directly to it. This completes the load balancing of the request.
In this process, the TCP connection is established directly between the client and the server; the load balancer only performs a forwarding action similar to that of a router. Under some load-balancing strategies, the original source address of the packet may also be rewritten while it is forwarded, to ensure that the backend server's response can be returned correctly through the load balancer.
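As a concrete illustration of this Layer 4 forwarding, LVS can be configured with the ipvsadm tool. This is only a minimal sketch: the VIP 192.168.0.100 and the real-server addresses are hypothetical, and the commands require root privileges and the ip_vs kernel module.

```
# Create a virtual TCP service on VIP 192.168.0.100:80 with round-robin scheduling
# ipvsadm -A -t 192.168.0.100:80 -s rr
# Add two real servers in NAT (masquerading) mode, so the director rewrites packet addresses
# ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.8:80 -m
# ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.9:80 -m
# List the current IPVS table
# ipvsadm -L -n
```

Note that the director only rewrites and forwards packets; the TCP handshake itself still happens between the client and the chosen real server, exactly as described above.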
Layer 7
Likewise, a Layer 7 load balancer, also known as a Layer 7 switch, operates at the top of the OSI model, the application layer. Here the load balancer supports a variety of application protocols, such as HTTP, FTP, and SMTP. A Layer 7 load balancer can select a backend server based on the content of the message, combined with the load-balancing algorithm, so it is also called a "content switch".
For example, for web server load balancing, a Layer 7 load balancer can split traffic not only by IP plus port, but can also decide the load-balancing strategy based on the website's URL, the domain name, the browser type, and the language.
For example, suppose two web servers host a Chinese site and an English site under two domain names, A and B, and we want requests for domain A to reach the English site and requests for domain B to reach the Chinese site. This is almost impossible to achieve with a Layer 4 load balancer, but a Layer 7 load balancer can handle it by inspecting which domain and page the client is requesting.
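A sketch of how such a domain-based split might look in HAProxy configuration; the domain names a.example.com and b.example.com and the backend addresses are hypothetical placeholders:

```
frontend front_www
    bind *:80
    mode http
    # match on the Host request header
    acl host_en hdr_dom(host) -i a.example.com
    acl host_cn hdr_dom(host) -i b.example.com
    use_backend english_site if host_en
    use_backend chinese_site if host_cn

backend english_site
    mode http
    server en1 192.168.0.11:80

backend chinese_site
    mode http
    server cn1 192.168.0.12:80
```

This is exactly the kind of content-based decision a Layer 4 device cannot make, because it never parses the HTTP headers.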
Common Layer 7 load balancers include HAProxy and Nginx. HAProxy supports two main proxy modes. TCP mode is Layer 4 (mostly used for mail servers, internal protocol communication servers, and so on); in this mode HAProxy simply forwards bidirectional traffic between the client and the server. HTTP mode is Layer 7: HAProxy analyzes the protocol and can control it by allowing, rejecting, swapping, adding, or modifying specified content in the request or response,
based on specific rules. (Versions after 1.3 introduced the frontend and backend directives: a frontend matches on the content of HTTP request headers and then directs the request to its associated backend.) Taking a common TCP application as an example again: because the load balancer has to obtain the content of the message, it must first establish the connection with the client in place of the backend server, receive the message sent by the client, and then, based on specific fields in the message together with the load-balancing algorithm configured on the load balancer, decide which internal server is finally selected.
In this whole process, the Layer 7 load balancer behaves much like a proxy server.
The entire process is shown in the following illustration.
Comparing the complete workflow of Layer 4 and Layer 7 load balancing, we can see that:
In Layer 7 mode, the load balancer establishes separate TCP connections with the client and with the backend server;
In Layer 4 mode, only one TCP connection is established.
It follows that Layer 7 load balancing places higher demands on the load-balancing device, and the processing capacity of a Layer 7 setup is necessarily lower than that of the Layer 4 mode.
Similarities and differences between HAProxy and LVS
Here is a brief summary of the similarities and differences between the two load-balancing products:
1) Both are software load-balancing products, but LVS is a software load balancer implemented in the Linux kernel, while HAProxy is a third-party application that implements software load balancing in user space.
2) LVS is a Layer 4 IP load-balancing technology, while HAProxy works at both Layer 4 and Layer 7 and can provide an integrated load-balancing solution for TCP and HTTP applications.
3) Because LVS works at the fourth layer of the OSI model, its health-monitoring capability is limited, whereas HAProxy's health monitoring is powerful, supporting checks by port, URL, script, and several other methods.
4) Although HAProxy is more feature-rich, its overall processing performance is lower than LVS's Layer 4 mode; LVS offers network throughput and connection-handling capacity close to that of hardware devices.
To sum up, HAProxy and LVS each have their advantages and disadvantages; neither is simply better. Which one to choose as the load balancer should be decided by the actual application environment.
HAProxy is chosen mainly because it has the following advantages:
1. HAProxy supports virtual hosts, implemented through the frontend directive.
2. It can make up for some of Nginx's shortcomings, such as session persistence and cookie-based routing.
3. Its support for URL-based health checks of backend servers is very helpful.
4. Like LVS, it is purely a load-balancing product; purely in terms of efficiency, HAProxy achieves better load-balancing speed than Nginx and outperforms it in concurrent processing.
5. HAProxy can load balance MySQL by detecting and balancing the backend MySQL nodes; however, when the number of backend MySQL slaves exceeds about 10, its performance falls behind LVS, so LVS plus Keepalived is recommended in that case.
6. It can match on the request URL and header information, a Layer 7 capability that LVS lacks.
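For point 5, a minimal sketch of HAProxy balancing MySQL in TCP (Layer 4) mode; the listen name, backend addresses, and scheduling choice are illustrative only:

```
listen mysql_cluster
    bind *:3306
    mode tcp
    balance roundrobin
    # "check" performs a simple TCP-level health check on the MySQL port
    server db1 192.168.0.21:3306 check
    server db2 192.168.0.22:3306 check
```

Because this runs in TCP mode, HAProxy forwards the MySQL protocol opaquely; it only checks that the port accepts connections.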
Install HAProxy
You can download the HAProxy source package from the official website, http://www.haproxy.org/. The following takes CentOS 6.6 x64 as the example operating system; the downloaded package is haproxy-1.6.9.tar.gz, and the installation process is as follows:
# tar -zxvf haproxy-1.6.9.tar.gz
# cd haproxy-1.6.9
# make TARGET=linux26 PREFIX=/usr/local/haproxy
(TARGET=linux26 corresponds to the system kernel version from uname -r; for example, for kernel 2.6.32-504.el6.x86_64, fill in 26.)
# make install PREFIX=/usr/local/haproxy
# ls /usr/local/haproxy/
doc  sbin  share
Create the configuration file directory and the log directory (which holds the PID file):
# mkdir -p /usr/local/haproxy/conf /usr/local/haproxy/logs
After HAProxy is installed there is no configuration file in the default installation directory, so copy one of the sample configuration files from the examples/ directory of the source tree into the configuration directory.
This completes the installation of HAProxy.
For the detailed HAProxy configuration reference, see http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#4.2
The HAProxy configuration is divided into five main sections:
global: process-level configuration parameters that control process and system settings before HAProxy starts
defaults: default parameters that can be inherited and used by the frontend, backend, and listen sections
frontend: matches the domain name, URI, and other attributes of incoming client requests, and applies different handling to different matches
backend: defines backend server clusters along with weights, queues, connection limits, and other options; I understand it as roughly the counterpart of Nginx's upstream block
listen: a combination of a frontend and a backend in a single section
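Putting the five sections together, a minimal configuration skeleton looks roughly like this; the values are placeholders, not a production configuration:

```
global
    maxconn 4096
    daemon

defaults
    mode http
    retries 3

frontend front_www_server
    bind *:80
    default_backend www_server

backend www_server
    server node81 192.168.0.8:80

# listen combines a frontend and a backend in one section
listen example_service
    bind *:8080
    mode tcp
    server node82 192.168.0.9:8080
```

Each section is walked through in detail below.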
global
maxconn 4096
# maximum number of connections per HAProxy process; because each connection includes a client side and a server side, the maximum number of TCP sessions for a single process will be twice this value
chroot /usr/local/haproxy
# change HAProxy's working directory to the specified directory and perform a chroot() before dropping privileges, to raise HAProxy's security level
user nobody
group nobody
# set the user and group that run HAProxy; the uid and gid keywords can be used instead, or a dedicated haproxy user can be created
daemon
# run as a daemon
nbproc 1
# number of processes HAProxy starts; from the official documentation my understanding is that this value should match the number of CPU cores on the server, e.g. a common server with two 8-core CPUs has 16 cores in total,
# so the value can be set to anything <= 16; more processes shorten each process's task queue, but too many processes increase the risk of a process crashing; here it is set to 1
pidfile /usr/local/haproxy/logs/haproxy.pid
# define the PID file for HAProxy
# global log configuration uses the log keyword: for example, "log 127.0.0.1 local0 info" sends logs to the local0 facility of the syslog service on 127.0.0.1 at level info
# ulimit-n sets the maximum number of open file descriptors; the official 1.4 documentation notes that this value is calculated automatically, so setting it is not recommended
# time format: the default unit is milliseconds
#   us  microseconds (1/1000000 s)
#   ms  milliseconds (1/1000 s)
#   s   seconds
#   m   minutes
#   h   hours
#   d   days
defaults
# (roughly a "common" section of shared settings)
mode http
# mode syntax: mode {http|tcp|health}. http is Layer 7 mode, tcp is Layer 4 mode, and health is a health-check mode that simply returns OK
log 127.0.0.1 local3 err
# log error-level messages to the local3 facility of the syslog service on 127.0.0.1
retries 3
# number of times to retry a failed connection to a backend server; once the failure count exceeds this value, the corresponding backend server is marked unavailable
option httplog
# enable logging of HTTP requests; by default HAProxy does not log HTTP requests and only logs simple lines such as a date, the log server [127.0.0.1],
# the instance name and pid [haproxy[25218]], and a message [Proxy http_80_in stopped.]
option redispatch
# when cookies are used, HAProxy inserts the serverid of the chosen backend server into the cookie to ensure session persistence; if that backend server goes down, the client's cookie is not refreshed;
# with this parameter set, the client's request is redirected to another backend server so that service remains normal
option abortonclose
# when server load is high, automatically terminate long-pending requests in the current processing queue
option dontlognull
# with this option enabled, null connections are not logged. A null connection is one made by an upstream load balancer or monitoring system that periodically connects, fetches a fixed component or page, or scans a port, to detect whether the service is alive;
# the official documentation notes that this parameter is not recommended if there is no other load balancer upstream of the service, because malicious scans and other probes from the Internet would not be recorded
option httpclose
# my understanding of this parameter: with it set, each time a request is processed HAProxy checks the Connection value in the HTTP header; if the value is not "close", HAProxy rewrites it, and if it is absent, adds "Connection: close",
# so that both the client and the server actively close the TCP connection after each transfer. A similar parameter is "option forceclose", which forcibly closes the connection on the server side, because some servers do not automatically close the TCP connection after receiving Connection: close;
# if the client does not close it either, the connection stays open until it times out
contimeout 5000
# maximum wait time for a successful connection to a server, in milliseconds by default; newer HAProxy versions use "timeout connect" instead, and this parameter is kept for backward compatibility
clitimeout 3000
# maximum wait time for the connected client to send data, in milliseconds by default; newer versions use "timeout client" instead, and this parameter is kept for backward compatibility
srvtimeout 3000
# maximum wait time for the server to respond to data sent by the client, in milliseconds by default; newer versions use "timeout server" instead, and this parameter is kept for backward compatibility
option forwardfor except 127.0.0.0/8
# insert the client's IP into the X-Forwarded-For request header, except for connections from 127.0.0.0/8
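Since contimeout, clitimeout, and srvtimeout are deprecated aliases, the same timeouts could equally be written with the newer directives; this is an equivalent sketch of just those three lines:

```
defaults
    timeout connect 5000
    timeout client  3000
    timeout server  3000
```

Mixing the old and new spellings for the same timeout should be avoided; pick one form.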
frontend front_www_server
bind *:80
# define the socket (VIP) the frontend listens on; a specific ip:80 can also be bound
mode http
# defined as HTTP mode
option httplog
# enable logging of HTTP requests; by default HAProxy only logs simple lines such as a date, the log server [127.0.0.1], the instance name and pid [haproxy[25218]], and a message [Proxy http_80_in stopped.];
# this option can also be placed in the defaults section
option forwardfor
# enable X-Forwarded-For: insert the client IP into the request header sent to the backend server, so that the backend server can obtain the client's real IP
option httpclose
# as explained in the defaults section: with this parameter, each time a request is processed HAProxy checks the Connection value in the HTTP header; if the value is not "close" it rewrites it, and if absent adds "Connection: close",
# so that client and server actively close the TCP connection after each transfer. A similar parameter is "option forceclose", which forcibly closes the server-side connection, because some servers do not automatically close the TCP connection after receiving Connection: close;
# if the client does not close it either, the connection stays open until it times out
log global
# inherit the log definition from the global section
option dontlognull
# with this option enabled, null connections (periodic probes from an upstream load balancer or monitoring system) are not logged;
# the official documentation notes that it is not recommended if there is no other load balancer upstream, because malicious scans from the Internet would not be recorded
default_backend www_server
# the backend to use when no use_backend rule matches

#acl host_www hdr_dom(host) -i www.zb.com
#acl host_img hdr_dom(host) -i img.zb.com
#use_backend htmpool if host_www
#use_backend imgpool if host_img

========== optional parameters ==========
acl static_down nbsrv(static_server) lt 1
# defines an ACL named static_down that matches when the number of live servers in backend static_server is less than 1
acl php_web url_reg /*.php$
#acl php_web path_end .php
# defines an ACL named php_web that matches when the requested URL ends in .php; either of the two forms above can be used
acl static_web url_reg /*.(css|jpg|png|jpeg|js|gif)$
#acl static_web path_end .gif .png .jpg .css .js .jpeg
# defines an ACL named static_web that matches when the requested URL ends in .css, .jpg, .png, .jpeg, .js, or .gif; either of the two forms above can be used
use_backend php_server if static_down
# if the static_down ACL matches, send the request to backend php_server
use_backend php_server if php_web
# if the php_web ACL matches, send the request to backend php_server
use_backend static_server if static_web
# if the static_web ACL matches, send the request to backend static_server
==========
frontend front_www_server
bind *:80
mode http
option httplog
option forwardfor
option httpclose
log global
option dontlognull
default_backend www_server
backend back_www_server
mode http
# set to HTTP mode
option redispatch
# when cookies are used, HAProxy inserts the serverid of the chosen backend server into the cookie to ensure session persistence; if that backend server goes down, the client's cookie is not refreshed;
# with this parameter set, the client's request is redirected to another backend server so that service remains normal; this can also be defined in the defaults section
option abortonclose
# when server load is high, automatically terminate long-pending requests in the current processing queue
balance static-rr
# set HAProxy's scheduling algorithm to static-rr (static weighted round robin)
cookie SERVERID
# allow inserting a serverid into the cookie; each server's serverid can be defined with the cookie keyword on its server line below
option httpchk GET /index.html
# enable health checks on the backend servers, determining their health via the specified HTTP request; "option httpchk HEAD / HTTP/1.0" also works, but seems to generate rather a lot of log entries
server node81 192.168.0.8
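A fully specified server line usually also carries a port, a cookie value matching the cookie directive above, a weight, and health-check parameters; a minimal example, with all values illustrative:

```
server node81 192.168.0.8:80 cookie node81 weight 3 check inter 2000 rise 2 fall 3
# check inter 2000: probe every 2000 ms
# rise 2: mark the server up after 2 consecutive successful checks
# fall 3: mark the server down after 3 consecutive failed checks
```

The cookie value (here node81) is what HAProxy inserts as the serverid for session persistence, tying this line back to "cookie SERVERID" and "option redispatch" above.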