HAProxy: reverse proxy and load balancing

Source: Internet
Author: User
Tags: haproxy

Reverse proxy server features: web caching (acceleration), reverse proxying, content routing (forwarding requests to specific servers based on traffic and content type), and transcoding. Caching reduces redundant content transmission, which cuts bandwidth usage and network bottlenecks, lowers the request load on the origin server, and shortens transmission latency. A public cache can be used by everyone; a private cache holding sensitive data is restricted to specific clients or users. Nginx can implement caching, but HAProxy cannot, so only HAProxy's reverse proxy and load balancing functions are described here.
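Since HAProxy cannot cache, caching is usually delegated to something like nginx, as noted above. A minimal nginx cache sketch (the zone name, paths, and upstream address are illustrative assumptions, not from the original) might look like:

```
# illustrative nginx cache config -- names and paths are assumptions
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=webcache:10m max_size=1g;

server {
    listen 80;
    location / {
        proxy_pass http://192.168.20.7:80;   # origin server
        proxy_cache webcache;                # use the zone defined above
        proxy_cache_valid 200 302 10m;       # cache successful responses for 10 minutes
    }
}
```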

Install HAProxy with yum. The main configuration file is haproxy.cfg. To enable logging, edit the /etc/rsyslog.conf file:

```
$ModLoad imudp
$UDPServerRun 514              # enable UDP port 514
local2.*    /var/log/haproxy.log
```

Then edit the /etc/haproxy/haproxy.cfg file (log 127.0.0.1 local2) and configure the load-balanced backend hosts:

```
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000           # maximum number of connections (client side)
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend main *:80             # method 1
    # bind *:80                # method 2
    # bind *:8080              # bind can only be used in frontend and listen sections
    # maxconn can also be defined here (or after listen): the maximum number of
    # concurrent connections for this single instance; the global section
    # defines the overall total
    default_backend websrvs

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend websrvs
    balance roundrobin
    server web1 192.168.20.7:80 check   # the server name (web1) is added to the
                                        # request sent to the backend, which is
                                        # useful when the backend has virtual hosts
    server web2 192.168.20.8:80 check
```
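After editing both files, the services need to be restarted; `haproxy -c -f <file>` syntax-checks the configuration before a restart. Assuming a CentOS/RHEL host with systemd, the steps would be roughly:

```
# install haproxy (CentOS/RHEL)
yum install -y haproxy

# apply the rsyslog change
systemctl restart rsyslog

# verify the configuration file before (re)starting haproxy
haproxy -c -f /etc/haproxy/haproxy.cfg

# start haproxy and enable it at boot
systemctl enable --now haproxy
```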

 

Several scheduling algorithms. balance specifies the scheduling algorithm; algorithms are either dynamic (weights can be adjusted at runtime) or static (weight changes do not take effect in real time):

- roundrobin: round robin; dynamic; each backend server supports up to 4128 connections.
- static-rr: round robin; static; no upper limit on connections per backend server.
- leastconn: schedules based on the number of connections on each backend server; suitable only for persistent connections; dynamic.

hash-type determines how hash-based algorithms map a hash to a server:

- map-based: modulo arithmetic; static.
- consistent: consistent hashing; dynamic.

The following four scheduling algorithms hash a request attribute and use one of the two hash types above:

- source: hashes the client's source address, divided by the total weight of the backend servers, binding a client to a server.
- uri: hashes the left part of the URI (the part before the "?") or the whole URI, divided by the total weight of the backend servers, binding the URI to a server.
- url_param: hashes the value of the specified URL parameter, divided by the total weight.
- hdr(<name>): hashes the value of the specified request header (such as User-Agent, Referer, or Host), divided by the total weight.

Example:

```
backend websrvs
    balance hdr(User-Agent)
    hash-type consistent
    server web1 192.168.20.7:80 check
    server web2 192.168.20.8:80 check
```

mode: the protocol HAProxy uses, including for health status detection. The default is tcp; the three options are tcp, http, and health. With http, the client-facing frontend and the backend communicate over HTTP.

In the frontend section you can also specify log parameters:

```
frontend main *:80
    log global
    log 127.0.0.2 local3
```

The backend is selected with use_backend, e.g. `use_backend static if url_css url_img extension_img`.

Parameters that can follow server:

- backup: marks the server as a backup; it is used only when all other servers in the load-balancing group are unavailable.
- check: enables health checking for the server. Additional parameters tune the checks:
  - inter <delay>: health check interval in milliseconds; the default is 2000. fastinter and downinter can be used to adjust the delay based on the server's state.
  - rise <count>: the number of consecutive successful checks before an offline server is considered up again.
  - fall <count>: the number of consecutive failed checks before a healthy server is marked unavailable.
- cookie <value>: sets a cookie value for this server. The value is checked on each request; requests carrying it are sent to the same server that was selected on the first request.
- maxconn <maxconn>: the maximum number of concurrent connections this server accepts; connections beyond this limit wait in the request queue until another connection is released.
- maxqueue <maxqueue>: the maximum length of the request queue.
- observe <mode>: determines health by observing the server's live traffic instead of dedicated checks. Disabled by default; the supported modes are "layer4" and "layer7" ("layer7" can only be used in HTTP proxy scenarios).
- redir <prefix>: enables redirection: all GET and HEAD requests are answered with a 302 response prefixed by <prefix>. Note that the prefix must not end with "/" and must not be a relative address, to avoid redirect loops. Example:

```
server srv1 172.16.100.6:80 redir http://imageserver.magedu.com check
```

- weight <weight>: the server's weight; the default is 1 and the maximum is 256; 0 means the server does not participate in load balancing.

The health check method can be defined with option httpchk:

```
option httpchk
option httpchk <uri>
option httpchk <method> <uri>
```

Example:

```
backend https_relay
    mode tcp
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www.lee123.com
    server apache1 192.168.1.1:443 check port 80
```

Use case:

```
server first  172.16.100.7:1080 cookie first  check inter 1000
server second 172.16.100.8:1080 cookie second check inter 1000
```

Session stickiness based on browser cookies. Key points: (1) each server has its own unique cookie ID; (2) in the backend, define:

```
backend websrvs
    balance roundrobin
    cookie SERVERID insert nocache indirect
    server web1 192.168.20.7:80 check cookie websrv1
    server web2 192.168.20.8:80 check cookie websrv2
```

Test: check whether the websrv1 keyword appears in the Cookie header of subsequent requests.

Statistics page:

```
listen statistics
    bind *:9090
    stats enable
    stats hide-version
    # stats scope
    stats uri /haproxyadmin?stats
    stats realm "HAProxy\ Statistics"
    stats auth admin:mageedu
    stats admin if TRUE
```

Recording additional information in the log: capture request header and capture response header. When the mode is http, option httplog records rich log information.

Error page redirection:

- errorfile: responds with a local file on the HAProxy host;
- errorloc, errorloc302: responds with the specified URL and status code 302; not applicable to request methods other than GET;
- errorloc303: the same, but returns status code 303.

Adding request or response message headers with reqadd and rspadd:

```
frontend main
    bind *:80
    bind *:8080
    rspadd Via:\ node1.lee.com
    default_backend websrvs
```

The response then carries the Via: header. Dynamic/static separation example:
```
frontend main
    bind *:80
    bind *:8080
    acl url_static  path_beg  -i /static /images /javascript /stylesheets
    acl url_static  path_end  -i .jpg .gif .png .css .js
    use_backend static if url_static
    default_backend appsrvs

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend static
    balance roundrobin
    server static1 192.168.20.7 check
    server static2 192.168.20.8 check

backend appsrvs
    balance roundrobin
    option forwardfor except 127.0.0.1 header X-Client
    option httpchk
    cookie SERVERID insert indirect nocache
    server web1 192.168.20.7:80 check cookie web1
    server web2 192.168.20.8:80 check cookie web2
```
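The server parameters described earlier (check tuning, weights, backup) can be combined on one line. A sketch of a backend using them together (the /health URI, the Host value, and the 192.168.20.9 spare server are illustrative assumptions) might be:

```
backend websrvs
    balance roundrobin
    # hypothetical health check URI and host
    option httpchk GET /health HTTP/1.1\r\nHost:\ www.example.com
    server web1  192.168.20.7:80 check inter 1000 rise 2 fall 3 weight 2 maxconn 2000
    server web2  192.168.20.8:80 check inter 1000 rise 2 fall 3 weight 1
    server spare 192.168.20.9:80 check backup   # used only when web1 and web2 are down
```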

 

