Nginx + keepalived: Configuring a High-Availability HTTP Cluster

Nginx is not only an excellent web server; thanks to its reverse proxy module it can also be configured as a powerful load balancer. This article describes how to configure nginx as a load balancer and build a highly available cluster with keepalived. The typical architecture of such a cluster is as follows:

There are two front-end load balancers, a master and a slave, which can work in one of two modes. In the first mode the slave stays in standby: when the master fails, the slave takes over, and once the master recovers, the slave returns to standby. In the second mode, both machines work at the same time, and each automatically takes over for the other on failure. With the first mode, the domain name resolves to a virtual IP address (VIP) that is bound to the master load balancer; when the master fails, keepalived automatically binds the VIP to the backup load balancer and arping tells the gateway to refresh its MAC address table, avoiding a single point of failure. With the second mode, each host is bound to its own VIP and the domain name resolves to both addresses through DNS round robin; if one host fails, the other binds that host's VIP as well, and again arping refreshes the gateway's MAC table to complete the failover.
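To illustrate the mechanism, the gratuitous ARP that keepalived sends during a failover looks roughly like this manual command (a sketch using the iputils arping tool; keepalived does this for you, so this is for illustration only):

arping -U -I eth0 -c 3 192.168.3.253    # announce the VIP's new location so the gateway and switches refresh their ARP caches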

The web servers in the middle act as the real servers that process requests. The backend consists of databases and a distributed file system: the databases are usually a master/slave pair, and the distributed file system keeps data synchronized between the web servers. Some deployments also split image serving onto separate backend servers.

Environment used in this article:

  • CentOS 5.5 32-bit
  • Nginx: nginx-1.0.11
  • Keepalived: keepalived-1.1.19.tar.gz
  • Master Scheduler: 192.168.3.1
  • Backup Scheduler: 192.168.3.2
  • Real server: 192.168.3.4/5/6

This article uses the first mode, with VIP 192.168.3.253.

1. Deploy nginx on the Master/Slave Servers

1. Download

wget http://nginx.org/download/nginx-1.0.11.tar.gz

2. Installation

yum -y install zlib-devel pcre-devel openssl-devel    # install dependencies
tar -zxvf nginx-1.0.11.tar.gz
cd nginx-1.0.11
./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-http_flv_module --with-http_gzip_static_module
make && make install
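Once the build finishes, it can be worth confirming which modules were compiled in; nginx's -V flag prints the version and the configure arguments:

/usr/local/nginx/sbin/nginx -V    # shows the version and the configure arguments used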

3. Configuration

Configure nginx on the master scheduler by editing nginx.conf:

vi /usr/local/nginx/conf/nginx.conf

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    # Define a pool of real server addresses; the pool is used by the
    # proxy_pass and fastcgi_pass directives.
    upstream real_server_pool {
        # If dynamic applications run on the backend, the ip_hash directive
        # hashes client requests to the same backend server, which works
        # around session sharing. It is better, however, to let the dynamic
        # application itself handle session sharing.
        #ip_hash;

        # server specifies the address and parameters of a backend server.
        # weight: default 1; the higher the weight, the more requests the
        #         server is assigned.
        # max_fails: number of failed requests allowed within fail_timeout.
        # fail_timeout: how long the server is suspended once max_fails
        #               failures have occurred.
        server 192.168.3.4:80 weight=1 max_fails=2 fail_timeout=30s;

        # down marks a server as offline so it takes no part in load
        # balancing; it is used together with ip_hash. Shown here for
        # demonstration and removed in the later tests.
        server 192.168.3.5:80 weight=1 max_fails=2 fail_timeout=30s down;

        # backup is used only when the non-backup servers are down or busy.
        # (Shown here for demonstration; removed in the later tests.)
        server 192.168.3.6:80 weight=1 max_fails=2 fail_timeout=30s backup;
    }

    server {
        listen       192.168.3.1:80;
        server_name  localhost;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        location / {
            #root   html;
            #index  index.html index.htm;

            # Use the pool of servers defined by upstream above. If a backend
            # server returns an error such as 502 or 504, the request is
            # automatically forwarded to another server in the pool.
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_pass http://real_server_pool;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}

(Note: ip_hash ensures that requests from a given client are always forwarded to the same server, so if the ip_hash directive is enabled, the weight parameter cannot be used.) Configure nginx on the slave in the same way, changing the listening IP to the slave scheduler's address:

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    upstream real_server_pool {
        #ip_hash;
        server 192.168.3.4:80 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.3.5:80 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.3.6:80 weight=1 max_fails=2 fail_timeout=30s;
    }

    server {
        listen       192.168.3.2:80;    # listening IP changed to the local address
        server_name  localhost;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        location / {
            #root   html;
            #index  index.html index.htm;
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_pass http://real_server_pool;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}

Then start nginx on both the master and the slave:

/usr/local/nginx/sbin/nginx
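The backend pool is not deployed yet, so proxied requests will still fail, but you can already confirm that each scheduler is listening (a quick check; netstat is part of net-tools on CentOS 5):

netstat -tnlp | grep :80    # should show nginx bound to 192.168.3.1:80 (or 192.168.3.2:80 on the slave)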

2. Deploy keepalived on the Master/Slave Servers

Install the dependencies:

yum -y install kernel-devel    # install dependencies

Enable route forwarding:

vi /etc/sysctl.conf
net.ipv4.ip_forward = 1    # change this parameter to 1

sysctl -p    # make the change take effect
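To confirm that forwarding is now enabled:

cat /proc/sys/net/ipv4/ip_forward    # should print 1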

First install ipvsadm:

# ipvsadm needs the kernel source tree, so create a symlink first
ln -s /usr/src/kernels/2.6.18-194.el5-i686/ /usr/src/linux

# download
wget http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.24.tar.gz
tar -zxvf ipvsadm-1.24.tar.gz
cd ipvsadm-1.24
make
make install
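A quick sanity check that ipvsadm can talk to the kernel's IPVS support (the table stays empty until keepalived populates it):

ipvsadm -Ln    # list the virtual server table numerically; empty for now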

Then install keepalived:

# download
wget http://www.keepalived.org/software/keepalived-1.1.19.tar.gz
tar -zxvf keepalived-1.1.19.tar.gz
cd keepalived-1.1.19
# --prefix=/ installs to the default locations (config file, binaries and
# init script); --with-kernel-dir points at the kernel headers keepalived needs
./configure --prefix=/ \
    --mandir=/usr/local/share/man/ \
    --with-kernel-dir=/usr/src/kernels/2.6.18-194.el5-i686/
make && make install
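Verify the installation:

keepalived -v    # prints the installed keepalived version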

Configure keepalived

Edit the configuration file /etc/keepalived/keepalived.conf:

global_defs {
    notification_email {
        cold_night@linuxzen.com          # notification mailbox; add more addresses on new lines
    }
    notification_email_from root@linuxzen.com   # sender address for notification mail
    smtp_server www.linuxzen.com         # mail server used to send notifications
    smtp_connect_timeout 30              # timeout for connecting to the smtp server
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER             # master/backup role; change to BACKUP on the standby
    interface eth0           # interface monitored by HA
    virtual_router_id 51     # virtual_router_id must be the same on master and backup
    priority 100             # priority; the master's should be higher than the backup's
    advert_int 1             # VRRP advertisement interval
    authentication {         # authentication between master and backup
        auth_type PASS       # authentication type
        auth_pass 1111       # authentication password
    }
    virtual_ipaddress {      # VIPs; to define several, add one per line
        192.168.3.253
    }
}

virtual_server 192.168.3.253 80 {
    delay_loop 6             # interval for checking real server status
    lb_algo rr
    lb_kind NAT
    nat_mask 255.255.255.0
    persistence_timeout 50   # connections from the same IP go to the same real server for 50 seconds
    protocol TCP             # use TCP to check real server status

    real_server 192.168.3.1 80 {
        weight 3             # weight
        TCP_CHECK {
            connect_timeout 10    # time out if there is no response within 10 seconds
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.3.2 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

To configure keepalived on the backup scheduler, you only need to change state MASTER to state BACKUP and lower the priority below the master's 100:

global_defs {
    notification_email {
        cold_night@linuxzen.com
    }
    notification_email_from root@linuxzen.com
    smtp_server www.linuxzen.com
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state BACKUP             # changed from MASTER to BACKUP
    interface eth0
    virtual_router_id 51     # virtual_router_id must be the same on master and backup
    priority 99              # lower than the master scheduler (any value below 100)
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.3.253
    }
}

virtual_server 192.168.3.253 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 192.168.3.1 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.3.2 80 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}

Start keepalived on the master and slave nodes:

service keepalived start
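On the master you can check that the VIP has been bound; keepalived adds it as a secondary address on eth0 (visible with the ip tool, not with plain ifconfig), and it should be absent on the backup while the master is healthy:

ip addr show eth0 | grep 192.168.3.253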

3. Test: Deploy the Backend Servers

Install nginx on the backend. For this test only one physical server is deployed, with three IP-based virtual hosts. Bind the IPs:

ifconfig eth0:1 192.168.3.4/24
ifconfig eth0:2 192.168.3.5/24
ifconfig eth0:3 192.168.3.6/24

After nginx is installed, edit the configuration file and add the following to the http block:

http {
    server {
        listen       192.168.3.4:80;
        server_name  192.168.3.4;
        location / {
            root   html/s1;
            index  index.html index.htm;
        }
    }
    server {
        listen       192.168.3.5:80;
        server_name  192.168.3.5;
        location / {
            root   html/s2;
            index  index.html index.htm;
        }
    }
    server {
        listen       192.168.3.6:80;
        server_name  192.168.3.6;
        location / {
            root   html/s3;
            index  index.html index.htm;
        }
    }
}

Create the root directories for the virtual hosts and give each one a homepage:

cd /usr/local/nginx/html/
mkdir s1 s2 s3
echo server1 > s1/index.html
echo server2 > s2/index.html
echo server3 > s3/index.html

Start nginx:

/usr/local/nginx/sbin/nginx
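Before testing through the VIP, it can help to hit each virtual host directly (assuming curl is available on the test machine):

curl http://192.168.3.4/    # should print server1
curl http://192.168.3.5/    # should print server2
curl http://192.168.3.6/    # should print server3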

Open a browser and visit http://192.168.3.253.
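The rotation is also easy to watch from a shell (again assuming curl):

for i in 1 2 3 4 5 6; do curl -s http://192.168.3.253/; done
# prints server1, server2 and server3 in turn (once the down/backup flags above are removed)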

Refreshing the page shows different content in turn: server1, server2, and server3 (in production the real servers should of course serve identical content). Now stop keepalived on the master scheduler:

pkill keepalived

View the logs of the standby Scheduler:

cat /var/log/messages
Feb 10 16:36:27 cfhost Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Feb 10 16:36:28 cfhost Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Feb 10 16:36:28 cfhost Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Feb 10 16:36:28 cfhost Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.3.253
Feb 10 16:36:28 cfhost Keepalived_vrrp: Netlink reflector reports IP 192.168.3.253 added
Feb 10 16:36:28 cfhost Keepalived_healthcheckers: Netlink reflector reports IP 192.168.3.253 added
Feb 10 16:36:33 cfhost Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.3.253

Now http://192.168.3.253 is still accessible. Note that the standby's keepalived takes over the VIP only when keepalived on the master stops; it does not fail over when a single service on a real server (such as HTTP on port 80) dies. So if the nginx process stops while the machine itself stays up, no failover happens. To cover that case, we write a script that checks the nginx status and lets keepalived do the failover:

#!/bin/bash
# filename: nsc.sh
# Watchdog: if nginx dies and cannot be restarted, kill keepalived so the
# VIP fails over to the other scheduler. The loop keeps checking forever.
while true; do
    ps aux | grep nginx | grep -v grep 2>/dev/null 1>&2    # look for an nginx process
    if [ $? -eq 0 ]; then      # nginx is alive
        sleep 5                # sleep, then check again
    else                       # nginx is down: try to restart it
        /usr/local/nginx/sbin/nginx
        ps aux | grep nginx | grep -v grep 2>/dev/null 1>&2
        if [ ! $? -eq 0 ]; then    # restart failed: kill keepalived to trigger failover
            pkill keepalived
        fi
    fi
done

Then run the script in the background:

nohup sh nsc.sh &
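To have the watchdog survive a reboot, one option is to launch it from /etc/rc.local (a sketch; /root/nsc.sh is an assumed path, adjust it to wherever you saved the script):

echo 'nohup sh /root/nsc.sh &' >> /etc/rc.local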

In this way, the cluster is highly reliable and highly available.

Original article: nginx + keepalived: configure a high-availability HTTP cluster. Thanks to the original author for sharing.
