LVS/DR + Keepalived
Although LVS was configured successfully in the previous section and load balancing works, testing showed that when a real server's nginx process is stopped, the director still forwards requests to it, so some requests fail. A mechanism is therefore needed to check the status of the real servers, and that mechanism is keepalived. Besides checking the real servers, keepalived can also monitor the backup director; in other words, keepalived provides the HA cluster function. Of course, this requires a backup director.
Keepalived and ipvsadm must also be installed on the backup director;
keepalived calls LVS to apply its own rules;
yum install -y keepalived ipvsadm
Environment setup:
Master director: 192.168.11.30, eth1 NIC
Backup director: 192.168.11.40, eth1 NIC
Real server 1: 192.168.11.100, eth0 NIC
Real server 2: 192.168.11.101, eth0 NIC
A Linux host on the 192.168.11.0/24 segment is used for curl testing;
Keepalived and ipvsadm must be installed on both the master and backup directors;
Install nginx on the two real servers (see the sketch below);
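A minimal way to set up nginx on the two real servers, assuming the EPEL repository provides the nginx package and using index pages that match the curl output shown later (an illustrative sketch, not the only way):
yum install -y epel-release && yum install -y nginx    # on rs1 and rs2
echo "rs1rs1" > /usr/share/nginx/html/index.html       # on rs1
echo "rs2rs2" > /usr/share/nginx/html/index.html       # on rs2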
After installation, edit the configuration file on the master director:
vim /etc/keepalived/keepalived.conf  # add the following:
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the backup director
    interface eth1
    virtual_router_id 51
    priority 100            # priority; the higher the value, the higher the priority; 90 on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.11.110
    }
}
virtual_server 192.168.11.110 80 {
    delay_loop 6            # check the real server status every 6 seconds
    lb_algo wlc             # weighted least-connection scheduling algorithm
    lb_kind DR              # direct routing
    persistence_timeout 0   # seconds during which connections from the same IP go to the same real server; 0 disables persistence
    protocol TCP            # use TCP to check the real server status
    real_server 192.168.11.100 80 {
        weight 100          # weight
        TCP_CHECK {
            connect_timeout 10      # consider the check failed if there is no response within 10 seconds
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.11.101 80 {
        weight 100
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
Only the following two items need to be changed in the backup director's configuration file (see the sed one-liner below):
state MASTER  ->  state BACKUP
priority 100  ->  priority 90
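Assuming the master's configuration file has been copied over to the backup, the two changes can be applied with sed (an illustrative one-liner, not part of the original steps):
scp /etc/keepalived/keepalived.conf 192.168.11.40:/etc/keepalived/    # run on the master
sed -i 's/state MASTER/state BACKUP/; s/priority 100/priority 90/' /etc/keepalived/keepalived.conf    # run on the backup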
After keepalived is configured, enable IP forwarding (required on both the master and backup directors):
echo 1 > /proc/sys/net/ipv4/ip_forward
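To make this setting survive a reboot, it can also be added to /etc/sysctl.conf (a common practice, not shown in the original steps):
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p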
Then run the /usr/local/sbin/lvs_dr_rs.sh script on the two real servers and start the nginx service:
# /etc/init.d/nginx start
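The lvs_dr_rs.sh script comes from the earlier LVS/DR setup; a minimal sketch of what such a real-server script typically contains (binding the VIP on lo and suppressing ARP replies for it) looks like this:
#!/bin/bash
vip=192.168.11.110
# bind the VIP to the loopback interface so the real server accepts packets addressed to it
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev lo:0
# keep the real server from answering ARP requests for the VIP
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce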
Finally, start the keepalived service on both directors (master first, then backup):
# /etc/init.d/keepalived start
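Optionally (not part of the original steps), both services can be set to start at boot with chkconfig, assuming the SysV init scripts used above:
chkconfig keepalived on    # on both directors
chkconfig nginx on         # on the two real servers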
In addition, when the keepalived service starts, the VIP and the ipvsadm rules are generated automatically.
Run # ip addr on the director to view the virtual IP address; plain ifconfig does not display it;
[root@dr1 keepalived]# ip addr
eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:97:c3:f6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.11.30/24 brd 192.168.11.255 scope global eth1
    inet 192.168.11.110/32 scope global eth1
    inet6 fe80::20c:29ff:fe97:c3f6/64 scope link
       valid_lft forever preferred_lft forever
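The automatically generated LVS rules can be checked with ipvsadm; with the configuration above, the listing looks roughly like this (exact counters will differ):
# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.11.110:80 wlc
  -> 192.168.11.100:80            Route   100    0          0
  -> 192.168.11.101:80            Route   100    0          0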
In curl tests from another machine on the segment, requests are distributed evenly between rs1 and rs2;
[root@localhost ~]# curl 192.168.11.110
rs1rs1
[root@localhost ~]# curl 192.168.11.110
rs2rs2
[root@localhost ~]# curl 192.168.11.110
rs1rs1
[root@localhost ~]# curl 192.168.11.110
rs2rs2
Stop nginx on rs2 and run the curl test again: all requests now go to rs1;
The removal of rs2 is also recorded in the log file /var/log/messages;
[root@rs2 ~]# /etc/init.d/nginx stop
[root@localhost ~]# curl 192.168.11.110
rs1rs1
[root@localhost ~]# curl 192.168.11.110
rs1rs1
[root@localhost ~]# curl 192.168.11.110
rs1rs1
[root@localhost ~]# curl 192.168.11.110
rs1rs1
[root@dr1 ~]# tail -2 /var/log/messages
Jun 9 23:27:19 localhost Keepalived_healthcheckers[1572]: TCP connection to [192.168.11.101]:80 failed !!!
Jun 9 23:27:19 localhost Keepalived_healthcheckers[1572]: Removing service [192.168.11.101]:80 from VS [192.168.11.110]:80
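The removal can also be confirmed on the director by listing the LVS rules again; the entry for 192.168.11.101 disappears from the virtual service (a quick check, not part of the original test):
ipvsadm -Ln | grep 192.168.11.101    # prints nothing while rs2 is down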
Start nginx on rs2 again; the log records rs2 being added back, and the curl test shows requests distributed evenly between rs1 and rs2 again;
[root@rs2 ~]# /etc/init.d/nginx start
[root@dr1 ~]# tail -2 /var/log/messages
Jun 9 23:31:38 localhost Keepalived_healthcheckers[1572]: TCP connection to [192.168.11.101]:80 success.
Jun 9 23:31:38 localhost Keepalived_healthcheckers[1572]: Adding service [192.168.11.101]:80 to VS [192.168.11.110]:80
[root@localhost ~]# curl 192.168.11.110
rs1rs1
[root@localhost ~]# curl 192.168.11.110
rs2rs2
[root@localhost ~]# curl 192.168.11.110
rs1rs1
[root@localhost ~]# curl 192.168.11.110
rs2rs2
Failover test with the backup director dr2:
Stop the keepalived service on the master; ip addr on the backup then shows the VIP bound there, which means the backup has taken over the service. The switchover is very fast;
When keepalived is started again on the master, the master rebinds the VIP and takes the service back;
[root@dr2 keepalived]# ip addr
eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:0c:29:af:73:3f brd ff:ff:ff:ff:ff:ff
    inet 192.168.11.40/24 brd 192.168.11.255 scope global eth1
    inet 192.168.11.110/32 scope global eth1
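The VRRP state change is also logged on the backup in /var/log/messages; the messages look roughly like the following (timestamps and PIDs here are illustrative):
Keepalived_vrrp[1585]: VRRP_Instance(VI_1) Transition to MASTER STATE
Keepalived_vrrp[1585]: VRRP_Instance(VI_1) Entering MASTER STATE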
The nc command can be used to check whether a port is open:
From another machine, scan 192.168.11.100, 192.168.11.101, and 192.168.11.110 to check whether port 80 is open;
# nc -z -w2 192.168.11.110 80
[root@localhost ~]# nc -z -w2 192.168.11.100 80
Connection to 192.168.11.100 80 port [tcp/http] succeeded!
[root@localhost ~]# nc -z -w2 192.168.11.101 80
Connection to 192.168.11.101 80 port [tcp/http] succeeded!
[root@localhost ~]# nc -z -w2 192.168.11.110 80
Connection to 192.168.11.110 80 port [tcp/http] succeeded!
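The same check can be scripted for all three addresses (a small convenience loop, not part of the original test):
for ip in 192.168.11.100 192.168.11.101 192.168.11.110; do
    nc -z -w2 $ip 80
done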