Let's get straight to it. The overall plan is as follows:
The experiment is carried out in virtual machines running Red Hat 5.8.
1. Resource Analysis
Before doing the experiment, we must first know which resources need to be made highly available. Because we are making the Director of the LVS DR model highly available, two resources are involved:
1) VIP: the address users access
2) ipvsadm
Once we know which resources need to be handled, the rest becomes much clearer. Before adding high availability, we must first make sure that LVS itself works normally.
2. Verify resource availability
2.1 Verify the LVS setup
1) RS Configuration
For the web configuration on the RS, refer to my blog post about LNMP. To make the experiment results easier to observe, make the web pages on the two RS different, so that the load-balancing effect is visible.
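For example, assuming the web root is /var/www/html (adjust it to wherever your nginx actually serves from), pages like these are enough to tell the two RS apart:
# echo "RS1" > /var/www/html/index.html (run on RS1)
# echo "RS2" > /var/www/html/index.html (run on RS2)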
# echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
Restricts how local addresses are announced in ARP: with level 2, the source IP used in ARP messages is always an address configured on the outgoing interface, never the VIP.
# echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
Controls replies to ARP queries for local IP addresses: with level 1, the host only answers if the target address is configured on the interface the query arrived on, so the RS will not answer ARP requests for the VIP configured on lo:0.
# ifconfig eth0 172.16.99.2/16
# ifconfig lo:0 172.16.98.1 broadcast 172.16.98.1 netmask 255.255.255.255 up
# route add -host 172.16.98.1 dev lo:0
Add a host route so that traffic addressed to 172.16.98.1 is handled through the lo:0 device.
The configuration process of RS2 is the same as above, except that its RIP is 172.16.99.3.
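On each RS, you can quickly confirm that the settings took effect before moving on, for example:
# cat /proc/sys/net/ipv4/conf/all/arp_ignore /proc/sys/net/ipv4/conf/all/arp_announce
# ifconfig lo:0
# route -n | grep 172.16.98.1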
2) DR Configuration
# yum install ipvsadm
# ifconfig eth0 172.16.99.1/16
# ifconfig eth0:0 172.16.98.1 broadcast 172.16.98.1 netmask 255.255.255.255 up
# echo 1 > /proc/sys/net/ipv4/ip_forward
Enable route forwarding
# ipvsadm -A -t 172.16.98.1:80 -s rr
# ipvsadm -a -t 172.16.98.1:80 -r 172.16.99.2 -g
# ipvsadm -a -t 172.16.98.1:80 -r 172.16.99.3 -g
Open a browser and access the VIP 172.16.98.1 to verify that LVS works; refreshing the page should alternate between the two RS pages.
# ipvsadm -L
The command above shows whether the Director is forwarding connections. Configure the other (slave) Director in the same way and verify its forwarding as well.
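From a separate test client, a short loop like the one below should show the two pages alternating (assuming curl is available; a browser refresh works just as well):
# for i in 1 2 3 4; do curl -s http://172.16.98.1/; done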
3. High-Availability Configuration
3.1 Preparations before High Availability
(Configuration on node1, which we treat as the DC)
# hostname node1.ying.com
# vim /etc/hosts
172.16.99.2 node1.ying.com node1
172.16.99.3 node2.ying.com node2
These entries let the nodes resolve each other by host name.
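The same entries need to be on node2 as well; copying the file over and doing a quick ping is enough to confirm that resolution works:
# scp /etc/hosts node2:/etc/
# ping -c 1 node2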
# ssh-keygen -t rsa
# ssh-copy-id -i .ssh/id_rsa.pub root@node2
Mutual SSH trust between the two hosts is set up to make administration easier: resources cannot be enabled or disabled directly on the local node, this has to go through the DC. Of course, the host name of the slave Director has already been changed to node2 in the same way.
Note that the DC judges node health by the arrival time of heartbeat messages exchanged between the master and slave, so the system time on the two machines must be kept consistent.
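One simple way to keep the clocks consistent is to sync both nodes against the same time server; the address below is only a placeholder for whatever NTP server you actually have:
# ntpdate 172.16.0.1 (replace 172.16.0.1 with your own NTP server)
# ssh node2 'ntpdate 172.16.0.1'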
3.2 High Availability
# ipvsadm -S > /etc/sysconfig/ipvsadm
Save the rules we defined; /etc/sysconfig/ipvsadm is the file the ipvsadm service loads when it starts.
# service ipvsadm stop
# scp /etc/rc.d/init.d/ipvsadm node2:/etc/rc.d/init.d/
# scp /etc/sysconfig/ipvsadm node2:/etc/sysconfig/
# ssh node2 'service ipvsadm stop'
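Since the cluster will be the one starting and stopping ipvsadm from now on, it is also reasonable to keep it from starting automatically at boot on both nodes, for example:
# chkconfig ipvsadm off
# ssh node2 'chkconfig ipvsadm off'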
(Install corosync + pacemaker from RPM packages)
# yum -y --nogpgcheck install cluster-glue-1.0.6-1.6.el5.i386.rpm cluster-glue-libs-1.0.6-1.6.el5.i386.rpm corosync-1.2.7-1.1.el5.i386.rpm corosynclib-1.2.7-1.1.el5.i386.rpm heartbeat-3.0.3-2.3.el5.i386.rpm heartbeat-libs-3.0.3-2.3.el5.i386.rpm libesmtp-1.0.4-5.el5.i386.rpm pacemaker-1.1.5-1.1.el5.i386.rpm pacemaker-cts-1.1.5-1.1.el5.i386.rpm pacemaker-libs-1.1.5-1.1.el5.i386.rpm perl-TimeDate-1.16-5.el5.noarch.rpm
# cd /etc/corosync
# cp corosync.conf.example corosync.conf
# vim corosync.conf
secauth: controls whether heartbeat messages are signed (authenticated). If the two HA nodes are fixed and trusted, you may choose not to enable it.
Comment out the line "to_syslog: yes", and add the following content so that corosync will start pacemaker:
service {
        ver: 0
        name: pacemaker
}
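For reference, a minimal sketch of the totem interface section that usually also needs attention; bindnetaddr must match the network your heartbeat traffic runs on (here assumed to be 172.16.0.0/16), the multicast address/port are placeholders you can keep from the example file unless another cluster already uses them, and secauth: on matches the authkey generated in the next step:
totem {
        version: 2
        secauth: on
        interface {
                ringnumber: 0
                bindnetaddr: 172.16.0.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}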
# corosync-keygen
Generates the key used to sign heartbeat messages (written to /etc/corosync/authkey).
# scp authkey corosync.conf node2:/etc/corosync/
Copy these two files to the slave Director (on which corosync has been installed in the same way) to save configuration time.
# service corosync start
# ssh node2 'service corosync start'
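A rough sanity check that the stack came up on both nodes is to look for the pacemaker daemons that corosync spawns, for example:
# ps aux | grep crmd
# ssh node2 'ps aux | grep crmd'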
# crm
Directly enters the interactive crm shell used to manage the cluster.
crm(live)# configure
crm(live)configure# primitive vip ocf:heartbeat:IPaddr params ip=172.16.98.1
crm(live)configure# primitive ipvsadm lsb:ipvsadm
Define the two resources.
crm(live)configure# colocation vip_with_ipvsadm inf: ipvsadm vip
crm(live)configure# order ipvsadm_after_vip inf: vip ipvsadm
Define the colocation and ordering constraints: ipvsadm must run on the same node as the VIP, and the VIP is brought up first.
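If you want to look over what has been defined before setting the cluster properties, the crm shell can print the pending configuration:
crm(live)configure# show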
crm(live)configure# property no-quorum-policy=ignore
A two-node cluster has no arbitration device, so the loss of quorum is ignored.
crm(live)configure# property stonith-enabled=false
Because there is no STONITH device, errors would otherwise be reported constantly; disabling it does not affect the experiment.
crm(live)configure# verify
Check the configuration for errors.
crm(live)configure# commit
Submit the configuration and synchronize it to the CIB on each node.
crm(live)configure# exit
# crm_mon
After the configuration is complete, this shows the cluster's working state: which nodes are online, which resources are running, and on which node each resource is running.
# crm node standby
Put the current node into standby, then run crm_mon again to see the resources move to the other node.
# crm node online
# ssh node2 'crm node standby'
Put node2 into standby; the resources should move back to node1.
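To confirm the resources really did move back, you can print the cluster status once and reload the page on the VIP from the client:
# crm_mon -1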