DRBD + HeartBeat architecture Experiment
HeartBeat Module
CRM (Cluster Resource Manager): the cluster's brain. It submits the node status collected by heartbeat to the CCM module to update the cluster membership, and directs the LRM to start or stop resources on a node. In short, it decides which node a resource should ultimately run on.
LRM (Local Resource Manager): the module that operates and manages resources. It monitors, starts, and stops resources. Resource scripts are searched in three storage paths:
heartbeat: /etc/ha.d/resource.d
ocf: /usr/lib/ocf/resource.d/heartbeat
lsb: /etc/init.d
CIB (Cluster Information Base): collects the raw information about resources and continuously records resource status changes, stored in cib.xml. It is the cluster configuration file, equivalent in role to cluster.conf.
CCM (Consensus Cluster Membership): maintains the membership relationship between nodes. Heartbeat itself is only a communication tool; it is CCM that makes all the nodes form a cluster.
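The LRM's three-path search order described above can be sketched as a small shell function. Everything here is illustrative: the directories and the "mysqld" script are stand-ins created under a temporary root, not the real system paths.

```shell
#!/bin/sh
# Sketch of the LRM's resource-script search order (heartbeat, then ocf,
# then lsb). The directories and the "mysqld" script are stand-ins created
# under a temporary root purely for illustration.
root=$(mktemp -d)
mkdir -p "$root/etc/ha.d/resource.d" \
         "$root/usr/lib/ocf/resource.d/heartbeat" \
         "$root/etc/init.d"
printf '#!/bin/sh\n' > "$root/etc/init.d/mysqld"   # pretend LSB init script

# Return the first matching script, searching the three paths in order.
find_resource_script() {
    for dir in "$root/etc/ha.d/resource.d" \
               "$root/usr/lib/ocf/resource.d/heartbeat" \
               "$root/etc/init.d"; do
        if [ -f "$dir/$1" ]; then
            echo "$dir/$1"
            return 0
        fi
    done
    return 1
}

found=$(find_resource_script mysqld)
echo "resolved: $found"
```

Because no heartbeat or ocf script named mysqld exists in the stand-in tree, the lookup falls through to the lsb path, mirroring how an init script ends up managing a service in haresources.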
HeartBeat configuration file
keepalive 2: interval, in seconds, at which heartbeats are sent.
warntime 10: if the slave node receives no heartbeat from the master node within 10 seconds, a warning is written to the log, but no resource switchover occurs.
deadtime 30: if no heartbeat from the master node is received within 30 seconds, the master is declared dead and the slave node immediately takes over its resources.
initdead 120: grace period after startup before a silent node is declared dead, covering the case where the master restarts after a fault and takes a long time to come back up.
udpport 694: UDP port used for the heartbeat traffic.
ucast eth1: send unicast heartbeats through the eth1 NIC.
auto_failback on: when the master server returns to normal, resources are automatically switched back from the slave node.
node node1 / node node2: the host names of the cluster nodes; each must match the output of uname -n on that machine.
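Since the node directives must match `uname -n`, a quick sanity check can be run on each machine before starting heartbeat. This is a sketch using this lab's node names (HaMaster/HaBack); substitute your own.

```shell
#!/bin/sh
# Sketch: verify the local hostname matches one of the ha.cf "node" names.
# HaMaster/HaBack are this experiment's hosts; adjust for your cluster.
nodes="HaMaster HaBack"
me=$(uname -n)
match=no
for n in $nodes; do
    [ "$n" = "$me" ] && match=yes
done
echo "local hostname: $me (listed in ha.cf: $match)"
```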
DRBD
Distributed Replicated Block Device (DRBD) is a software-based, shared-nothing, replicated storage solution that mirrors block devices (hard disks, partitions, logical volumes, etc.) between servers. DRBD works in the kernel, similar to a driver module. It is effectively storage with a network RAID 1 function: when the local system fails, an identical copy of the data remains on the remote host and can be brought back into use. The architecture of DRBD is as follows:
Lab host names: HaMaster and HaBack
HaMaster eth0: 192.168.10.20
HaBack eth0: 192.168.10.21
HaMaster eth1: 192.168.10.10
HaBack eth1: 192.168.10.20
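Heartbeat identifies nodes by host name, so both machines must be able to resolve each other's names. A minimal /etc/hosts sketch using the eth0 addresses above (assumed here to be the resolution path; adjust if names should resolve over the heartbeat interfaces instead):

```
192.168.10.20   HaMaster
192.168.10.21   HaBack
```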
Install heartbeat: yum install -y heartbeat
The experiment architecture is as follows:
Download and install DRBD (run on HaMaster and HaBack)
http://oss.linbit.com/drbd/
tar zxvf drbd-8.4.3.tar.gz
cd drbd-8.4.3
./configure --prefix=/usr/local/drbd --with-km
make KDIR=/usr/src/kernels/`uname -r`/ (specify the absolute path of the kernel source)
make install
mkdir -p /usr/local/drbd/var/run/drbd/
cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/rc.d/init.d/
chkconfig --add drbd
chkconfig drbd on
Install the DRBD module (executed on HaMaster and HaBack)
Go to the drbd subdirectory of the extracted source: cd /root/software/drbd-8.4.3/drbd
make clean
make KDIR=/usr/src/kernels/`uname -r`/
cp drbd.ko /lib/modules/`uname -r`/kernel/lib/
depmod
Add storage (the experiment uses /dev/sdb1; executed on HaMaster and HaBack)
fdisk /dev/sdb (create the partition)
fdisk -l (view the partitions)
Configure DRBD (executed on HaMaster and HaBack)
① Main configuration file of DRBD
/usr/local/drbd/etc/drbd.conf. This file pulls in the global configuration file and all resource files. The content is as follows:
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
② Modify the global_common.conf file, /usr/local/drbd/etc/drbd.d/global_common.conf, and add "protocol C;" to the net section.
③ Add a resource file named r0.res that defines a resource r0. The content is as follows:
resource r0 {
  on HaMaster {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.10.20:7789;
    meta-disk internal;
  }
  on HaBack {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   192.168.10.21:7789;
    meta-disk internal;
  }
}
④ Load the DRBD module
modprobe drbd
Verify: lsmod | grep drbd
(The dd below zeroes the start of the partition so that any leftover filesystem or metadata signature does not interfere with creating the DRBD metadata.)
dd if=/dev/zero of=/dev/sdb1 bs=1M count=100
Create the resource metadata: drbdadm create-md r0
Bring the resource up: drbdadm up r0
Start the service: /etc/init.d/drbd start
⑤ View Master/Slave node status and set Master/Slave nodes
View the node's DRBD status: cat /proc/drbd
Set the master node: drbdadm primary --force r0
Set the slave node: drbdadm secondary r0
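The cs (connection state) and ro (roles) fields of /proc/drbd are what the checks above look for. A sketch of extracting them, using a captured sample line so it runs without a live DRBD device; on a real node, read /proc/drbd instead of the sample variable:

```shell
#!/bin/sh
# Sketch: pull the connection state (cs:) and roles (ro:) out of a
# /proc/drbd status line. The sample line below stands in for a live
# device; on a real node use: sample=$(cat /proc/drbd)
sample='0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----'

cs=$(echo "$sample" | sed -n 's|.*cs:\([A-Za-z]*\).*|\1|p')
ro=$(echo "$sample" | sed -n 's|.*ro:\([A-Za-z]*/[A-Za-z]*\).*|\1|p')

echo "connection state: $cs"   # Connected
echo "roles: $ro"              # Primary/Secondary
```

A healthy pair after the primary/secondary commands shows cs:Connected with ro:Primary/Secondary on the master and ro:Secondary/Primary on the slave.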
⑥ Format and mount drbd
mkfs.ext3 /dev/drbd0
mkdir /db
mount /dev/drbd0 /db
⑦ Test
Manually switch the master/slave nodes and view the status through cat /proc/drbd.
Install and configure HeartBeat (executed on HaMaster and HaBack)
① Installation: yum -y install heartbeat
② Copy the template files from the installation path
cd /usr/share/doc/heartbeat-2.1.3
cp ha.cf authkeys haresources /etc/ha.d/
③ Configure ha.cf
logfile /var/log/ha-log
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
ucast eth1 192.168.10.10   # IP address of the peer's heartbeat NIC
node HaMaster
node HaBack
④ Configure authkeys
auth 1
1 crc
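crc provides only integrity checking, not authentication, so it is suitable only for a trusted, dedicated heartbeat link. For links that are not fully trusted, an sha1-keyed authkeys file can be generated instead; this is a sketch writing to a /tmp example path (the real file is /etc/ha.d/authkeys):

```shell
#!/bin/sh
# Sketch: generate an sha1-keyed authkeys file with random key material.
# Writes to an example path in /tmp; the real file is /etc/ha.d/authkeys.
key=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
printf 'auth 1\n1 sha1 %s\n' "$key" > /tmp/authkeys.example
chmod 600 /tmp/authkeys.example   # heartbeat refuses world-readable keys
cat /tmp/authkeys.example
```

The same file, with the same key, must be placed on both nodes.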
⑤ Configure haresources
HaMaster drbddisk::r0 Filesystem::/dev/drbd0::/db::ext3 mysqld
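The single haresources line packs several fields; an annotated reading (the comments are illustrative, the real file keeps the whole group on one line):

```
# HaMaster        - node that normally owns this resource group
# drbddisk::r0    - promote DRBD resource r0 to primary
# Filesystem::/dev/drbd0::/db::ext3
#                 - mount /dev/drbd0 on /db as ext3
# mysqld          - LSB init script started last
HaMaster drbddisk::r0 Filesystem::/dev/drbd0::/db::ext3 mysqld
```

On takeover, heartbeat starts the resources left to right (DRBD first, MySQL last); on release it stops them in reverse order.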
Test
Start DRBD: service drbd start (on HaMaster and HaBack)
Start heartbeat: service heartbeat start (on HaMaster and HaBack)
Use tail -f /var/log/messages to watch the service startup logs
Use mount to check that /dev/drbd0 is mounted on HaMaster, while nothing is mounted on HaBack.
Use /etc/init.d/mysqld status to check whether MySQL is started.
Stop heartbeat on HaMaster and check whether all resources can be taken over by HaBack.
Restart heartbeat on HaMaster to check whether resources can be taken back from HaBack.