First, the environment
System: CentOS 6.4 x86_64, minimal installation
node1:192.168.1.13
node2:192.168.1.14
vip:192.168.1.15
nfs:192.168.1.10
Second, the basic configuration
Perform the same configuration on node1 and node2.
# Disable iptables and SELinux
[root@node1 ~]# getenforce
Disabled    # make sure this reports Disabled
[root@node1 ~]# service iptables stop
# Configure local hosts resolution
[root@node1 ~]# echo "192.168.1.13 node1" >> /etc/hosts
[root@node1 ~]# echo "192.168.1.14 node2" >> /etc/hosts
[root@node1 ~]# cat /etc/hosts
127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.13 node1
192.168.1.14 node2
# Configure the EPEL repository
[root@node1 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
[root@node1 ~]# sed -i 's@mirrorlist@#mirrorlist@g' /etc/yum.repos.d/epel.repo
[root@node1 ~]# sed -i 's@#baseurl@baseurl@g' /etc/yum.repos.d/epel.repo
# Synchronize time
[root@node1 ~]# yum install ntp -y
[root@node1 ~]# echo "*/10 * * * * /usr/sbin/ntpdate asia.pool.ntp.org &>/dev/null" > /var/spool/cron/root
[root@node1 ~]# ntpdate asia.pool.ntp.org
21 Jun 17:32:45 ntpdate[1561]: step time server 211.233.40.78 offset -158.552839 sec
[root@node1 ~]# hwclock -w
# Configure SSH trust
[root@node1 ~]# ssh-keygen
[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
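The `echo … >> /etc/hosts` commands above append blindly, so running them twice leaves duplicate entries. A minimal idempotent variant, sketched here against a scratch file (`/tmp/hosts.demo` is only for illustration, not part of the original procedure):

```shell
# Idempotent host-entry helper: append only if the hostname is not present yet.
# Demonstrated against a scratch file instead of the real /etc/hosts.
HOSTS=/tmp/hosts.demo
printf '127.0.0.1 localhost\n' > "$HOSTS"

add_host() {
    # $1 = IP address, $2 = hostname
    grep -qw "$2" "$HOSTS" || echo "$1 $2" >> "$HOSTS"
}

add_host 192.168.1.13 node1
add_host 192.168.1.14 node2
add_host 192.168.1.13 node1   # second call is a no-op

grep -c node1 "$HOSTS"        # node1 appears exactly once
```

The `-w` flag makes grep match whole words, so `node1` does not falsely match a hypothetical `node10` entry.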
Third, installation and configuration of Heartbeat
(1). Install Heartbeat
Run the installation on both node1 and node2:
[root@node1 ~]# yum install heartbeat -y
(2). Configure ha.cf
[root@node1 ~]# cd /usr/share/doc/heartbeat-3.0.4/
[root@node1 heartbeat-3.0.4]# cp authkeys ha.cf haresources /etc/ha.d/
[root@node1 heartbeat-3.0.4]# cd /etc/ha.d/
[root@node1 ha.d]# ls
authkeys  ha.cf  harc  haresources  rc.d  README.config  resource.d  shellfuncs
[root@node1 ha.d]# egrep -v "^$|^#" /etc/ha.d/ha.cf
logfile /var/log/ha-log
logfacility local1
keepalive 2
deadtime 30
warntime 10
initdead 120
mcast eth0 225.0.10.1 694 1 0
auto_failback on
node node1
node node2
crm no
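For reference, the effective directives above mean the following (the annotations are ours, not from the original post):

```
logfile /var/log/ha-log        # cluster log file
logfacility local1             # syslog facility to log through
keepalive 2                    # heartbeat interval, in seconds
deadtime 30                    # declare a peer dead after 30 s of silence
warntime 10                    # log a "late heartbeat" warning after 10 s
initdead 120                   # extra grace period at boot (should be >= 2 * deadtime)
mcast eth0 225.0.10.1 694 1 0  # heartbeat over multicast: iface, group, UDP port, TTL, loop
auto_failback on               # resources move back to the primary when it recovers
node node1                     # cluster members; names must match `uname -n`
node node2
crm no                         # use haresources-style resource management, not the CRM
```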
(3). Configure authkeys
[root@node1 ha.d]# dd if=/dev/random bs=512 count=1 | openssl md5
0+1 records in
0+1 records out
21 bytes (21 B) copied, 3.1278e-05 s, 671 kB/s
(stdin)= 4206bd8388c16292bc03710a0c747f59
[root@node1 ha.d]# grep -v ^# /etc/ha.d/authkeys
auth 1
1 md5 4206bd8388c16292bc03710a0c747f59
# Change the permissions of the authentication file to 600
[root@node1 ~]# chmod 600 /etc/ha.d/authkeys
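Note that `/dev/random` can block for a long time on a freshly installed, headless server when the entropy pool is low; `/dev/urandom` is a common non-blocking substitute for generating the shared key. A minimal sketch of producing a key string like the one above (`/tmp/authkeys.demo` is a demo path, not the real `/etc/ha.d/authkeys`):

```shell
# Generate a random MD5 digest suitable for the authkeys file.
# /dev/urandom is used so the command never blocks waiting for entropy.
key=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | openssl md5 | awk '{print $NF}')
echo "$key"

# Write an authkeys-style file and lock down its permissions,
# mirroring the chmod 600 step from the article.
printf 'auth 1\n1 md5 %s\n' "$key" > /tmp/authkeys.demo
chmod 600 /tmp/authkeys.demo
```

Heartbeat refuses to start if the authkeys file is readable by anyone other than root, which is why the `chmod 600` step is not optional.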
(4). Configure haresources
[root@node1 ha.d]# grep -v ^# /etc/ha.d/haresources
node1 IPaddr::192.168.1.15/24/eth0
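The haresources line packs several values into one entry: the preferred node, the resource agent (`IPaddr`), and then the VIP, prefix length, and interface joined by `::` and `/`. A small sketch of how that line decomposes, using only POSIX parameter expansion:

```shell
# Decompose a haresources entry of the form:
#   <preferred-node> <agent>::<ip>/<prefix>/<iface>
line='node1 IPaddr::192.168.1.15/24/eth0'

node=${line%% *}     # preferred (primary) node for the resource
spec=${line#* }      # agent::parameters
agent=${spec%%::*}   # resource agent script (looked up in /etc/ha.d/resource.d, then /etc/init.d)
params=${spec#*::}   # 192.168.1.15/24/eth0
ip=${params%%/*}     # the virtual IP
rest=${params#*/}
prefix=${rest%%/*}   # netmask prefix length
iface=${rest#*/}     # interface the VIP is added to

echo "node=$node agent=$agent ip=$ip prefix=$prefix iface=$iface"
```

Heartbeat itself performs this parsing when it hands the parameters to the `IPaddr` agent; the snippet is only meant to make the field layout explicit.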
(5). Start Heartbeat
[root@node1 ha.d]# scp authkeys haresources ha.cf node2:/etc/ha.d/
# Start the service on node1
[root@node1 ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO: Resource is stopped
Done.
[root@node1 ~]# chkconfig heartbeat off
# Note: boot-time startup is disabled, so after a server reboot heartbeat must be started manually
# Start the service on node2
[root@node2 ~]# /etc/init.d/heartbeat start
# Check the result
[root@node1 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.15/24 brd 192.168.1.255 scope global secondary eth0
# The VIP is on the primary node
[root@node2 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.14/24 brd 192.168.1.255 scope global eth0
# No VIP on the standby node
(6). Test Heartbeat
Normal state
# node1 information
[root@node1 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.15/24 brd 192.168.1.255 scope global secondary eth0
# The VIP is on the primary node
# node2 information
[root@node2 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.14/24 brd 192.168.1.255 scope global eth0
# No VIP on the standby node
Simulate the primary node going down
# Stop the heartbeat service on the primary node node1
[root@node1 ~]# /etc/init.d/heartbeat stop
Stopping High-Availability services: Done.
[root@node1 ~]# ip a | grep eth0
# After heartbeat stops on the primary node, the VIP resource is taken over by the standby
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global eth0
# Check the resources on the standby node node2
[root@node2 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.14/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.15/24 brd 192.168.1.255 scope global secondary eth0
Recover the heartbeat service on the primary node
[root@node1 ~]# /etc/init.d/heartbeat start
Starting High-Availability services: INFO: Resource is stopped
Done.
# After the heartbeat service on the primary node recovers, it takes the resources back
[root@node1 ~]# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.13/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.15/24 brd 192.168.1.255 scope global secondary eth0
# Check the standby node
[root@node2 ~]# ip a | grep eth0
# The VIP resource has been removed
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    inet 192.168.1.14/24 brd 192.168.1.255 scope global eth0
Fourth, installation and deployment of DRBD
(1). Partition the hard disk (identical on node1 and node2)
[root@node1 ~]# fdisk /dev/sdb
# Note: /dev/sdb is divided into two partitions, /dev/sdb1 and /dev/sdb2; /dev/sdb1 = 19G
[root@node1 ~]# partprobe /dev/sdb
# Format the partition
[root@node1 ~]# mkfs.ext4 /dev/sdb1
# Note: the sdb2 partition holds the meta data and does not need to be formatted
[root@node1 ~]# tune2fs -c -1 /dev/sdb1
# Note: set the maximum mount count to -1, disabling the forced filesystem check
(2). Install DRBD
Since our system is CentOS 6.4, we also need to install the kernel development packages, whose version must match `uname -r`. The packages were extracted from the system installation media; that process is omitted here. The installation is the same on node1 and node2, so only node1's is shown.
# Install the kernel development files
[root@node1 ~]# rpm -ivh kernel-devel-2.6.32-358.el6.x86_64.rpm kernel-headers-2.6.32-358.el6.x86_64.rpm
[root@node1 ~]# yum install drbd84 kmod-drbd84 -y
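Loading `kmod-drbd84` fails with a module-not-found error if the kmod package was built for a different kernel than the one actually running. A small hedged sanity check (the `expected` value is the version from this article's rpm names, purely illustrative):

```shell
# Compare the running kernel release against the version the DRBD kmod
# package targets; a mismatch means `modprobe drbd` will fail later.
running=$(uname -r)
expected="2.6.32-358.el6.x86_64"   # illustrative: taken from the rpm names above

if [ "$running" = "$expected" ]; then
    echo "kernel matches: $running"
else
    echo "WARNING: running $running but packages target $expected"
fi
```

On the real nodes you would take `expected` from `rpm -q kernel-devel` rather than hard-coding it.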
(3). Configuring DRBD
a. Modify the global configuration file
[root@node1 ~]# egrep -v "^$|^#|^[[:space:]]+#" /etc/drbd.d/global_common.conf
global {
    usage-count no;
}
common {
    protocol C;
    handlers {
    }
    startup {
    }
    options {
    }
    disk {
        on-io-error detach;
        no-disk-flushes;
        no-md-flushes;
        resync-rate 200M;
    }
    net {
        sndbuf-size 512k;
        max-buffers 8000;
        unplug-watermark 1024;
        max-epoch-size 8000;
        cram-hmac-alg "sha1";
        shared-secret "weyee2014";
        after-sb-0pri disconnect;
        after-sb-1pri disconnect;
        after-sb-2pri disconnect;
        rr-conflict disconnect;
    }
}
b. Adding resources
[root@node1 ~]# cat /etc/drbd.d/nfsdata.res
resource nfsdata {
    on node1 {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        address   192.168.1.13:7789;
        meta-disk /dev/sdb2 [0];
    }
    on node2 {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        address   192.168.1.14:7789;
        meta-disk /dev/sdb2 [0];
    }
}
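Both `on` sections must reference the same DRBD device and use unique `IP:port` pairs; on the cluster itself `drbdadm dump nfsdata` validates the file. As a hedged stand-in that runs anywhere, here is a purely textual sanity check against a local copy of the resource definition:

```shell
# Write a copy of the resource definition and sanity-check it textually.
# (On the real nodes you would run `drbdadm dump nfsdata` instead.)
cat > /tmp/nfsdata.res <<'EOF'
resource nfsdata {
    on node1 {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        address   192.168.1.13:7789;
        meta-disk /dev/sdb2 [0];
    }
    on node2 {
        device    /dev/drbd1;
        disk      /dev/sdb1;
        address   192.168.1.14:7789;
        meta-disk /dev/sdb2 [0];
    }
}
EOF

# Every node section needs an address, and the addresses must be unique.
addresses=$(awk '/address/ {print $2}' /tmp/nfsdata.res | tr -d ';')
echo "$addresses"
[ "$(echo "$addresses" | sort -u | wc -l)" -eq 2 ] && echo "addresses unique"
```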
c. Copy the configuration files to node2, load the DRBD module, and initialize the meta data
[root@node1 ~]# scp global_common.conf nfsdata.res node2:/etc/drbd.d/
[root@node1 ~]# depmod
[root@node1 ~]# modprobe drbd
[root@node1 ~]# lsmod | grep drbd
drbd                  365931  0
libcrc32c               1246  1 drbd
# Initialize the meta data on node1
[root@node1 ~]# drbdadm create-md nfsdata
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
# Load the module and initialize the meta data on node2
[root@node2 ~]# depmod
[root@node2 ~]# modprobe drbd
[root@node2 ~]# lsmod | grep drbd
drbd                  365931  0
libcrc32c               1246  1 drbd
[root@node2 ~]# drbdadm create-md nfsdata
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
d. Start DRBD on node1 and node2
# Operations on node1
This article is from the "ly36843" blog, please be sure to keep this source http://ly36843.blog.51cto.com/3120113/1664020
Heartbeat+drbd+nfs High Availability