Install and configure DRBD in CentOS 6.5
Lab environment:
(1) CentOS 6.5
(2) The 163 (NetEase) yum repository (configuration steps below)
(3) An identical extra virtual hard disk added to each of the two virtual machines (10 GB in this example)
Operating System    Host name         IP address      DRBD disk
CentOS 6.5          local.aaa.com     192.168.1.13    /dev/sdb4
CentOS 6.5          local2.aaa.com    192.168.1.12    /dev/sdb4
Note:
(1) Configure the IP addresses. If DNS is in use, make sure each VM's hostname is a proper FQDN that resolves normally; otherwise add entries to the hosts file (/etc/hosts).
(2) Disable SELinux (setenforce 0) and stop iptables (service iptables stop), or write appropriate firewall rules.
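If you go the hosts-file route, the entries on both nodes would look like this (using the hostnames and addresses from the table above):

```
192.168.1.13    local.aaa.com
192.168.1.12    local2.aaa.com
```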
I. Prepare DRBD
1. yum Configuration
(1) Go to the yum repository configuration directory: cd /etc/yum.repos.d
(2) Back up the repository file that ships with the system: mv CentOS-Base.repo CentOS-Base.repo.bak
(3) Download the 163 (NetEase) repository file: wget http://mirrors.163.com/.help/CentOS6-Base-163.repo
(4) Rename the file: mv CentOS6-Base-163.repo CentOS-Base.repo
(5) After switching repositories, refresh the yum metadata so the change takes effect immediately: yum clean all && yum makecache
2. Install DRBD
(1) First upgrade the kernel, then reboot: yum -y update kernel
Also install the kernel headers: yum -y install kernel-devel
(2) Install the ELRepo repository: rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
(3) Install DRBD: yum -y install drbd83-utils kmod-drbd83
(4) Load the DRBD module into the kernel: modprobe drbd
(5) Check whether DRBD loaded successfully: lsmod | grep drbd
Expected output: drbd 332493 0
(6) modprobe -l | grep -i drbd shows the module path.
After installation, the startup script is available at /etc/init.d/drbd.
3. Initialize the disk
(1) First run fdisk -l to identify the device name of the newly added virtual disk.
(2) Partition it with fdisk /dev/<device>; it does not need to be formatted yet (for details, see http://liumingyuan.blog.51cto.com/9065923/1604923)
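The interactive fdisk session can also be scripted. A minimal sketch, assuming the new disk is /dev/sdb and that partition 4 should span the whole disk (double-check the device name first, since this rewrites the partition table):

```shell
# WARNING: destructive -- verify the device with fdisk -l before running.
# Feed fdisk the keystrokes you would otherwise type interactively:
# n = new partition, p = primary, 4 = partition number,
# two blank lines = accept the default first/last cylinder, w = write and quit.
printf 'n\np\n4\n\n\nw\n' | fdisk /dev/sdb
partprobe /dev/sdb   # ask the kernel to re-read the partition table
```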
II. DRBD Configuration
(A DRBD configuration has three sections: global, common, and resource. At runtime, DRBD reads its configuration from /etc/drbd.conf by default. This file describes DRBD's configuration parameters and the mapping between devices and hard disk partitions. It is empty by default, but a sample configuration file is included in the DRBD source package.)
Conventionally, global_common.conf (located in /etc/drbd.d/, the directory used in this lab) contains only the global and common sections (think of it as the global configuration), while each /etc/drbd.d/*.res file defines one resource (with one sub-section per host).
All of the DRBD configuration could instead go into drbd.conf itself, but that becomes messy once there are many resources.
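With this split layout, /etc/drbd.conf itself only needs to pull in the two kinds of files. If yours is empty, two include directives are enough:

```
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
```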
(1) In this split configuration, first edit /etc/drbd.d/global_common.conf. The content is as follows:
global {
    usage-count no;        # do not participate in DRBD usage statistics (enabled by default)
}
common {
    protocol C;            # DRBD's third sync protocol: a write is complete only after the remote host confirms it
    syncer { rate 200M; }  # maximum network bandwidth for synchronization between primary and secondary
    handlers {
        pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
        pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
        local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
        fence-peer "/usr/lib64/heartbeat/drbd-peer-outdater -t 5";
        pri-lost "echo pri-lost. Have a look at the log files. | mail -s 'DRBD Alert' root";
        split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
    }
    net {                  # authentication algorithm and shared secret used for DRBD synchronization
        cram-hmac-alg "sha1";
        shared-secret "MySQL-HA";
    }
    disk {                 # use dopd (drbd outdate-peer daemon) fencing so a peer with outdated data cannot be promoted
        on-io-error detach;
        fencing resource-only;
    }
    startup {
        wfc-timeout 120;
        degr-wfc-timeout 120;
    }
}
(2) Next, create a file with the .res suffix in the /etc/drbd.d/ directory. The content is as follows:
resource r0 {                      # define a resource named r0
    on local.aaa.com {             # each host section starts with "on" followed by that host's hostname
        device /dev/drbd0;         # the logical device DRBD presents
        disk /dev/sdb4;            # the disk partition backing /dev/drbd0
        address 192.168.1.13:7788; # address and listening port for communication with the peer
        meta-disk internal;        # store DRBD metadata internally, on the same disk
    }
    on local2.aaa.com {
        device /dev/drbd0;
        disk /dev/sdb4;
        address 192.168.1.12:7788;
        meta-disk internal;
    }
}
(3) Create a haclient group and set permissions. This is needed because the drbd-peer-outdater fence-peer program is used: with this mechanism, the dopd heartbeat plug-in must be able to invoke drbdsetup and drbdmeta with root privileges.
The commands are as follows:
groupadd haclient
chgrp haclient /sbin/drbdsetup
chmod o-x /sbin/drbdsetup
chmod u+s /sbin/drbdsetup
chgrp haclient /sbin/drbdmeta
chmod o-x /sbin/drbdmeta
chmod u+s /sbin/drbdmeta
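The same permission setup can be written as a loop over the two helper binaries (equivalent to the commands above):

```shell
for tool in /sbin/drbdsetup /sbin/drbdmeta; do
  chgrp haclient "$tool"   # owned by the haclient group
  chmod o-x "$tool"        # others may not execute it
  chmod u+s "$tool"        # setuid root, so dopd can call it with root privileges
done
```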
(4) Copy the configuration files to the other node (run on local2): scp 192.168.1.13:/etc/drbd.d/* /etc/drbd.d/
Then create the group and set the same permissions there: groupadd haclient
chgrp haclient /sbin/drbdsetup
chmod o-x /sbin/drbdsetup
chmod u+s /sbin/drbdsetup
chgrp haclient /sbin/drbdmeta
chmod o-x /sbin/drbdmeta
chmod u+s /sbin/drbdmeta
(5) Before starting DRBD, create the metadata block that holds DRBD's bookkeeping information on the designated partition (sdb4) on each of the two hosts:
drbdadm create-md r0 (r0 is the resource defined earlier), or run drbdadm create-md all
The expected output is:
Writing meta data...
Initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.
(6) Start the DRBD service on both nodes using the script at /etc/init.d/drbd:
/etc/init.d/drbd start
(7) Run cat /proc/drbd
The output looks like this:
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 14:51:37
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:10482024
Output explanation:
ro is the role information; when DRBD starts for the first time, both nodes are Secondary by default.
ds is the disk state; "Inconsistent/Inconsistent" means the two nodes' disk data are not yet identical.
ns is the amount of data sent over the network.
dw is the disk write count.
dr is the disk read count.
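When scripting health checks, it helps to pull just the connection, role, and disk states out of /proc/drbd. An illustrative helper (not part of the original setup; the field layout follows the DRBD 8.3 sample output above):

```shell
# Extract the cs:/ro:/ds: fields for resource 0 from /proc/drbd-style output.
drbd_status() {
    # $1: path to a /proc/drbd-style file (normally /proc/drbd)
    awk '/^ *0:/ {
        s = ""
        for (i = 1; i <= NF; i++)
            if ($i ~ /^(cs|ro|ds):/)
                s = (s == "" ? $i : s " " $i)
        print s
    }' "$1"
}

# Example: drbd_status /proc/drbd
# e.g. on first start (per the sample above):
# cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent
```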
(8) Set the primary node. By default there is no distinction between primary and secondary, so one must be chosen. On the host that should become primary, run:
drbdsetup /dev/drbd0 primary -o, or alternatively drbdadm -- --overwrite-data-of-peer primary all
After this first promotion, later changes of the primary use a different command:
/sbin/drbdadm primary r0 or /sbin/drbdadm primary all
Wait a moment, then check /proc/drbd again.
The output is as follows:
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 14:51:37
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:10482024 nr:0 dw:0 dr:10482696 al:0 bm:640 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
The ro state changes to Primary/Secondary, and the ds state changes to UpToDate/UpToDate, meaning the two nodes are now fully synchronized.
(9) Create a filesystem on the DRBD device and mount it on the primary node:
mkfs.ext4 /dev/drbd0
mount /dev/drbd0 /mnt (mount it wherever you need)
(10) Test the DRBD data mirror
Create a file under the mount point /mnt on the primary node:
dd if=/dev/zero of=/mnt/ceshi.tmp bs=10M count=20
Then check whether the file has been replicated to the backup host.
First stop the DRBD service there to guarantee data consistency:
/etc/init.d/drbd stop
mount /dev/sdb4 /mnt (mount the physical partition /dev/sdb4 directly, because DRBD attaches the DRBD device to the system at startup)
ls /mnt to see the file
When the check is complete, umount the mount point and start the service again.
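To confirm the copies are byte-identical rather than merely present, you can compare checksums (run the first command on the primary before stopping DRBD, the second on the secondary after mounting /dev/sdb4):

```shell
# On the primary:
md5sum /mnt/ceshi.tmp
# On the secondary, after mounting /dev/sdb4 on /mnt:
md5sum /mnt/ceshi.tmp
# The two digests must be identical if replication worked.
```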