About DRBD
DRBD is a software-based, shared-nothing storage replication solution that mirrors the content of block devices between servers. It is implemented as a kernel module and does its work inside the kernel. How it works: a block device is set aside on each node, and whenever data is written to the device on the primary node, it is also sent over the network to the other node. The peer node receives that data and writes it to its own block device, keeping it a mirror of the primary's.
There are two modes of working with DRBD (only the first one is described here):
1. Single-primary mode (primary/secondary)
2. Dual-primary mode (primary/primary)
In the normal disk I/O path, data written through the file system first lands in the cache and is later flushed to disk by the disk scheduler (asynchronously). DRBD hooks in before the modified data is handed to the disk scheduler: it sends a copy over TCP to the other node, where the corresponding service (the DRBD service) receives the data and writes it to the peer storage device. The DRBD architecture is shown below:
[Figure: DRBD architecture diagram]
DRBD supports three replication protocols (how to select one in the configuration is sketched after this list):
Protocol A: asynchronous replication. A write is reported as complete as soon as it has been written locally; data still in flight to the peer may be lost on failure.
Protocol B: memory-synchronous (semi-synchronous) replication. A write is reported as complete once the peer node has received the data (the data may still be only in the peer's memory).
Protocol C: synchronous replication. A write is reported as complete only after the data has been written to the peer node's disk (highest reliability, lowest performance).
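The protocol is chosen with the protocol keyword in the DRBD configuration, either in the common section or inside a specific resource. A minimal sketch (the configuration later in this article uses protocol B):
common {
    protocol C;    # could also be A or B; see the trade-offs above
}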
Implementation process
1) Prerequisites: first make sure that 1. the clocks of the two nodes are synchronized, and 2. the nodes can reach each other by host name.
Keep the /etc/hosts file identical on both nodes:
[root@node1 ~]# vim /etc/hosts
192.168.1.126 node1.xiaoxiao.com node1
192.168.1.127 node2.xiaoxiao.com node2
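Both prerequisites can be met with standard tools; a minimal sketch, assuming an NTP server is reachable (pool.ntp.org here is only a placeholder) and that password-less SSH from node1 to node2 is wanted for the later scp step:
[root@node1 ~]# ntpdate pool.ntp.org        # one-off clock sync; run ntpd/chronyd for ongoing synchronization
[root@node1 ~]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
[root@node1 ~]# ssh-copy-id root@node2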
2) Install the packages. The required RPMs are drbd (the user-space tools) and drbd-kmdl (the kernel module). Make sure the drbd and drbd-kmdl versions match each other, and that the drbd-kmdl build matches the running system's kernel version. The packages can be downloaded as needed from ftp://rpmfind.net/linux/atrpms/.
[root@node1 ~]# rpm -ivh drbd-8.4.3-33.el6.x86_64.rpm drbd-kmdl-2.6.32-431.el6-8.4.3-33.el6.x86_64.rpm
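Since drbd-kmdl is built against a specific kernel, it is worth confirming that the installed packages line up with the running kernel; a quick check:
[root@node1 ~]# uname -r               # e.g. 2.6.32-431.el6.x86_64
[root@node1 ~]# rpm -qa | grep drbd    # the drbd-kmdl package name should carry the same kernel version string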
3) Configure DRBD
The main DRBD configuration file is /etc/drbd.conf, which uses include statements to pull in the global_common.conf file and all files ending in .res from the /etc/drbd.d directory. global_common.conf defines the global and common sections, and each .res file defines one resource.
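The stock /etc/drbd.conf shipped with the package normally contains nothing but these two include statements, so it usually needs no editing:
include "drbd.d/global_common.conf";
include "drbd.d/*.res";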
A simple example is as follows:
[root@node1 ~]# vim /etc/drbd.d/global_common.conf
global {
    usage-count no;
    # minor-count dialog-refresh disable-ip-verification
}
common {
    protocol B;                    # replication protocol to use
    handlers {
    }
    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
    }
    options {
        # cpu-mask on-no-data-accessible
    }
    disk {
        on-io-error detach;        # when the disk backing a resource has an I/O error, detach it from the mirror
    }
    net {
        cram-hmac-alg "sha1";      # algorithm used for message authentication between the peers
        shared-secret "BABY_DRBD"; # shared secret
    }
    syncer {
        rate 1000M;                # maximum synchronization rate, so resync does not consume too much bandwidth
    }
}
Define a resource in /etc/drbd.d/web.res:
resource dbdata {                      # dbdata is the resource name
    # protocol C;                      # the protocol can also be set per resource
    on node1.xiaoxiao.com {            # host name
        device    /dev/drbd0;          # name of the corresponding drbd device
        disk      /dev/sdb1;           # backing device used for the drbd mirror
        address   192.168.1.126:7789;  # address and port the resource listens on; the default port is 7789
        meta-disk internal;            # where drbd metadata is stored; internal means on the backing device itself
    }
    on node2.xiaoxiao.com {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   192.168.1.127:7789;
        meta-disk internal;
    }
}
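The backing device /dev/sdb1 referenced above must already exist on both nodes as a raw partition of the same size, with no file system on it; a minimal sketch, assuming /dev/sdb is a spare disk:
[root@node1 ~]# fdisk /dev/sdb       # create a primary partition (e.g. 10G) and write the partition table
[root@node1 ~]# partprobe /dev/sdb   # make the kernel re-read the partition table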
These configuration files must be identical on both nodes, so you can copy everything you just configured to the other node over SSH:
[root@node1 ~]# scp /etc/drbd.d/* node2:/etc/drbd.d/
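Before initializing anything it can help to confirm that the configuration parses cleanly on both nodes; drbdadm dump re-reads the configuration and prints the parsed resource, reporting any syntax errors:
[root@node1 ~]# drbdadm dump dbdata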
4) Initialize the defined resource and start the service on both nodes
Create the DRBD metadata for the resource (execute on both nodes):
[root@node1 ~]# drbdadm create-md dbdata    # the resource name must match the one in the configuration file
Start the service on both nodes:
[root@node1 ~]# service drbd start
View the status (also available through drbd-overview):
[root@node1 ~]# cat /proc/drbd
version: 8.4.3 (api:1/proto:86-101)
GIT-hash: 89a294209144b68adb3ee85a73221f964d3ee515 build by [email protected], 2013-11-29 12:28:00
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:505964
ro:Secondary/Secondary          # both nodes are in the Secondary role
ds:Inconsistent/Inconsistent    # the data on the two nodes is not yet consistent
In primary/secondary mode, only the primary node can mount and use the device; the secondary node serves purely as a backup. So to use the device, one of the nodes must be promoted to primary:
[root@node1 ~]# drbdadm primary --force dbdata
Viewing with drbd-overview shows that the data is being synchronized. Because DRBD synchronizes the two peer devices bit by bit, this initial full sync takes place whether or not there is any data on the device.
[root@node1 ~]# drbd-overview
  0:dbdata/0  SyncTarget Primary/Secondary Inconsistent/UpToDate B r-----
    [======>.............] sync'ed: 36.7% (6488/10244)M
Sync Complete:
[root@node1 ~]# drbd-overview
  0:dbdata/0  Connected Primary/Secondary UpToDate/UpToDate B r----- /data ext4 9.9G 146M 9.3G 2%
After synchronization completes, you can create a file system on /dev/drbd0 and mount it:
[root@node1 ~]# mkfs -t ext4 -b 1024 /dev/drbd0
[root@node1 ~]# mkdir /data
[root@node1 ~]# mount /dev/drbd0 /data
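To be able to verify later that the secondary really received the data, it helps to drop a recognizable test file into the mounted file system before switching roles:
[root@node1 ~]# cp /etc/issue /data/   # any small test file will do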
5) Switching the primary and secondary roles
In DRBD's primary/secondary model, only one node can be primary at any time. To swap roles, the current primary must first be demoted to secondary; only then can the original secondary be promoted to primary.
On node1:
[root@node1 ~]# umount /data
[root@node1 ~]# drbdadm secondary dbdata
On node2:
[root@node2 ~]# drbdadm primary dbdata
[root@node2 ~]# drbd-overview
  0:dbdata/0  Connected Primary/Secondary UpToDate/UpToDate B r-----
[root@node2 ~]# mkdir /data
[root@node2 ~]# mount /dev/drbd0 /data
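If a test file was copied into /data while node1 was primary (as sketched earlier), it should now show up on node2, confirming that the data was replicated:
[root@node2 ~]# ls /data               # the test file written on node1 should be listed here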
6) DRBD split-brain recovery
If the cluster misbehaves, DRBD may end up in a split-brain state; the two nodes then each run StandAlone and can no longer see each other's state:
Node1:
[root@node1 ~]# drbd-overview
  0:dbdata/0  StandAlone Secondary/Unknown UpToDate/DUnknown r-----
Node2:
[root@node2 ~]# drbd-overview
  0:dbdata/0  StandAlone Secondary/Unknown UpToDate/DUnknown r-----
In this case you can recreate the metadata on one of the nodes (the one whose data will be discarded) and then restart the service on both nodes.
[root@node2 ~]# drbdadm down dbdata       # run on the node whose data is to be discarded
[root@node2 ~]# drbdadm create-md dbdata
After restarting the service, the data will start syncing again:
[root@node2 ~]# drbd-overview
  0:dbdata/0  SyncTarget Secondary/Secondary Inconsistent/UpToDate B r-----
    [>...................] sync'ed: 8.0% (9432/10244)M
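As an alternative to recreating the metadata, DRBD 8.4 also allows a split brain to be resolved manually by telling one node, the split-brain "victim", to discard its own changes and reconnect; a sketch assuming node2 is the victim whose changes are thrown away:
[root@node2 ~]# drbdadm disconnect dbdata
[root@node2 ~]# drbdadm secondary dbdata
[root@node2 ~]# drbdadm connect --discard-my-data dbdata   # node2 abandons its local modifications
[root@node1 ~]# drbdadm connect dbdata                     # only needed if node1 is also in StandAlone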