A detailed description of the DRBD installation and configuration process on CentOS 6.7


I. Introduction to DRBD


DRBD stands for Distributed Replicated Block Device. It consists of a kernel module and related scripts and is used to build high-availability clusters by mirroring an entire block device over the network; you can think of it as a network RAID. It lets you maintain a real-time image of a local block device on a remote machine.


II. How does DRBD work?


The primary node (DRBD Primary) receives the data, writes it to its local disk, and sends it to the other host (DRBD Secondary), which then saves the data to its own disk. Currently DRBD allows read/write access on only one node at a time, which is sufficient for the usual failover scenario in a high-availability cluster. Later versions may support read/write access on both nodes.


III. The relationship between DRBD and HA


A DRBD system consists of two nodes and, like an HA cluster, has a primary node and a standby node. On the node holding the primary device, applications and the operating system can run and access the DRBD device (/dev/drbd*). Data written by the primary node is stored on the primary node's disk through the DRBD device and is automatically sent to the standby node's DRBD device, which finally writes it to the standby node's disk; on the standby node, DRBD simply writes the data arriving at its DRBD device to the local disk. Most high-availability clusters use shared storage, and DRBD can act as such a shared storage device without requiring much hardware investment. Because it runs over a TCP/IP network, using DRBD as shared storage is far cheaper than a dedicated storage network, while its performance and stability remain good.


IV. DRBD replication modes


Protocol A:


Asynchronous replication protocol. A write is considered complete once the local disk write has finished and the packet is in the send queue. If the node fails, data loss can occur because data destined for the remote node may still be in the send queue. The data on the failover node is consistent but not up to date. This protocol is typically used for geographically separated nodes.


Protocol B:


Memory-synchronous (semi-synchronous) replication protocol. A write on the primary node is considered complete once the local disk write has finished and the replication packet has reached the peer node. Data loss can occur if both participating nodes fail simultaneously, because data in transit may not yet have been committed to disk.


Protocol C:


Synchronous replication protocol. A write is considered complete only when both the local and the remote disk have confirmed the write. No data is lost, so this is the popular mode for cluster nodes, but I/O throughput depends on network bandwidth.


Protocol C is generally used, but because it waits for the remote disk it affects throughput and adds network latency. For data reliability, choose the protocol carefully in a production environment.
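As a sketch only (DRBD 8.3-style syntax; the resource name "data" is a placeholder, not from this article), the replication protocol is selected with a single line in the resource definition:

```
resource data {
    protocol C;    # A = asynchronous, B = memory-synchronous, C = synchronous
    ...
}
```

In DRBD 8.4 this option moves into the net { } section instead of the resource top level.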



V. Underlying block devices supported by DRBD

DRBD is built on top of an underlying device and presents a block device of its own. To the user, a DRBD device behaves like a physical disk: a file system can be created on it. The underlying devices supported by DRBD fall into the following classes:

1. a disk, or a partition of a disk;

2. a RAID device;

3. an LVM logical volume;
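For illustration (the device names here are hypothetical, not from this article), the disk line of a resource definition can point at any of these device classes:

```
disk /dev/sdb1;          # a whole disk or a disk partition
disk /dev/md0;           # a RAID device
disk /dev/vg0/lv_drbd;   # an LVM logical volume
```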


VI. Working principle diagram of DRBD


DRBD is a distributed storage system implemented in the storage layer of the Linux kernel; two Linux servers can use DRBD to share file systems and data. Its functionality is similar to a network RAID-1:

(Diagram: http://img.educity.cn/img_16/333/2014031904/108857041256.jpg)


VII. Configuration tools for DRBD

drbdadm: the high-level management tool; it reads /etc/drbd.conf and sends instructions to drbdsetup and drbdmeta.

drbdsetup: configures the DRBD module loaded into the kernel; rarely used directly.

drbdmeta: manages the metadata structures; rarely used directly.


The master configuration file for DRBD is /etc/drbd.conf. For ease of management it is now common to split the configuration into multiple files saved under the /etc/drbd.d directory, with the master file using only "include" directives to pull in these fragments. Typically /etc/drbd.d contains global_common.conf and a set of files ending in .res: global_common.conf defines the global and common sections, and each .res file defines one resource.

The global section may occur only once in the configuration; if all configuration is kept in a single file rather than split across several, the global section must appear at the very beginning. Currently only the parameters minor-count, dialog-refresh, disable-ip-verification, and usage-count may be defined in the global section.

The common section defines parameters inherited by default by every resource; any parameter usable in a resource definition may be set there. The common section is not required in practice, but defining parameters shared by multiple resources in it is recommended to reduce the complexity of the configuration files.

The resource sections define the DRBD resources; each resource is typically defined in its own file under /etc/drbd.d, ending in .res. A resource must be named when defined, and the name may consist of any non-whitespace ASCII characters. Each resource definition contains at least two host ("on") sub-sections naming the nodes the resource is associated with; other parameters can be inherited from the common section or from DRBD's defaults and need not be set explicitly.
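A minimal sketch of what /etc/drbd.d/global_common.conf might contain; the values are illustrative assumptions, not taken from this article:

```
global {
    usage-count no;      # opt out of LINBIT's usage statistics
}

common {
    protocol C;          # inherited by every resource unless overridden
}
```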

In short, the DRBD configuration is modular: drbd.conf is the master configuration file, and the other configuration modules live under /etc/drbd.d/.


VIII. Resources in DRBD

Resource name: any ASCII characters except whitespace.

DRBD device: the device file for this DRBD device on both nodes, typically /dev/drbdN, with major device number 147.

Disk configuration: the storage device each node contributes.

Network configuration: the network properties both sides use for data synchronization.


Example:

resource data {                        # resource named "data"

    on master {                        # settings for the node "master"

        device     /dev/drbd0;         # the DRBD device name

        disk       /dev/sdb1;          # the backing device to use for DRBD

        address    192.168.1.136:7788; # this node's IP address and port

        meta-disk  internal;           # store the metadata on the data partition itself; it can also be kept on a separate partition

    }

    on slave {

        device     /dev/drbd0;

        disk       /dev/sdb1;

        address    192.168.1.137:7788;

        meta-disk  internal;

    }

}



IX. Installation of DRBD


Description: DRBD consists of two parts, a kernel module and user-space management tools. The DRBD kernel module was merged into the mainline Linux kernel in 2.6.33, so if your kernel is at or above that version you only need to install the management tools; otherwise you must install both the kernel module package and the management tools, and the two version numbers must match.


Master:

[root@master ~]# yum -y install drbd84 kmod-drbd84

Slave:

[root@slave ~]# yum -y install drbd84 kmod-drbd84


After installing, create the resource metadata on each node (drbdadm create-md <resource>) and start the service on both; then view the status with /etc/init.d/drbd status:

drbd driver loaded OK; device status:

version: 8.2.6 (api:88/proto:86-88)

GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by [email protected], 2008-10-03 11:30:17

m:res  cs  st  ds  p  mounted  fstype

0:r0  Connected  Secondary/Secondary  Inconsistent/Inconsistent  C

Both machines are now Secondary, i.e. in the standby state.

An example of an error state:

drbd driver loaded OK; device status:

version: 8.3.16 (api:88/proto:86-97)

GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by [email protected], 2014-11-24 14:51:37

m:res  cs  ro  ds  p  mounted  fstype

0:r0  WFConnection  Secondary/Unknown  Inconsistent/DUnknown  C

If the peer shows Unknown, check that both nodes are configured consistently and can reach each other over the network.
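For use in monitoring scripts, the cs/ro/ds fields of a /proc/drbd resource line can be extracted with standard shell tools. A minimal sketch; the sample line is hard-coded here rather than read from /proc/drbd (note that DRBD 8.2 labels the roles field st: rather than ro:):

```shell
#!/bin/sh
# Sample /proc/drbd resource line; in practice, read it from /proc/drbd.
line='0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r----'

# cs: connection state, ro: roles (local/peer), ds: disk states (local/peer)
cs=$(printf '%s\n' "$line" | grep -o 'cs:[^ ]*' | cut -d: -f2)
ro=$(printf '%s\n' "$line" | grep -o 'ro:[^ ]*' | cut -d: -f2)
ds=$(printf '%s\n' "$line" | grep -o 'ds:[^ ]*' | cut -d: -f2)

echo "connection=$cs roles=$ro disks=$ds"
# prints: connection=Connected roles=Secondary/Secondary disks=Inconsistent/Inconsistent
```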



X. Switching the primary and standby nodes in DRBD

There are two ways to switch the primary and standby nodes: switching by stopping the DRBD service, and normal switching. Each is described below.

1. Switching by stopping the DRBD service

When the DRBD service on the primary node is stopped, the mounted DRBD partition is automatically unmounted on that node. Then execute the switch command on the standby node:

[root@slave ~]# drbdadm primary all

This will cause an error:

2: State change failed: (-7) Refusing to be Primary while peer is not outdated

Command 'drbdsetup 2 primary' terminated with exit code 11

Therefore, the following command must be executed on the standby node:

[root@slave ~]# drbdsetup /dev/drbd0 primary -o

Or

[root@slave ~]# drbdadm -- --overwrite-data-of-peer primary all

At this point, you can switch normally.

When the standby node is promoted to primary, the original primary automatically becomes the standby; there is no need to run a demotion command on it.


2. Normal switching

First unmount the DRBD partition on the primary node:

[root@master ~]# umount /mnt/

And then execute

[root@master ~]# drbdadm secondary all

If this command is not executed, attempting to switch the standby node to primary will produce an error:

2: State change failed: (-1) Multiple primaries not allowed by config

Command 'drbdsetup 2 primary' terminated with exit code 11

Next, execute on the standby node:

[root@slave ~]# drbdadm primary all

Finally, mount the disk partition on the standby node:

[root@slave ~]# mount /dev/drbd0 /mnt


Reference documents

http://www.linuxidc.com/Linux/2013-09/90321.htm

http://www.educity.cn/linux/1149600.html



This article is from the "Cowboy" blog; please keep the source: http://fangniuwa.blog.51cto.com/10209030/1759470

