Build DRBD-based data sharing step by step

DRBD (Distributed Replicated Block Device) is a distributed replicated block device. In kernel versions later than 2.6.33, DRBD is compiled into the kernel, so DRBD works in kernel space.

Let's take a look at the workflow of DRBD:

When the DRBD feature is enabled and a DRBD device is mounted locally, writes from user space are split into two paths once they pass the cache: one path stores the data on the local disk, and the other transmits it through the local TCP/IP stack and the network to the peer DRBD host, which receives the data and writes it to its own local DRBD device. This achieves data sharing and synchronization. The entire synchronization process takes place in kernel space, so it is transparent to user space.
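The two write paths can be pictured with a plain-shell analogy (this is only an illustration, not DRBD itself): much as DRBD duplicates each write to the local disk and to the peer, tee duplicates one input stream into two destinations.

```shell
# Analogy only: one write, two destinations ("local disk" + "peer"),
# roughly what DRBD does in kernel space for every block written.
printf 'block data\n' | tee local.img > peer.img

# Both copies are identical afterwards:
cmp local.img peer.img && echo "in sync"
```

The real point of the analogy is that the caller issues a single write and never sees the duplication, just as a user-space application never sees DRBD's replication.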

DRBD consists of two parts: a kernel module and a user-space management tool.

User-space management tool: defines how data is transmitted and shared by the kernel.
Kernel module: transmits and shares data according to the behavior defined by the user-space management tool.

There are two DRBD models:

Single-primary: any resource can be read and written only on the primary node. The file system on the primary node can be of any type; this is the usual solution for resource failover in a high-availability cluster.

Dual-primary: any user can read and write on either node at any time, but this must be combined with a cluster file system, so GFS or OCFS2 is required. Dual-primary mode is supported only in DRBD 8.0 and later.
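For reference, dual-primary mode is enabled per resource in the configuration. The fragment below is a hedged sketch of the relevant DRBD 8.3 options (the resource name `web` is borrowed from the example later in this article); it still requires a cluster file system such as GFS or OCFS2 on top, plus the usual device/disk/address sections:

```
resource web {
        startup {
                become-primary-on both;   # promote both nodes at startup
        }
        net {
                allow-two-primaries;      # permit Primary/Primary roles
        }
        # plus the usual on <host> { device/disk/address/meta-disk } sections
}
```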

The following describes how drbd works through the specific configuration process.

Configuration prerequisites:
1) This configuration uses two test nodes, node1.a.org and node2.a.org, with the IP addresses 192.168.0.202 and 192.168.0.204 respectively;
2) Each of node1 and node2 provides a partition of the same size as the DRBD backing device; here we use /dev/sda5 with a size of 1 GB.

1. Preparations

Ensure that host-name resolution works on both nodes and that each node's host name matches the output of the "uname -n" command. Make sure the /etc/hosts file on both nodes contains the following:
192.168.0.202 node1.a.org node1
192.168.0.204 node2.a.org node2
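The two entries can be added idempotently with a small helper. This is a hypothetical convenience script, not part of DRBD; it writes to a demo file here so it can be tried safely (point HOSTS at /etc/hosts on the real nodes):

```shell
# Append a hosts entry only if the FQDN is not already present.
HOSTS=./hosts.demo          # use HOSTS=/etc/hosts on the real nodes

add_host() {                # $1=IP  $2=FQDN  $3=short name
    grep -qs "$2" "$HOSTS" || echo "$1 $2 $3" >> "$HOSTS"
}

add_host 192.168.0.202 node1.a.org node1
add_host 192.168.0.204 node2.a.org node2

cat "$HOSTS"
```

Running it twice leaves the file unchanged, which is why the grep guard is there.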

To keep these host names after a reboot (set HOSTNAME accordingly in /etc/sysconfig/network as well), execute the following commands on each node:

Node1:

hostname node1.a.org

Node2:

hostname node2.a.org

2. Install the software packages

DRBD consists of a kernel module and a user-space management tool. The DRBD kernel module has been merged into the Linux kernel since 2.6.33. Therefore, if your kernel is at least that version, you only need to install the management tool; otherwise, you must install both the kernel module and the management tool, and their version numbers must match.
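Whether the kmod package is needed can be decided by comparing the running kernel against 2.6.33. The sketch below hard-codes a CentOS 5 kernel version for illustration; on a real node you would use uname -r instead:

```shell
# Compare the running kernel version against 2.6.33 (DRBD's merge point).
kver="2.6.18"     # illustration; on a real node: kver=$(uname -r | cut -d- -f1)
need="2.6.33"

# sort -V orders version strings numerically; if the smaller of the two
# is $need, the running kernel already contains the DRBD module.
if [ "$(printf '%s\n%s\n' "$kver" "$need" | sort -V | head -n1)" = "$need" ]; then
    verdict="userland tool only"
else
    verdict="kernel module + userland tool"
fi
echo "$verdict"
```

With the CentOS 5 kernel shown (2.6.18), both packages are needed, which is exactly the situation the yum command below handles.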

Here we use the 8.3 series (8.3.8).

After the download is complete, install it directly:

yum -y --nogpgcheck localinstall drbd83-8.3.8-1.el5.centos.i386.rpm kmod-drbd83-8.3.8-1.el5.centos.i686.rpm

3. Configure DRBD

1) Copy the sample configuration file to its working location:

cp /usr/share/doc/drbd83-8.3.8/drbd.conf /etc/

2) Configure /etc/drbd.d/global_common.conf:

global {
        usage-count no;
        # minor-count dialog-refresh disable-ip-verification
}

common {
        protocol C;

        handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }

        startup {
                wfc-timeout 120;
                degr-wfc-timeout 120;
        }

        disk {
                on-io-error detach;
                fencing resource-only;
        }

        net {
                cram-hmac-alg "sha1";
                shared-secret "mydrbdlab";
        }

        syncer {
                rate 100M;
        }
}

3) Define a resource in /etc/drbd.d/web.res with the following content:

resource web {
        on node1.a.org {
                device    /dev/drbd0;         # the DRBD device
                disk      /dev/sda5;          # the disk backing the DRBD device
                address   192.168.0.202:7789; # address and port of the DRBD device
                meta-disk internal;
        }
        on node2.a.org {
                device    /dev/drbd0;
                disk      /dev/sda5;
                address   192.168.0.204:7789;
                meta-disk internal;
        }
}

Copy the two configuration files to node2:

cd /etc/drbd.d
scp web.res global_common.conf node2:/etc/drbd.d
scp /etc/drbd.conf node2:/etc/


4. Initialize the defined resource on both nodes and start the service

1) Initialize the resource; run the following on both node1 and node2:

drbdadm create-md web

2) Start the service; run the following on both node1 and node2:

/etc/init.d/drbd start

3) View the startup status:

cat /proc/drbd

You can also run the drbd-overview command to view the details:

drbd-overview

From the output we can see that both nodes are in the Secondary state, so we need to promote node1 to Primary by executing the following command on node1:
drbdsetup /dev/drbd0 primary -o

Note: You can also promote the node with the following command, run on the node that should become Primary:
drbdadm -- --overwrite-data-of-peer primary web
(Note: these forms are needed only for the first promotion; afterwards, drbdadm primary web is sufficient.)

Check the status again and you will find that the initial data synchronization has started:

drbd-overview


You can watch the synchronization progress dynamically with the following command:
watch -n 1 'drbd-overview'
Check the status again after data synchronization is complete:

drbd-overview
You will find that both nodes are now UpToDate, with one Primary and one Secondary.
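The roles and disk states can be pulled out of the status line with standard tools. The line below is a hypothetical sample of drbd-overview output for this setup (field positions may differ between DRBD versions):

```shell
# Sample status line (illustrative) and a small awk field extraction.
line='0:web Connected Primary/Secondary UpToDate/UpToDate C r---- /drbd ext3'
roles=$(printf '%s\n' "$line" | awk '{print $3}')   # local/peer roles
disks=$(printf '%s\n' "$line" | awk '{print $4}')   # local/peer disk states
echo "roles=$roles disks=$disks"
```

A check like this is handy in monitoring scripts: alert whenever the disk states are not UpToDate/UpToDate.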

5. Create a file system

Because the file system can be mounted only on the Primary node, you can format the DRBD device only after promoting a node to Primary:

mke2fs -j /dev/drbd0
mkdir /drbd
mount /dev/drbd0 /drbd
6. Switch the Primary and Secondary roles

In the Primary/Secondary model of the DRBD service, only one node can be Primary at any point in time. To swap the roles of the two nodes, the current Primary must first be demoted to Secondary before the other node can be promoted to Primary:

We first store a test page (index.html) on the DRBD device on node1; then, after the Primary role moves from node1 to node2, we check whether node1's index.html has been replicated to node2.

Node1:

Create the file used for verification (for example, index.html).
First, unmount the DRBD device:

umount /drbd
Demote node1 to Secondary:

drbdadm secondary web
View the status:

drbd-overview

Node2:

Set node2 as Primary:

drbdadm primary web

View the status:
drbd-overview

The status shows that node2 has become the Primary node.
Mount the DRBD device:

mkdir /mnt/drbd
mount /dev/drbd0 /mnt/drbd
Check whether the file exists.
