DRBD + Corosync + Pacemaker: Building a Highly Available MySQL Cluster


I. Introduction to DRBD

DRBD (Distributed Replicated Block Device) is a software-based, shared-nothing storage replication solution that mirrors a block device between hosts. It works much like RAID 1, but unlike RAID, DRBD mirrors block data across hosts. How DRBD works: DRBD operates at the kernel level; a copy of the data about to be written to the local disk is also handed to the local NIC, which sends it over the network to the peer DRBD host, where it is written to that host's disk. As a result, the disks of the two DRBD hosts hold identical data, realizing a distributed replicated block device. Read and write operations on the device run on only one host at a time; the peer DRBD host takes over serving the user's reads and writes only when the primary host fails. In a cluster, DRBD can be combined with the HA cluster's distributed lock manager so that both the master and slave DRBD hosts can read and write data, forming a dual-primary DRBD setup.

1. DRBD replicates data at the block level and works in the kernel

2. DRBD is a cross-host block device mirroring system

DRBD structure diagram (from the DRBD website):


Workflow analysis (following the diagram): when a service or any process reads or writes a file on disk, it issues a file-system-level call into the kernel, and the file system maps the data onto the disk. Data is first cached in memory in the buffer cache, ordered by the disk scheduler, and then handed to the disk driver, which commits it to disk storage. A raw device is accessed directly at the block level, without going through file system calls. DRBD inserts a layer between kernel memory and the disk driver: it takes a mirror copy of the data coming out of the buffer cache, lets the original data continue through the disk driver to local disk storage, and at the same time sends the mirrored copy over TCP/IP through the local NIC to the DRBD layer on the peer host, which passes it down through that host's disk driver to its disk storage.

DRBD's user-space management tools are used to administer the master and slave DRBD hosts. drbdadm is the one used most, because it best matches an administrator's habits. drbdsetup and drbdmeta are used less often, since they operate closer to the underlying device.
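For quick reference, these are the drbdadm subcommands used later in this article (a sketch based on DRBD 8.3 syntax; mystore is the resource name defined in section II):

drbdadm create-md mystore    # initialize the resource's metadata
drbdadm up mystore           # attach the backing disk and connect to the peer
drbdadm primary mystore      # promote the local node to Primary
drbdadm secondary mystore    # demote the local node to Secondary
drbdadm role mystore         # show the local and peer roles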

DRBD's working characteristics: replication happens in real time, it is transparent to applications, and the data synchronization type is configurable.

There are three data synchronization protocols:

Protocol A: asynchronous; good performance, poor reliability

Protocol B: semi-synchronous; a tradeoff between performance and reliability

Protocol C: synchronous; good reliability, poor performance
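In the configuration file the protocol is chosen with a single directive. A minimal sketch, assuming type C (the synchronous protocol used later in this article):

common {
        protocol C;    ## type C: a write completes only after the peer has also written the data to disk
}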

Resource attributes of DRBD:

Resource name: must be unique; may contain only ASCII characters and no spaces

DRBD device: the block device file managed by DRBD, named /dev/drbd#

Major device number: 147

Minor device numbers: numbered starting from 0

Disk configuration: the disk or partition on each host that backs this DRBD device

Network configuration: properties of the network connection used when data is synchronized
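These attributes map directly onto the resource syntax. A minimal skeleton with placeholder names (the concrete mystore.res file appears in section II):

resource <name> {                  ## resource name: ASCII only, no spaces
        device /dev/drbd0;         ## DRBD device: major number 147, minor number 0
        disk /dev/sdb1;            ## backing disk or partition on each host
        on <hostname> {
                address <ip>:7789; ## network endpoint used for synchronization
                meta-disk internal;
        }
}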

II. Implementing the HA MySQL cluster with DRBD

Corosync, Pacemaker, and crmsh were deployed in an earlier article: "Corosync + Pacemaker: building highly available clusters with crmsh".

1. Install DRBD (node1, node2)

DRBD was merged into the Linux kernel in version 2.6.33. The system used in this article is CentOS 6.7 with kernel 2.6.32-573.el6.x86_64, so to use DRBD you must either compile it from source or use a third-party RPM package.
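Confirm the running kernel before choosing an installation route:

[root@node1 ~]# uname -r
2.6.32-573.el6.x86_64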

Prepare the EPEL source (install the epel-release RPM for your release):

[root@node1 corosync]# rpm -Uvh <epel-release RPM URL>

Install the DRBD main program and kernel module:

[root@node1 corosync]# yum install drbd83 kmod-drbd83


2. Prepare the disks on the two nodes (node1, node2)

A new 10 GB disk was added to each virtual machine; on CentOS 6.7 the new disk can be detected without shutting the system down by rescanning the SCSI bus:

[root@node1 corosync]# ls /sys/class/scsi_host/
host0  host1  host2
[root@node1 corosync]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@node1 corosync]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@node1 corosync]# echo "- - -" > /sys/class/scsi_host/host2/scan
[root@node1 corosync]# fdisk -l | grep /dev/sdb
Disk /dev/sdb: 10.7 GB, 10737418240 bytes

Create a partition on the new disk, but do not format it:

[root@node1 corosync]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x35afce5b.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

WARNING: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p

Disk /dev/sdb: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x35afce5b

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1305, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1305, default 1305): +5G

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.


Notify the kernel to update the partition table (on both nodes):

[root@node1 ~]# partx -a /dev/sdb
[root@node2 ~]# partx -a /dev/sdb
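As an additional sanity check, the kernel's partition list should now include the new partition:

[root@node1 ~]# grep sdb /proc/partitions    # sdb and sdb1 should both be listed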

3. Configuration files

The main configuration file is /etc/drbd.conf, which includes /etc/drbd.d/global_common.conf and /etc/drbd.d/*.res

/etc/drbd.d/global_common.conf: provides the global configuration and the settings shared by all DRBD devices

/etc/drbd.d/*.res: resource definitions
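For reference, the stock /etc/drbd.conf shipped with the drbd83 package normally contains nothing but the two include directives (a sketch; the packaged file may differ slightly):

# /etc/drbd.conf
include "drbd.d/global_common.conf";
include "drbd.d/*.res";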

vim /etc/drbd.d/global_common.conf

## global: global properties that define DRBD's own behavior
global {
        usage-count no;        ## do not report usage statistics to the DRBD project
        # minor-count dialog-refresh disable-ip-verification
}
## common: shared properties that define the common characteristics of all DRBD devices
common {
        protocol C;
        ## handlers: define how cluster events such as split-brain are handled
        handlers {
                # These are EXAMPLE handlers only.
                # They may have severe implications,
                # like hard resetting the node under certain circumstances.
                # Be careful when chosing your poison.
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }
        ## startup: wait times and timeouts between nodes at startup
        startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        }
        ## disk-related properties
        disk {
                on-io-error detach;      ## detach the backing device on an I/O error
                # on-io-error fencing use-bmbv no-disk-barrier no-disk-flushes
                # no-disk-drain no-md-flushes max-bio-bvecs
        }
        ## network-related properties
        net {
                cram-hmac-alg "sha1";                      ## algorithm used to authenticate peer messages
                shared-secret "opnezkj3ziyn/qyfgdvk5w";    ## shared key used by that algorithm
                # sndbuf-size rcvbuf-size timeout connect-int ping-int ping-timeout max-buffers
                # max-epoch-size ko-count allow-two-primaries cram-hmac-alg shared-secret
                # after-sb-0pri after-sb-1pri after-sb-2pri data-integrity-alg no-tcp-cork
        }
        ## synchronization settings
        syncer {
                rate 100M;    ## synchronization rate
                # rate after al-extents use-rle cpu-mask verify-alg csums-alg
        }
}

Generate a random string and use it as the shared-secret value in the net section:

[root@node1 ~]# openssl rand -base64 16
opnezkj3ziyn/qyfgdvk5w==

Configure Resources

[root@node1 drbd.d]# vim mystore.res

resource mystore {
        device /dev/drbd0;
        disk /dev/sdb1;
        on node1 {
                address 192.168.0.15:7789;
                meta-disk internal;
        }
        on node2 {
                address 192.168.0.16:7789;
                meta-disk internal;
        }
}


Copy the configuration files to node2:

[root@node1 ~]# scp /etc/drbd.d/* node2:/etc/drbd.d/
global_common.conf                       100% 1704     1.7KB/s   00:00
mystore.res                              100%  226     0.2KB/s   00:00


4. Initialize the defined resource and start the service on both nodes

Node1

[root@node1 ~]# drbdadm create-md mystore
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.

Node2

[root@node2 ~]# drbdadm create-md mystore
Writing meta data...
initializing activity log
NOT initialized bitmap
New drbd meta data block successfully created.


Start DRBD on node1 and node2 at the same time:

[root@node1 ~]# service drbd start
[root@node2 ~]# service drbd start
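At this point both nodes should report the Secondary role with Inconsistent data; this can be checked on either node (illustrative, abridged output):

[root@node1 ~]# drbd-overview
  0:mystore  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----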


After the service starts, both nodes are in the Secondary role; we now set one of the nodes to Primary (execute on one node only):

[root@node1 ~]# drbdadm primary --force mystore

or

[root@node1 ~]# drbdadm -- --overwrite-data-of-peer primary mystore


View status information

[root@node1 ~]# drbd-overview
  0:mystore  SyncSource Primary/Secondary UpToDate/Inconsistent C r-----
        [======>.............] sync'ed: 39.7% (3096/5128)M

Once synchronization completes, node1 is the primary and node2 is the secondary:

[root@node1 ~]# drbd-overview
  0:mystore  Connected Primary/Secondary UpToDate/UpToDate C r-----

It is also possible to make node1 the secondary and node2 the primary:

[root@node1 ~]# drbdadm secondary mystore
[root@node2 ~]# drbdadm primary --force mystore


5. Create the file system

All operations are performed on node1, the primary node.

Format the partition:

[root@node1 ~]# mke2fs -t ext4 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
328656 inodes, 1313255 blocks
65662 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1346371584
41 block groups
32768 blocks per group, 32768 fragments per group
8016 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.


Mount and Test DRBD

[root@node1 ~]# mount /dev/drbd0 /mnt
[root@node1 ~]# cd /mnt
[root@node1 mnt]# cp /etc/issue ./
[root@node1 mnt]# ls
issue  lost+found

Unmount:

[root@node1 ~]# umount /mnt
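To confirm the data was really replicated, swap the roles as in step 4 and mount the device on node2; the issue file copied above should be there (a sketch reusing only commands shown earlier):

[root@node1 ~]# drbdadm secondary mystore
[root@node2 ~]# drbdadm primary mystore
[root@node2 ~]# mount /dev/drbd0 /mnt
[root@node2 ~]# ls /mnt
issue  lost+found
[root@node2 ~]# umount /mnt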



To be continued!

My knowledge is limited; if there are any errors, please correct me.

This article is from the "Linux Sailing" blog; please keep this source when reposting: http://jiayimeng.blog.51cto.com/10604001/1875979
