Linux High Availability (HA) Cluster Notes: heartbeat + drbd + NFS


By: http://blog.vfocus.net

Heartbeat + drbd

If the primary server goes down, the loss can be immeasurable, so the services it provides need to be made redundant. Among the many ways to implement server redundancy, heartbeat offers a cheap and scalable high-availability cluster solution. Here we use heartbeat + drbd to build an HA cluster server on Linux.
drbd is a block device designed for HA use; it behaves much like a network RAID-1. When you write data to the local file system, the data is also sent to the other host over the network and written there in the same format, so the data on the local node (primary) and the remote node (secondary) stays synchronized in real time. If the local system fails, the remote host still holds an identical copy of the data that can be used again. In an HA setup, drbd can therefore replace a shared disk array: because the data exists on both the local and the remote host, the remote host only needs to take over with its own copy of the data to continue the service after a failover.

Install heartbeat

[root@manager src]# rpm -ivh e2fsprogs-1.35-7.1.i386.rpm
[root@manager src]# tar zxvf libnet.tar.gz
[root@manager libnet]# ./configure
[root@manager libnet]# make
[root@manager libnet]# make install
[root@manager src]# tar zxvf heartbeat-2.1.2.tar.tar
[root@manager src]# cd heartbeat-2.1.2
[root@manager heartbeat-2.1.2]# ./ConfigureMe configure
[root@manager heartbeat-2.1.2]# make
[root@manager heartbeat-2.1.2]# make install
[root@manager heartbeat-2.1.2]# cp doc/ha.cf /etc/ha.d/
[root@manager heartbeat-2.1.2]# cp doc/haresources /etc/ha.d/
[root@manager heartbeat-2.1.2]# cp doc/authkeys /etc/ha.d/
[root@manager heartbeat-2.1.2]# cd /etc/ha.d/
Now edit the configuration files (heartbeat must be installed and configured on both machines).
(The heartbeat configuration itself is fairly simple and there are plenty of examples online, so it is not repeated in detail here; a minimal sketch follows.)
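
For reference, here is a minimal sketch of the two files not shown elsewhere in these notes, ha.cf and authkeys (haresources is covered further down). The timing values, the heartbeat interface, and the authkeys secret are assumptions to adapt to your own environment; only the node names manager and manager_bak come from this setup.

/etc/ha.d/ha.cf (identical on both nodes):
logfile /var/log/ha-log
keepalive 2            # interval between heartbeats, in seconds
deadtime 30            # declare the peer dead after 30 seconds of silence
warntime 10
initdead 120
udpport 694
bcast eth0             # interface carrying the heartbeat (assumed; adapt to your network)
auto_failback on
node manager
node manager_bak

/etc/ha.d/authkeys (must be mode 600 on both nodes):
auth 1
1 sha1 ReplaceWithYourOwnSecret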

Start compiling and installing drbd

[root@manager root]# cp drbd-8.2.1.tar.tar /usr/src/
[root@manager root]# cd /usr/src/
[root@manager src]# tar zxvf drbd-8.2.1.tar.tar
[root@manager src]# cd drbd-8.2.1
[root@manager drbd-8.2.1]# make KERNVER=2.6.17.11 KDIR=/usr/src/linux-2.6.17.11

If the compilation succeeds, you will see "Module build was successful".
[root@manager drbd-8.2.1]# make install

You can edit the configuration file.

[root@manager drbd-8.2.1]# vi /etc/drbd.conf
Install the drbd service on both manager and manager_bak, and configure /etc/drbd.conf on both machines:
[root@manager_bak root]# grep -v "#" /etc/drbd.conf
global {
    usage-count yes;         # whether to take part in DRBD's usage statistics; yes means participate
}
common {
    syncer { rate 300M; }
}
resource r0 {
    protocol C;
    disk {
        on-io-error detach;
        size 100G;           # the two servers' disks differ in size in this test environment, so the drbd size is fixed here
    }
    net {
        after-sb-0pri disconnect;
        rr-conflict disconnect;
    }
    syncer {
        rate 300M;           # network synchronization rate
        al-extents 257;
    }
    on manager_bak {
        device /dev/drbd0;
        disk /dev/sda3;
        address 192.168.0.2:7788;
        meta-disk internal;
    }
    on manager {
        device /dev/drbd0;
        disk /dev/sdc;
        address 192.168.0.1:7788;
        meta-disk internal;
    }
}
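
After editing the file on both machines, you can check that the configuration parses cleanly; drbdadm's dump subcommand prints the resource as DRBD understands it (a quick sanity check, not part of the original procedure):

[root@manager root]# drbdadm dump r0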

Before starting drbd, you need to create the metadata area in which drbd records its state. Run the following on both hosts:

[root@manager ha.d]# drbdadm create-md r0
[root@manager ha.d]# mknod /dev/drbd0 b 147 0
[root@manager ha.d]# /etc/init.d/drbd start

At this point both hosts are in the secondary role and the disks are "Inconsistent", because drbd cannot tell which node is the primary and whose data should be taken as the reference copy. We therefore have to initialize the resource by running the following on manager:

[root@manager /]# drbdsetup /dev/drbd0 primary -o
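
The same promotion can also be expressed through drbdadm; in DRBD 8.x the equivalent form is shown below (given here only as an alternative, using the resource name r0 from the configuration above):

[root@manager /]# drbdadm -- --overwrite-data-of-peer primary r0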

Data synchronization now starts; you can watch its progress with cat /proc/drbd.
Check the status of drbd once synchronization has finished:

[root@manager /]# cat /proc/drbd
version: 8.2.1 (api:86/proto:86-87)
GIT-hash: 318925802fc2638479ad090b73d7af45503dd184 build by root@manager, 2007-12-05 16:40:14
 0: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---
    ns:1514 nr:1110 dw:2616 dr:2259 al:0 bm:482 lo:0 pe:0 ua:0 ap:0
        resync: used:0/31 hits:2 misses:2 starving:0 dirty:0 changed:2
        act_log: used:0/257 hits:202 misses:0 starving:0 dirty:0 changed:0
The disk state is "UpToDate/UpToDate", which means data synchronization is complete.

Now create a file system on the drbd device (on the primary node only):

[root@manager /]# mkfs.xfs /dev/drbd0

You can now mount the drbd device on manager under the /export directory and use it. The drbd device on the standby machine cannot be mounted, because it is busy receiving data from the primary and is managed by drbd itself.

[root@manager /]# mount /dev/drbd0 /export
[root@manager /]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1             10229696   3019636   7210060  30% /
/dev/drbd0           104806400   3046752 101759648   3% /export

Now restore the previously backed-up contents of /export.
Without heartbeat, drbd can only switch the primary/secondary roles manually; a sketch of such a manual switchover follows.
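
For reference, a manual switchover (no heartbeat involved) would look roughly like this, using the resource name r0 and the mount point from above:

On manager (the current primary):
[root@manager /]# umount /export
[root@manager /]# drbdadm secondary r0

On manager_bak:
[root@manager_bak /]# drbdadm primary r0
[root@manager_bak /]# mount /dev/drbd0 /export
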
To have heartbeat switch drbd over automatically, modify the heartbeat configuration as follows.

[root@manager /]# vi /etc/ha.d/haresources
manager 192.168.0.3 drbddisk::r0 Filesystem::/dev/drbd0::/export::xfs dhcpd xinetd portmap nfs

Notes on the line above:
manager is the heartbeat primary host.
192.168.0.3 is the virtual IP address used for external service; it moves automatically between the primary and standby hosts.
drbddisk::r0 promotes the drbd resource r0 defined earlier.
Filesystem::/dev/drbd0::/export::xfs mounts the file system.
dhcpd xinetd portmap nfs are the other services to be switched over (separated by spaces).

Now we can test it.

(dhcpd, portmap, nfs, and the other services to be switched over must first be configured on both servers; see the NFS export sketch below.)
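
For the NFS part, the directory served to the clients is the drbd-backed /export. A minimal /etc/exports entry, identical on both servers, might look like the line below; the network range and export options are assumptions to adapt:

/export   192.168.0.0/255.255.255.0(rw,sync,no_root_squash)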

[root@manager root]# chkconfig --list

Check that heartbeat and drbd are set to start automatically at boot.
The services that heartbeat is supposed to switch over must not start automatically at boot, because heartbeat starts them itself; a sketch of the corresponding chkconfig calls follows.
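
As a sketch, on a Red Hat-style system this amounts to something like the following (service names as used in this setup):

chkconfig heartbeat on
chkconfig drbd on
chkconfig dhcpd off
chkconfig xinetd off
chkconfig portmap off
chkconfig nfs off
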
Power on gg1 and boot it over PXE (gg1 is one of a group of diskless servers; after booting it mounts its system from the storage on manager).
SSH to gg1 from manager:

[root@manager root]# ssh gg1
-bash-2.05b#
-bash-2.05b# arp -a
? (192.168.0.3) at 00:19:B9:E4:7D:22 [ether] on eth0
-bash-2.05b# ls
-bash-2.05b# touch test
-bash-2.05b# ls
test

Shut down manager, or stop the heartbeat service on it:
[root@manager root]# /etc/init.d/heartbeat stop
Stopping High-Availability services: [ OK ]

Run ifconfig on manager_bak:
[root@manager_bak root]# ifconfig
You can see that eth1:0 is up and carries the IP address 192.168.0.3.
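
On manager_bak you can also confirm that drbd has been promoted; the status line should now show the local role as Primary (st:Primary/Secondary, in the same format as the /proc/drbd output shown earlier):

[root@manager_bak root]# cat /proc/drbd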

[root@manager_bak root]# ssh gg1
-bash-2.05b#
-bash-2.05b# arp -a
? (192.168.0.3) at 00:19:B9:E5:3B:FC [ether] on eth0   (the MAC address behind 192.168.0.3 has changed)
-bash-2.05b# ls
test

The test file that was created earlier over SSH from manager is still there.
-bash-2.05b# echo "this is test" > test
-bash-2.05b# cat test
this is test

This shows that gg1 can read and write the NFS share now served from manager_bak.
Start the heartbeat service on manager again:

[root@manager root]# /etc/init.d/heartbeat start
Starting High-Availability services:
12:46:08 INFO: Resource is stopped [ OK ]

Back on gg1, the file is still accessible after failback:
-bash-2.05b# cat test
this is test

That completes the setup. Previously, without drbd, heartbeat could also switch over services such as Apache, dhcpd, portmap, and NFS, but after an NFS switchover the clients had to re-mount the NFS shared directory, otherwise they would get a "Stale NFS file handle" error. With heartbeat + drbd working together, NFS and the other services switch over seamlessly, and the clients do not need to re-mount the NFS directory.
