Use the Red Hat Cluster Suite (RHCS) to build a high-availability cluster, combined with iSCSI network storage.
The topology is as follows:
Target Server
Install software
[root@localhost ~]# yum -y install scsi-target-utils
Enable service
[root@localhost ~]# service tgtd start
[root@localhost ~]# chkconfig tgtd on
Add a disk partition
[root@localhost ~]# fdisk /dev/sdb
Create a shared disk
[root@localhost ~]# tgtadm --lld iscsi --op new --mode target --tid 1 --targetname iqn.2011-21.com.a.target:disk
[root@localhost ~]# tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /dev/sdb
[root@localhost ~]# tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address 192.168.100.0/24
Auto-start (append the tgtadm commands to /etc/rc.d/rc.local so the target is recreated at boot)
[root@localhost ~]# vim /etc/rc.d/rc.local
tgtadm --lld iscsi --op new --mode target --tid 1 --targetname iqn.2011-21.com.a.target:disk
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /dev/sdb
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address 192.168.100.0/24
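As an alternative to rc.local, some scsi-target-utils builds can load a persistent target definition from /etc/tgt/targets.conf when tgtd starts. A minimal sketch, assuming such a build; the fragment is written to ./targets.conf here for inspection, and would be copied to /etc/tgt/targets.conf on the target server before restarting tgtd:

```shell
# Sketch of a persistent tgt target definition (assumes a tgt build that
# reads /etc/tgt/targets.conf). Written to ./targets.conf for inspection;
# copy it to /etc/tgt/targets.conf and restart tgtd to apply.
cat > ./targets.conf <<'EOF'
<target iqn.2011-21.com.a.target:disk>
    backing-store /dev/sdb
    initiator-address 192.168.100.0/24
</target>
EOF
```

This keeps the target, LUN, and access binding in one file instead of three tgtadm invocations replayed at every boot.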
Initiator 1
Install software
[root@node1 ~]# yum -y install iscsi-initiator-utils
Enable service
[root@node1 ~]# service iscsi start
[root@node1 ~]# chkconfig iscsi on
Log in to the disk
[root@node1 ~]# iscsiadm --mode discovery --type sendtargets --portal 192.168.100.30
[root@node1 ~]# iscsiadm --mode node --targetname iqn.2011-21.com.a.target:disk --portal 192.168.100.30 --login
Auto-start (append to /etc/rc.d/rc.local)
[root@node1 ~]# vim /etc/rc.d/rc.local
iscsiadm --mode discovery --type sendtargets --portal 192.168.100.30
iscsiadm --mode node --targetname iqn.2011-21.com.a.target:disk --portal 192.168.100.30 --login
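Rather than replaying the login from rc.local, open-iscsi can mark the discovered node record for automatic login at boot by setting node.startup to automatic. A sketch, shown here as a dry run (the command is only assembled and echoed; run the command itself on a real initiator after the discovery step):

```shell
# Alternative to rc.local: have the iscsi service log in automatically at
# boot by updating node.startup on the discovered node record.
# Dry run: the command is built as a string and echoed, not executed.
TARGET="iqn.2011-21.com.a.target:disk"
PORTAL="192.168.100.30"
CMD="iscsiadm --mode node --targetname $TARGET --portal $PORTAL --op update -n node.startup -v automatic"
echo "$CMD"
```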
Initiator 2
Install software
[root@node2 ~]# yum -y install iscsi-initiator-utils
Enable service
[root@node2 ~]# service iscsi start
[root@node2 ~]# chkconfig iscsi on
Log in to the disk
[root@node2 ~]# iscsiadm --mode discovery --type sendtargets --portal 192.168.100.30
[root@node2 ~]# iscsiadm --mode node --targetname iqn.2011-21.com.a.target:disk --portal 192.168.100.30 --login
Auto-start (append to /etc/rc.d/rc.local)
[root@node2 ~]# vim /etc/rc.d/rc.local
iscsiadm --mode discovery --type sendtargets --portal 192.168.100.30
iscsiadm --mode node --targetname iqn.2011-21.com.a.target:disk --portal 192.168.100.30 --login
GFS File System
Node1
[root@node1 ~]# pvcreate /dev/sdb
[root@node1 ~]# vgcreate vg1 /dev/sdb    (create the vg1 volume group used by lv1 below)
[root@node1 ~]# partprobe /dev/sdb
[root@node1 ~]# service clvmd restart
[root@node1 ~]# lvcreate -L 1500M -n lv1 vg1
[root@node1 ~]# gfs_mkfs -p lock_dlm -t cluster1:lv1 -j 3 /dev/vg1/lv1    (-t = cluster name : logical volume name)
[root@node1 ~]# service clvmd restart
Node2
[root@node2 ~]# partprobe /dev/sdb
[root@node2 ~]# service clvmd restart
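Once gfs_mkfs has run, each node can mount the shared volume. The mount point below (/mnt/gfs) is an assumed example, not named in the original; an /etc/fstab entry keeps the GFS volume mounted across reboots:

```
# /etc/fstab entry (hypothetical /mnt/gfs mount point; the directory must exist)
/dev/vg1/lv1    /mnt/gfs    gfs    defaults    0 0
```

After adding the entry, mount -a on each node verifies it before the next reboot.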
Notes:
1. After logging in to the disk, run partprobe /dev/sdb on each node to synchronize the disk information before creating the volume group. Once the logical volume is created, both nodes should show the volume group and logical volume information; if a node does not, run service clvmd restart.
2. Before making the GFS file system, cluster1 must already exist with the nodes added; otherwise the system reports that the clvmd service does not exist.
Node1 Configuration
Modify host name and hosts file
[root@localhost ~]# vim /etc/sysconfig/network
HOSTNAME=node1.a.com
[root@localhost ~]# hostname node1.a.com
[root@localhost ~]# vim /etc/hosts
192.168.100.10 node1.a.com node1
192.168.100.20 node2.a.com node2
Install software and enable services
[root@localhost ~]# yum -y install ricci
[root@localhost ~]# service ricci start
[root@localhost ~]# chkconfig ricci on
Node2 Configuration
Modify host name and hosts file
[root@localhost ~]# vim /etc/sysconfig/network
HOSTNAME=node2.a.com
[root@localhost ~]# hostname node2.a.com
[root@localhost ~]# vim /etc/hosts
192.168.100.10 node1.a.com node1
192.168.100.20 node2.a.com node2
Install software and enable services
[root@localhost ~]# yum -y install ricci
[root@localhost ~]# service ricci start
[root@localhost ~]# chkconfig ricci on
Luci settings
Install luci
[root@target ~]# yum -y install luci
Initialization
[root@target ~]# luci_admin init
Restart service
[root@target ~]# chkconfig luci on
[root@target ~]# service luci restart
Cluster settings
Create cluster1
Add Node
Create fence
Add fence to Node
Create a failover domain
Add resources
IP address
File system
Service
Add Service
This article is from the "Ziyi fenghou" blog