Basic Environment Preparation
Environment topology diagram
Linux Basic Service Settings
Disable iptables:
# /etc/init.d/iptables stop
# chkconfig iptables off
# chkconfig --list | grep iptables
Disable SELinux:
# setenforce 0
# vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Only targeted network daemons are protected.
#     strict - Full SELinux protection.
SELINUXTYPE=targeted
Disable NetworkManager:
# /etc/init.d/NetworkManager stop
# chkconfig NetworkManager off
# chkconfig --list | grep NetworkManager
Configure SSH key trust (run on both nodes):
# mkdir ~/.ssh
# chmod 700 ~/.ssh
# ssh-keygen -t rsa
(press Enter three times to accept the defaults)
# ssh-keygen -t dsa
(press Enter three times to accept the defaults)
On N1PMCSAP01, execute:
# cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
# ssh N1PMCSAP02 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
(answer "yes" to the host-key prompt, then enter N1PMCSAP02's password)
# scp ~/.ssh/authorized_keys N1PMCSAP02:~/.ssh/authorized_keys
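The key-exchange step above amounts to "merge every public key into authorized_keys and tighten permissions". A minimal sketch of that logic as a reusable function (the directory argument stands in for ~/.ssh on each node; the function name is ours, not from the original procedure):

```shell
#!/bin/sh
# Merge all public keys found in an .ssh-style directory into
# authorized_keys, then set the permissions sshd requires.
merge_pubkeys() {
    sshdir="$1"
    cat "$sshdir"/*.pub >> "$sshdir/authorized_keys"
    chmod 700 "$sshdir"
    chmod 600 "$sshdir/authorized_keys"
}
```

On the real nodes this runs locally on N1PMCSAP01, and the resulting authorized_keys is then copied to N1PMCSAP02 with scp as shown above.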
Storage Multipath Configuration
Refresh the IBM storage paths using the autoscan.sh script.
View the underlying storage WWIDs:

NAME      | WWID                              | Capacity | Paths
Dataqdisk | 360050763808101269800000000000003 | 5GB      | 4
Data      | 360050763808101269800000000000001 | 328GB    | 4
Create the multipath configuration file:
# vi /etc/multipath.conf
The default blacklist devnode entry is ^sd[a]; in this environment the root partition is on sdi, so change it to ^sd[i] to exclude the local disk from multipathing.
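A minimal /etc/multipath.conf along these lines would cover this setup (the aliases and WWIDs are taken from the table above; the blacklist entry follows the note about sdi, and any vendor-specific defaults sections are omitted here and should be checked against the storage vendor's recommendations):

```
blacklist {
    devnode "^sd[i]"
}
multipaths {
    multipath {
        wwid  360050763808101269800000000000003
        alias Dataqdisk
    }
    multipath {
        wwid  360050763808101269800000000000001
        alias Data
    }
}
```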
Restart the multipathd service after the configuration is complete:
# /etc/init.d/multipathd restart
Refresh the storage paths using the multipath -v2 command.
NIC bonding topology diagram
Modify the bonding module configuration:
# vi /etc/modules.conf
Create the NIC bonding configuration files.
N1PMCSAP01 bond configuration:
[root@N1PMCSAP01 /]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
# HWADDR=A0:36:9F:DA:DA:CD
TYPE=Ethernet
UUID=3ca5c4fe-44cd-4c50-b3f1-8082e1c1c94d
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
[root@N1PMCSAP01 /]# vi /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE=eth3
# HWADDR=A0:36:9F:DA:DA:CB
TYPE=Ethernet
UUID=1d47913a-b11c-432c-b70f-479a05da2c71
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
[root@N1PMCSAP01 /]# vi /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
# HWADDR=A0:36:9F:DA:DA:CC
TYPE=Ethernet
UUID=a099350a-8dfa-4d3f-b444-a08f9703cdc2
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=10.51.66.11
NETMASK=255.255.248.0
GATEWAY=10.51.71.254
N1PMCSAP02 bond configuration:
[root@N1PMCSAP02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
# HWADDR=A0:36:9F:DA:DA:D1
TYPE=Ethernet
UUID=8e0abf44-360a-4187-ab65-42859d789f57
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
[root@N1PMCSAP02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE=eth3
# HWADDR=A0:36:9F:DA:DA:B1
TYPE=Ethernet
UUID=d300f10b-0474-4229-b3a3-50d95e6056c8
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
[root@N1PMCSAP02 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
# HWADDR=A0:36:9F:DA:DA:D0
TYPE=Ethernet
UUID=2288f4e1-6743-4faa-abfb-e83ec4f9443c
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=10.51.66.12
NETMASK=255.255.248.0
GATEWAY=10.51.71.254
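The slave files on both nodes differ only in DEVICE (and the per-adapter HWADDR/UUID, which are omitted here because they come from the actual hardware), so they can be generated with a small helper. A sketch, with a directory argument standing in for /etc/sysconfig/network-scripts:

```shell
#!/bin/sh
# Write a bonding-slave ifcfg file for the given interface into the
# given directory (on a real node: /etc/sysconfig/network-scripts).
write_slave_ifcfg() {
    dir="$1"; dev="$2"
    cat > "$dir/ifcfg-$dev" <<EOF
DEVICE=$dev
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
EOF
}
```

For example, `write_slave_ifcfg /etc/sysconfig/network-scripts eth1` produces the eth1 slave file shown above, minus the commented HWADDR and UUID lines.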
Configure the hosts file on N1PMCSAP01 and N1PMCSAP02:
# vi /etc/hosts
RHEL Local Yum Repository Configuration
# more /etc/yum.repos.d/rhel-source.repo
[rhel_6_iso]
name=local ISO
baseurl=file:///media
gpgcheck=1
gpgkey=file:///media/RPM-GPG-KEY-redhat-release
[HighAvailability]
name=HighAvailability
baseurl=file:///media/HighAvailability
gpgcheck=1
gpgkey=file:///media/RPM-GPG-KEY-redhat-release
[LoadBalancer]
name=LoadBalancer
baseurl=file:///media/LoadBalancer
gpgcheck=1
gpgkey=file:///media/RPM-GPG-KEY-redhat-release
[ResilientStorage]
name=ResilientStorage
baseurl=file:///media/ResilientStorage
gpgcheck=1
gpgkey=file:///media/RPM-GPG-KEY-redhat-release
[ScalableFileSystem]
name=ScalableFileSystem
baseurl=file:///media/ScalableFileSystem
gpgcheck=1
gpgkey=file:///media/RPM-GPG-KEY-redhat-release
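The repo sections above are identical apart from the section name and path, so the file can be emitted in a loop. A sketch, assuming the DVD is mounted at /media and the add-on repo directory names match the RHEL 6 ISO layout:

```shell
#!/bin/sh
# Emit a local-ISO yum repo file covering the base repo and the four
# RHEL 6 add-on repo directories on the mounted DVD.
write_repo_file() {
    out="$1"
    {
        printf '[rhel_6_iso]\nname=local ISO\nbaseurl=file:///media\n'
        printf 'gpgcheck=1\ngpgkey=file:///media/RPM-GPG-KEY-redhat-release\n\n'
        for repo in HighAvailability LoadBalancer ResilientStorage ScalableFileSystem; do
            printf '[%s]\nname=%s\nbaseurl=file:///media/%s\n' "$repo" "$repo" "$repo"
            printf 'gpgcheck=1\ngpgkey=file:///media/RPM-GPG-KEY-redhat-release\n\n'
        done
    } > "$out"
}
```

Running `write_repo_file /etc/yum.repos.d/rhel-source.repo` reproduces the file listed above.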
[root@N1PMCSAP01 /]# pvdisplay
  connect() failed on local socket: No such file or directory
  Internal cluster locking initialisation failed.
  WARNING: Falling back to local file-based locking.
  Volume Groups with the clustered attribute will be inaccessible.
  --- Physical volume ---
  PV Name               /dev/sdi2
  VG Name               VolGroup
  PV Size               556.44 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              142448
  Free PE               0
  Allocated PE          142448
  PV UUID               0FSZ8Q-AY1W-EF2N-9VE2-RXZM-T3GV-U4RRQ2
  --- Physical volume ---
  PV Name               /dev/mapper/Data
  VG Name               vg_data
  PV Size               328.40 GiB / not usable 1.60 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              84070
  Free PE               0
  Allocated PE          84070
  PV UUID               Kjvd3t-t7v5-mulx-7kj6-oi2f-vn3r-qxn8tr
[root@N1PMCSAP01 /]# vgdisplay
  --- Volume group ---
  VG Name               VolGroup
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               556.44 GiB
  PE Size               4.00 MiB
  Total PE              142448
  Alloc PE / Size       142448 / 556.44 GiB
  Free  PE / Size       0 / 0
  VG UUID               6Q2TD7-AXWX-4K4K-8VY6-NGRS-IIDP-PEMPCU
  --- Volume group ---
  VG Name               vg_data
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               328.40 GiB
  PE Size               4.00 MiB
  Total PE              84070
  Alloc PE / Size       84070 / 328.40 GiB
  Free  PE / Size       0 / 0
  VG UUID               gfmy0o-qcmq-pkt4-zf1i-ykpu-6c2i-juosm2
[root@N1PMCSAP01 /]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg_data/lv_data
  LV Name                lv_data
  VG Name                vg_data
  LV UUID                1amjnu-8unc-mmgb-7s7n-p0wg-eeoj-pxrhv6
  LV Write Access        read/write
  LV Creation host, time N1PMCSAP01, 2017-05-26 11:23:04 -0400
  LV Status              available
  # open                 1
  LV Size                328.40 GiB
  Current LE             84070
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5
RHCS Software Component Installation
# yum -y install cman ricci rgmanager luci
Set the ricci user password:
# passwd ricci
ricci password: redhat
RHCS Cluster Setup via the Luci Graphical Interface
Log in with a browser at https://10.51.56.11:8084/
Username root, password redhat (the default password has not been changed).
Click Manage Clusters in the top left corner to create the cluster.
Node Name       | Node ID | Votes | Ricci User | Ricci Password | Hostname
N1pmcsap01-priv | 1       | 1     | ricci      | redhat         | N1pmcsap01-priv
N1pmcsap02-priv | 2       | 1     | ricci      | redhat         | N1pmcsap02-priv
Fence Device Configuration
Because the cluster has only two nodes, a split-brain situation is possible. When a host fails, the fence mechanism is used to arbitrate which host is ejected from the cluster. Fencing best practice is to use the motherboard's integrated management port: IBM calls it the IMM module, HPE calls it iLO, and Dell calls it iDRAC. Its purpose is to force the failed server to power-cycle and POST again, thereby releasing its resources. In the figure, the RJ45 network port in the upper left corner is the IMM port.
N1PMCSAP01 fence device parameters: user name USERID, password PASSW0RD.
N1PMCSAP02 fence device parameters: user name USERID, password PASSW0RD.
Failover Domain Configuration
Cluster Resource Configuration
In general, an application service should contain the resources it depends on, such as an IP address, storage media, and a service script (the application itself).
In the cluster resource configuration you define the attribute parameters of each resource. Here 10.51.66.1 is the IP resource, the IP address under which the clustered application is reached.
lv_data is the clustered disk resource; it defines the mount point of the disk and the location of the physical block device.
mcscluster is the application startup script resource; it defines the location of the startup script.
Service Groups Configuration
A service group defines a set of applications, including all the resources the application requires and their start priority. Within it the administrator can manually switch the host on which the service runs and set the recovery policy.
To prevent nodes from fencing each other back and forth while the cluster forms, the post-join delay is configured here as 3 seconds.
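In cluster.conf this setting corresponds to the fence_daemon element (a fragment sketched from the value described above; in the Luci interface it is set under the cluster's Fence Daemon properties):

```
<fence_daemon post_join_delay="3"/>
```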
Viewing the Cluster Configuration File
Finally, the complete cluster configuration can be viewed with:
[root@N1PMCSAP01 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="1" name="N1pmcsap">
<clusternodes>
<clusternode name="N1pmcsap01-priv" nodeid="1">
<fence>
<method name="N1pmcsap01_method">
<device name="N1pmcsap01_fd"/>
</method>
</fence>
</clusternode>
<clusternode name="N1pmcsap02-priv" nodeid="2">
<fence>
<method name="N1pmcsap02_method">
<device name="N1pmcsap02_fd"/>
</method>
</fence>
</clusternode>
</clusternodes>
<cman expected_votes="1" two_node="1"/>
<rm>
<failoverdomains>
<failoverdomain name="N1PMCSAP-FD" nofailback="1" ordered="1">
<failoverdomainnode name="N1pmcsap01-priv" priority="1"/>
<failoverdomainnode name="N1pmcsap02-priv" priority="2"/>
</failoverdomain>
</failoverdomains>
<resources>
<ip address="10.51.66.1" sleeptime="10"/>
<fs device="/dev/vg_data/lv_data" force_unmount="1" fsid="25430" mountpoint="/u01" name="lv_data" self_fence="1"/>
<script file="/home/mcs/cluster/mcscluster" name="mcscluster"/>
</resources>
<service domain="N1PMCSAP-FD" name="APP" recovery="disable">
<fs ref="lv_data">
<ip ref="10.51.66.1"/>
</fs>
<script ref="mcscluster"/>
</service>
</rm>
<fencedevices>
<fencedevice agent="fence_imm" ipaddr="10.51.188.177" login="USERID" name="n1pmcsap01_fd" passwd="PASSW0RD"/>
<fencedevice agent="fence_imm" ipaddr="10.51.188.178" login="USERID" name="n1pmcsap02_fd" passwd="PASSW0RD"/>
</fencedevices>
</cluster>
This configuration file is identical on both hosts, N1PMCSAP01 and N1PMCSAP02.
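One quick way to confirm the two copies really are identical is to compare checksums across the nodes. A sketch, assuming the passwordless SSH trust configured earlier (the function name is ours):

```shell
#!/bin/sh
# Compare a local file's md5 checksum with the same file on a remote
# node over ssh; succeeds only when the two checksums match.
same_md5() {
    local_file="$1"; remote_host="$2"; remote_file="$3"
    a=$(md5sum "$local_file" | awk '{print $1}')
    b=$(ssh "$remote_host" md5sum "$remote_file" | awk '{print $1}')
    [ "$a" = "$b" ]
}
# e.g. same_md5 /etc/cluster/cluster.conf N1PMCSAP02 /etc/cluster/cluster.conf
```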
RHCS Cluster Usage: Viewing Cluster Status
View the running status with the clustat command:
[root@N1PMCSAP01 /]# clustat -l
Cluster Status for N1pmcsap @ Fri May 26 14:13:25 2017
Member Status: Quorate

 Member Name                        ID   Status
 ------ ----                        ---- ------
 N1pmcsap01-priv                    1    Online, Local, rgmanager
 N1pmcsap02-priv                    2    Online, rgmanager

Service Information
------- -----------
Service Name      : service:APP
  Current State   : started (112)
  Flags           : none (0)
  Owner           : N1pmcsap01-priv
  Last Owner      : N1pmcsap01-priv
  Last Transition : Fri May 26 13:55:45 2017
Current State shows whether the service is running; Owner is the node currently running the service.
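For scripting a simple health check, the Owner field can be pulled out of clustat output with awk. A sketch against the field layout shown above (the function name is ours):

```shell
#!/bin/sh
# Read `clustat -l` output on stdin and print the value of the first
# "Owner" line (skipping "Last Owner"), trimmed of whitespace.
service_owner() {
    awk -F: '/^[[:space:]]*Owner/ {
        gsub(/^[[:space:]]+|[[:space:]]+$/, "", $2); print $2; exit
    }'
}
# e.g. clustat -l | service_owner
```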
[root@N1PMCSAP01 /]# clusvcadm -r APP (service group name) -m n1pmcsap02 (host member)
# Manually switch the service from node 1 to node 2
[root@N1PMCSAP01 /]# clusvcadm -d APP (service group name)
# Manually stop the APP service
[root@N1PMCSAP01 /]# clusvcadm -e APP (service group name)
# Manually enable the APP service
[root@N1PMCSAP01 /]# clusvcadm -M APP (service group name) -m n1pmcsap02 (host member)
# Set the owner preference to N1PMCSAP02
Automatic Cluster Failover
When server hardware fails and the heartbeat network becomes unreachable, the RHCS cluster automatically restarts the failed server and moves its resources to the healthy server. Once the hardware is repaired, the administrator can choose whether to switch the service back.
Note:
Do not unplug both servers' heartbeat networks at the same time; this can cause a split-brain condition.
This article is from the "Wei Weiye IT Technology" blog; please keep this source: http://popeyeywy.blog.51cto.com/745223/1930942
Create a Linux high-availability cluster using RHCS