HA High-Availability Cluster on Red Hat 6: RHCS

Environment: two RHEL 6.5 virtual machines running on a RHEL 7 physical host, with:
1. iptables disabled
2. SELinux disabled (a sketch of the commands for both is given just below)
The two 6.5 VMs have the IPs 192.168.157.111 and 192.168.157.222 (you can tell them apart by their hostnames).
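Turning both off is assumed throughout; a minimal sketch with the standard RHEL 6 tooling (adapt to your own security policy) looks like this on each node:

/etc/init.d/iptables stop        # stop the firewall now
chkconfig iptables off           # keep it off after reboots
setenforce 0                     # put SELinux into permissive mode for the running system
# also set SELINUX=disabled in /etc/selinux/config so it stays off after a reboot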

Notes:
1. The Red Hat High Availability Add-On supports at most 16 cluster nodes.
2. luci is used as the configuration GUI.
3. The add-on does not support NetworkManager on cluster nodes. If NetworkManager is installed on a cluster node, you should remove it (see the sketch after this list).
4. Cluster nodes communicate with each other over multicast. Every network switch and associated networking device used by the Red Hat High Availability Add-On must therefore have multicast enabled and support IGMP (Internet Group Management Protocol).
5. Red Hat Enterprise Linux 6 replaces ccsd with ricci, so ricci must be running on every cluster node.
6. Starting with Red Hat Enterprise Linux 6.1, propagating an updated cluster configuration from any node through ricci requires a password. After installing ricci, set a password for the ricci user, as root, with the passwd ricci command.
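Note 3 only says to remove NetworkManager; on a RHEL 6 node that usually amounts to something like the following sketch (on a minimal install the package may not be present at all):

service NetworkManager stop      # stop it if it is running
chkconfig NetworkManager off     # make sure it cannot come back at boot
yum remove NetworkManager -y     # remove the package, as the add-on requires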



First, adjust the yum repository configuration:

[base]
name=Instructor Server Repository
baseurl=http://localhost/pub/6.5
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[HA]
name=Instructor HA Repository
baseurl=http://localhost/pub/6.5/HighAvailability
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release


[LoadBalancer]
name=Instructor LoadBalancer Repository
baseurl=http://localhost/pub/6.5/LoadBalancer
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[ResilientStorage]
name=Instructor ResilientStorage Repository
baseurl=http://localhost/pub/6.5/ResilientStorage
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[ScalableFileSystem]
name=Instructor ScalableFileSystem Repository
baseurl=http://localhost/pub/6.5/ScalableFileSystem
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
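These sections are assumed to live in a file under /etc/yum.repos.d/ on both nodes (the file name rhel6.repo below is only an example). After saving the file, refreshing the yum cache is an easy way to confirm that the HighAvailability and other repositories resolve:

[root@server111 Desktop]# vi /etc/yum.repos.d/rhel6.repo     # paste the repository sections shown above
[root@server111 Desktop]# yum clean all && yum repolist      # rebuild the cache and check that the repositories appear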

Next, install the software. Pay attention to which host each command is run on (check the prompt):

[root@server222 Desktop]# yum install ricci -y
[root@server111 Desktop]# yum install luci -y
[root@server111 Desktop]# yum install ricci -y


[root@server111 Desktop]# passwd ricci # set a password for the ricci user
Changing password for user ricci.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@server111 Desktop]# /etc/init.d/ricci start # start ricci (and enable it at boot)
Starting oddjobd: [ OK ]
generating SSL certificates... done
Generating NSS database... done
Starting ricci: [ OK ]

[root@server222 Desktop]# passwd ricci # set the ricci password here as well
Changing password for user ricci.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@server222 Desktop]# /etc/init.d/ricci start # start ricci (and enable it at boot)
Starting oddjobd: [ OK ]
generating SSL certificates... done
Generating NSS database... done
Starting ricci: [ OK ]
[root@server111 Desktop]# /etc/init.d/luci start # start luci (and enable it at boot)
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `server111.example.com' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
(none suitable found, you can still do it manually as mentioned above)


Generating a 2048 bit RSA private key
writing new private key to '/var/lib/luci/certs/host.pem'
Start luci... [ OK ]
Point your web browser to https://server111.example.com:8084 (or equivalent) to access luci
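The comments above mention enabling the services at boot, but the transcript only shows starting them. On RHEL 6 that would normally be done with chkconfig, roughly as follows:

[root@server111 Desktop]# chkconfig ricci on     # ricci at boot on node 111
[root@server111 Desktop]# chkconfig luci on      # luci at boot on the management host
[root@server222 Desktop]# chkconfig ricci on     # ricci at boot on node 222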


Click the URL printed above to open the web configuration interface. If the page does not open, you are missing local name resolution; add the entries to /etc/hosts, for example as shown below.
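For reference, a minimal pair of /etc/hosts entries that would make the luci URL resolve (the server222 name is inferred from the shell prompts and is an assumption; use your own hostnames):

192.168.157.111   server111.example.com   server111
192.168.157.222   server222.example.com   server222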

The browser will ask you to accept (download) the self-signed certificate.


You then reach the luci main page.

Log in with the luci host's local root user and its password.
Go to the cluster page and click Create; in the dialog that appears, fill in the cluster configuration.


Create the cluster:


Wait for creation to finish; both hosts reboot automatically, and the result looks like this:


After the cluster has been created successfully:
[root@server222 ~]# cd /etc/cluster/

[root@server222 cluster]# ls
cluster.conf cman-notify.d

[root@server222 cluster]# cman_tool status
Version: 6.2.0
Config Version: 1
Cluster Name: forsaken
Cluster Id: 7919
Cluster Member: Yes
Cluster Generation: 8
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1
Active subsystems: 9
Flags: 2node
Ports Bound: 0 11 177
Node name: 192.168.157.222
Node ID: 2
Multicast addresses: 239.192.30.14
Node addresses: 192.168.157.222

[root@server111 ~]# cman_tool status
Version: 6.2.0
Config Version: 1
Cluster Name: forsaken
Cluster Id: 7919
Cluster Member: Yes
Cluster Generation: 8
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1
Active subsystems: 7
Flags: 2node
Ports Bound: 0
Node name: 192.168.157.111
Node ID: 1
Multicast addresses: 239.192.30.14
Node addresses: 192.168.157.111

[root@server111 ~]# clustat
Cluster Status for forsaken @ Tue May 19 22:01:06 2015
Member Status: Quorate

Member Name                             ID   Status
------ ---- ---- ------
192.168.157.111 1 Online,Local
192.168.157.222 2 Online

[root@server222 cluster]# clustat
Cluster Status for forsaken @ Tue May 19 22:01:23 2015
Member Status: Quorate

Member Name ID Status
------ ---- ---- ------
192.168.157.111 1 Online
192.168.157.222 2 Online,Local


Add a fence mechanism for the nodes
Note: tramisu is my physical host; this step must be carried out on the physical host.

[root@tramisu ~]# yum install fence-virtd.x86_64 fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64 fence-virtd-serial.x86_64 -y
[root@tramisu Desktop]# fence_virtd -c
Module search path [/usr/lib64/fence-virt]:

Available backends:
libvirt 0.1

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:
No listener module named multicast found!
Use this value anyway [y/N]? y

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

Using ipv4 as family.

Multicast IP Port [1229]:

Setting a preferred interface causes fence_virtd to listen only
on that interface. Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [virbr0]: br0 # my physical host uses the br0 bridge to talk to the VMs; set this according to your own setup

The key file is the shared key information which is used to
authenticate fencing requests. The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]:

Configuration complete.

=== Begin Configuration ===
backends {
	libvirt {
		uri = "qemu:///system";
	}
}

listeners {
	multicast {
		port = "1229";
		family = "ipv4";
		interface = "br0";
		address = "225.0.0.12";                      # multicast address
		key_file = "/etc/cluster/fence_xvm.key";     # path to the shared key
	}
}

fence_virtd {
	module_path = "/usr/lib64/fence-virt";
	backend = "libvirt";
	listener = "multicast";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
[root@tramisu Desktop]# mkdir /etc/cluster
[root@tramisu Desktop]# fence_virtd -c^C
[root@tramisu Desktop]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1 # generate a 128-byte random key with dd
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.000455837 s, 781 kB/s
[root@tramisu ~]# ll /etc/cluster/fence_xvm.key # the generated key
-rw-r--r-- 1 root root 128 May 19 22:13 /etc/cluster/fence_xvm.key
[root@tramisu ~]# scp /etc/cluster/fence_xvm.key 192.168.157.111:/etc/cluster/ # copy the key to both nodes; note the target directory
The authenticity of host '192.168.157.111 (192.168.157.111)' can't be established.
RSA key fingerprint is 80:50:bb:dd:40:27:26:66:4c:6e:20:5f:82:3f:7c:ab.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.157.111' (RSA) to the list of known hosts.
root@192.168.157.111's password:
fence_xvm.key                                 100%  128     0.1KB/s   00:00
[root@tramisu ~]# scp /etc/cluster/fence_xvm.key 192.168.157.222:/etc/cluster/
The authenticity of host '192.168.157.222 (192.168.157.222)' can't be established.
RSA key fingerprint is 28:be:4f:5a:37:4a:a8:80:37:6e:18:c5:93:84:1d:67.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.157.222' (RSA) to the list of known hosts.
root@192.168.157.222's password:
fence_xvm.key                                 100%  128     0.1KB/s   00:00
[root@tramisu ~]# systemctl restart fence_virtd.service # restart the service
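Before going back to luci it is worth checking, from a cluster node, that fence_virtd answers over multicast with the shared key. The fence_xvm client can ask the host for a listing of its guests; a minimal smoke test (assuming the fence-virt client package is present on the node):

[root@server111 ~]# fence_xvm -o list      # should print the host's virtual machines and their UUIDs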
Go back to the luci web interface and configure the fence device:


When the configuration is done it looks like this:


Go back to each node and configure it:


The second step is configured in detail as follows:


I recommend filling in the UUID. Both nodes get the same configuration as above (not repeated here); only the UUID you enter differs.
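The domain UUIDs that go into the fence device settings can be read on the physical host with virsh; a short sketch (the guest names server111 and server222 are assumptions, use whatever virsh list shows for your guests):

[root@tramisu ~]# virsh list --all            # find the libvirt names of the two guests
[root@tramisu ~]# virsh domuuid server111     # UUID for node 111's fence method
[root@tramisu ~]# virsh domuuid server222     # UUID for node 222's fence method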

When the configuration is done:


[root@server111 ~]# cat /etc/cluster/cluster.conf # see how the file has changed
<?xml version="1.0"?>
<cluster config_version="6" name="forsaken">
	<clusternodes>
		<clusternode name="192.168.157.111" nodeid="1">
			<fence>
				<method name="Method">
					<device domain="c004f9a6-c13c-4e85-837f-9d640359b08b" name="forsaken-fence"/>
				</method>
			</fence>
		</clusternode>
		<clusternode name="192.168.157.222" nodeid="2">
			<fence>
				<method name="Method">
					<device domain="19a29893-08a1-48e5-a8bf-688adb8a6eef" name="forsaken-fence"/>
				</method>
			</fence>
		</clusternode>
	</clusternodes>
	<cman expected_votes="1" two_node="1"/>
	<fencedevices>
		<fencedevice agent="fence_xvm" name="forsaken-fence"/>
	</fencedevices>
</cluster>

[root@server222 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="6" name="forsaken">
	<clusternodes>
		<clusternode name="192.168.157.111" nodeid="1">
			<fence>
				<method name="Method">
					<device domain="c004f9a6-c13c-4e85-837f-9d640359b08b" name="forsaken-fence"/>
				</method>
			</fence>
		</clusternode>
		<clusternode name="192.168.157.222" nodeid="2">
			<fence>
				<method name="Method">
					<device domain="19a29893-08a1-48e5-a8bf-688adb8a6eef" name="forsaken-fence"/>
				</method>
			</fence>
		</clusternode>
	</clusternodes>
	<cman expected_votes="1" two_node="1"/>
	<fencedevices>
		<fencedevice agent="fence_xvm" name="forsaken-fence"/>
	</fencedevices>
</cluster>

You can see that the file contents on the two nodes are identical.
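luci keeps the copies on the two nodes in sync automatically. If cluster.conf is ever edited by hand instead, RHEL 6 also ships tools to validate the file and push the new version out; a minimal sketch:

[root@server111 ~]# ccs_config_validate      # check the XML against the cluster schema
[root@server111 ~]# cman_tool version -r     # propagate the updated configuration (uses ricci and asks for its password)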
[root@server222 ~]# clustat # check node status; both nodes should report the same thing
Cluster Status for forsaken @ Tue May 19 22:31:31 2015
Member Status: Quorate

Member Name ID Status
------ ---- ---- ------
192.168.157.111 1 Online
192.168.157.222 2 Online,Local
At this point the fence mechanism on both nodes is fully configured, and we can run a few tests to verify that it works.
[root@server222 ~]# fence_node 192.168.157.111 # fence node 111 with this command
fence 192.168.157.111 success
[root@server222 ~]# clustat
Cluster Status for forsaken @ Tue May 19 22:32:23 2015
Member Status: Quorate

Member Name ID Status
------ ---- ---- ------
192.168.157.111 1 Offline # node 111 has been fenced; if you watch the 111 host now it should be rebooting, which proves the fence mechanism works
192.168.157.222 2 Online, Local

[root@server222 ~]# clustat # after node 111 reboots it is automatically rejoined to the cluster; node 222 now acts as the primary and node 111 as the standby
Cluster Status for forsaken @ Tue May 19 22:35:28 2015
Member Status: Quorate

Member Name ID Status
------ ---- ---- ------
192.168.157.111 1 Online
192.168.157.222 2 Online,Local

[root@server222 ~]# echo c > /proc/sysrq-trigger # you can also crash the kernel like this, or take the NIC down manually, to test: the broken node reboots automatically and rejoins as a standby. Try it yourself; I will not go into more detail here.


the end

Editor's note: this post ends here, but there is a lot more to HA. I will continue from this point in upcoming posts, so please stay tuned. If you want to follow along, do not delete these two virtual machines after this exercise; later posts will build on this setup. Thanks.
by: forsaken627
