HA high-availability cluster (RHCS) under Red Hat 6

Environment: two RHEL 6.5 virtual machines running on a RHEL 7 physical host, with:
1. iptables disabled
2. SELinux disabled
The two 6.5 virtual machines use the IP addresses 192.168.157.111 and 192.168.157.222 (you can tell them apart by their hostnames, server111 and server222).
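For reference, one way to put both nodes into this state is sketched below; these exact commands are not part of the original walkthrough, so adapt them to your own setup.

[root@server111 ~]# service iptables stop && chkconfig iptables off                 # stop the firewall now and keep it off across reboots
[root@server111 ~]# setenforce 0                                                    # put SELinux into permissive mode for the current boot
[root@server111 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # disable SELinux permanently (takes effect after a reboot)
(repeat the same commands on server222)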

Note:
1. The Red Hat High Availability Add-On supports a maximum of 16 cluster nodes.
2. luci is used to configure the cluster through a GUI.
3. The Add-On does not support NetworkManager on cluster nodes. If NetworkManager is installed on a cluster node, remove or disable it.
4. Nodes in the cluster communicate with each other using multicast addresses. Therefore, every network switch and the associated networking equipment used by the Red Hat High Availability Add-On must be configured to enable multicast addresses and to support IGMP (Internet Group Management Protocol). Confirm that each network switch and associated network device supports multicast addresses and IGMP.
5. In Red Hat Enterprise Linux 6, ricci replaces ccsd, so ricci must be running on every cluster node.
6. Starting with Red Hat Enterprise Linux 6.1, you must enter a password when using ricci to propagate an updated cluster configuration from any node. After installing ricci, set a password for the ricci user with the passwd ricci command (run as root).
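As an illustration of note 3, NetworkManager can be disabled or removed on each node roughly as follows (a sketch, not part of the original steps):

[root@server111 ~]# chkconfig NetworkManager off && service NetworkManager stop    # stop NetworkManager and keep it off
[root@server111 ~]# yum remove NetworkManager -y                                   # or remove it entirely, as the note suggests
[root@server111 ~]# chkconfig network on && service network start                  # manage the interfaces with the classic network service instead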



First, modify the yum source (the repository configuration):

[base]
name=Instructor Server Repository
baseurl=http://localhost/pub/6.5
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[HA]
name=Instructor HA Repository
baseurl=http://localhost/pub/6.5/HighAvailability
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[LoadBalancer]
name=Instructor LoadBalancer Repository
baseurl=http://localhost/pub/6.5/LoadBalancer
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[ResilientStorage]
name=Instructor ResilientStorage Repository
baseurl=http://localhost/pub/6.5/ResilientStorage
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[ScalableFileSystem]
name=Instructor ScalableFileSystem Repository
baseurl=http://localhost/pub/6.5/ScalableFileSystem
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
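After saving the repository file, it is worth refreshing the yum metadata so the new sections are picked up (not part of the original steps):

[root@server111 Desktop]# yum clean all    # drop cached metadata from the previous configuration
[root@server111 Desktop]# yum repolist     # confirm that base, HA, LoadBalancer, ResilientStorage and ScalableFileSystem are listed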

Next, install the software. Pay attention to which host each command is run on.

[root@server222 Desktop]# yum install ricci -y
[root@server111 Desktop]# yum install luci -y
[root@server111 Desktop]# yum install ricci -y


[root@server111 Desktop]# passwd ricci    # set a password for the ricci user
Changing password for user ricci.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@server111 Desktop]# /etc/init.d/ricci start    # start ricci (and enable it at boot)
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]

[root@server222 Desktop]# passwd ricci    # set a password for ricci, as above
Changing password for user ricci.
New password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.
[root@server222 Desktop]# /etc/init.d/ricci start    # start ricci (and enable it at boot)
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@server111 Desktop]# /etc/init.d/luci start    # start luci (and enable it at boot)
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to 'server111.example.com' address, to the configuration of self-managed certificate '/var/lib/luci/etc/cacert.config' (you can change them by editing '/var/lib/luci/etc/cacert.config', removing the generated certificate '/var/lib/luci/certs/host.pem' and restarting luci):
(none suitable found, you can still do it manually as mentioned above)

Generating a 2048 bit RSA private key
writing new private key to '/var/lib/luci/certs/host.pem'
Start luci...                                              [  OK  ]
Point your web browser to https://server111.example.com:8084 (or equivalent) to access luci
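The init scripts above start the daemons only for the current boot. To have them come back after a reboot (not shown in the original output), something like this should work:

[root@server111 Desktop]# chkconfig ricci on    # on both nodes
[root@server222 Desktop]# chkconfig ricci on
[root@server111 Desktop]# chkconfig luci on     # luci runs only on the management node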


Open the link above to reach the luci web configuration interface, as shown in the figure. If the page cannot be opened, the hostname is not resolved locally; add entries to /etc/hosts, as shown below.
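For example, entries along these lines in /etc/hosts would provide the resolution (server111.example.com comes from the luci output above; the server222 name is assumed to follow the same pattern):

192.168.157.111   server111.example.com   server111
192.168.157.222   server222.example.com   server222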

The browser requires you to download (accept) the self-signed certificate.


The main interface is shown in the figure.

Log in to luci as the local root user with the root password.
Go to the cluster interface and click Create; the related configuration is shown in the figure.


Create a cluster


The two hosts are restarted automatically after the cluster is created. The result is as follows:


After successful creation:
[root@server222 ~]# cd /etc/cluster/

[root@server222 cluster]# ls
cluster.conf  cman-notify.d

[root@server222 cluster]# cman_tool status
Version: 6.2.0
Config Version: 1
Cluster Name: forsaken
Cluster Id: 7919
Cluster Member: Yes
Cluster Generation: 8
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1
Active subsystems: 9
Flags: 2node
Ports Bound: 0 11 177
Node name: 192.168.157.222
Node ID: 2
Multicast addresses: 239.192.30.14
Node addresses: 192.168.157.222

[root@server111 ~]# cman_tool status
Version: 6.2.0
Config Version: 1
Cluster Name: forsaken
Cluster Id: 7919
Cluster Member: Yes
Cluster Generation: 8
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1
Active subsystems: 7
Flags: 2node
Ports Bound: 0
Node name: 192.168.157.111
Node ID: 1
Multicast addresses: 239.192.30.14
Node addresses: 192.168.157.111

[root@server111 ~]# clustat
Cluster Status for forsaken @ Tue May 19 22:01:06 2015
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 192.168.157.111                             1 Online, Local
 192.168.157.222                             2 Online

[root@server222 cluster]# clustat
Cluster Status for forsaken @ Tue May 19 22:01:23 2015
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 192.168.157.111                             1 Online
 192.168.157.222                             2 Online, Local
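Because the nodes reboot during cluster creation and again during the fencing tests below, it can help to make sure the cluster services come back automatically. This is not shown in the original post; on RHEL 6 it would look roughly like this (run on both nodes):

[root@server111 ~]# chkconfig cman on         # membership/quorum layer
[root@server111 ~]# chkconfig rgmanager on    # resource group manager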


Add a fence mechanism to the nodes
Note: tramisu is my physical machine (the virtualization host). This step must be performed on the physical machine.

[root@tramisu ~]# yum install fence-virtd.x86_64 fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64 fence-virtd-serial.x86_64 -y
[root@tramisu Desktop]# fence_virtd -c
Module search path [/usr/lib64/fence-virt]:

Available backends:
    libvirt 0.1

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:
No listener module named multicast found!
Use this value anyway [y/N]? y

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

Using ipv4 as family.

Multicast IP Port [1229]:

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [virbr0]: br0    # my physical machine communicates with the virtual machines over br0; set this according to your own environment

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]:

Configuration complete.

=== Begin Configuration ===
backends {
        libvirt {
                uri = "qemu:///system";
        }

}

listeners {
        multicast {
                port = "1229";
                family = "ipv4";
                interface = "br0";
                address = "225.0.0.12";                  # multicast address
                key_file = "/etc/cluster/fence_xvm.key"; # path of the shared key
        }

}

fence_virtd {
        module_path = "/usr/lib64/fence-virt";
        backend = "libvirt";
        listener = "multicast";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
[root@tramisu Desktop]# mkdir /etc/cluster
[root@tramisu Desktop]# fence_virtd -c ^C
[root@tramisu Desktop]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1    # use dd to generate a random key
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.000455837 s, 781 kB/s
[root@tramisu ~]# ll /etc/cluster/fence_xvm.key    # the generated key
-rw-r--r-- 1 root root 128 May 19 22:13 /etc/cluster/fence_xvm.key
[root@tramisu ~]# scp /etc/cluster/fence_xvm.key 192.168.157.111:/etc/cluster/    # copy the key to both nodes; note the target directory
The authenticity of host '192.168.157.111 (192.168.157.111)' can't be established.
RSA key fingerprint is 80:50:bb:dd:40:27:26:66:4c:6e:20:5f:82:3f:7c:ab.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.157.111' (RSA) to the list of known hosts.
root@192.168.157.111's password:
fence_xvm.key                                 100%  128     0.1KB/s   00:00
[root@tramisu ~]# scp /etc/cluster/fence_xvm.key 192.168.157.222:/etc/cluster/
The authenticity of host '192.168.157.222 (192.168.157.222)' can't be established.
RSA key fingerprint is 28:be:4f:5a:37:4a:a8:80:37:6e:18:c5:93:84:1d:67.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.157.222' (RSA) to the list of known hosts.
root@192.168.157.222's password:
fence_xvm.key                                 100%  128     0.1KB/s   00:00
[root@tramisu ~]# systemctl restart fence_virtd.service    # restart the service (the physical host runs RHEL 7, hence systemctl)
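Once the key has been copied and fence_virtd restarted, the multicast path can optionally be checked from a cluster node. This check is not in the original post; it assumes the fence-virt package (which provides the fence_xvm agent) is present on the nodes:

[root@server111 ~]# yum install fence-virt -y    # only if fence_xvm is not already installed on the node
[root@server111 ~]# fence_xvm -o list            # should print the libvirt domains (and their UUIDs) that fence_virtd on the physical host can see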
Return to the web page of the luci host to set up the fence device, as shown in the figure.


As shown in the figure.


Then go back to each node's page for configuration, as shown in the figure.


Step 2: the specific settings are shown in the figure.


We recommend entering the domain UUID. Both nodes are configured in the same way (the steps are not repeated here); only the UUID differs between them.
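The domain UUIDs can be read on the physical machine with virsh; a quick sketch (the domain names server111 and server222 are assumptions, use whatever names virsh list reports):

[root@tramisu ~]# virsh list --all           # find the libvirt domain names of the two nodes
[root@tramisu ~]# virsh domuuid server111    # UUID to enter for the first node's fence instance
[root@tramisu ~]# virsh domuuid server222    # UUID to enter for the second node's fence instance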

As shown in the figure.


[root@server111 ~]# cat /etc/cluster/cluster.conf    # view the changes to the file
<?xml version="1.0"?>
<cluster config_version="6" name="forsaken">
        <clusternodes>
                <clusternode name="192.168.157.111" nodeid="1">
                        <fence>
                                <method name="Method">
                                        <device domain="c004f9a6-c13c-4e85-837f-9d640359b08b" name="forsaken-fence"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="192.168.157.222" nodeid="2">
                        <fence>
                                <method name="Method">
                                        <device domain="19a29893-08a1-48e5-a8bf-688adb8a6eef" name="forsaken-fence"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices>
                <fencedevice agent="fence_xvm" name="forsaken-fence"/>
        </fencedevices>
</cluster>

[root@server222 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="6" name="forsaken">
        <clusternodes>
                <clusternode name="192.168.157.111" nodeid="1">
                        <fence>
                                <method name="Method">
                                        <device domain="c004f9a6-c13c-4e85-837f-9d640359b08b" name="forsaken-fence"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="192.168.157.222" nodeid="2">
                        <fence>
                                <method name="Method">
                                        <device domain="19a29893-08a1-48e5-a8bf-688adb8a6eef" name="forsaken-fence"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices>
                <fencedevice agent="fence_xvm" name="forsaken-fence"/>
        </fencedevices>
</cluster>

We can see that the file content on the two nodes is exactly the same.
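As an aside (not covered in the original post), if you ever edit cluster.conf by hand rather than through luci, the usual RHEL 6 approach is to increase config_version and let the cluster distribute the file, roughly like this:

[root@server111 ~]# vi /etc/cluster/cluster.conf    # make the change and bump config_version
[root@server111 ~]# cman_tool version -r            # propagate the updated configuration to the other node via ricci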
[root@server222 ~]# clustat    # check the node status; both nodes should show the same view
Cluster Status for forsaken @ Tue May 19 22:31:31 2015
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 192.168.157.111                             1 Online
 192.168.157.222                             2 Online, Local
At this point the fence mechanism is fully set up on both nodes. We can run a few experiments to check that it works correctly.
[root@server222 ~]# fence_node 192.168.157.111    # fence (cut off) the 111 node from the command line
fence 192.168.157.111 success
[root@server222 ~]# clustat
Cluster Status for forsaken @ Tue May 19 22:32:23 2015
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 192.168.157.111                             1 Offline    # the 111 node has been fenced; watching that host shows it rebooting, which proves the fence mechanism works
 192.168.157.222                             2 Online, Local

[root@server222 ~]# clustat    # once the 111 node has rebooted, it rejoins the cluster automatically; 222 now acts as the active node and 111 as the standby
Cluster Status for forsaken @ Tue May 19 22:35:28 2015
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 192.168.157.111                             1 Online
 192.168.157.222                             2 Online, Local

[root@server222 ~]# echo c > /proc/sysrq-trigger    # you can also crash the kernel like this, take down the NIC, or break the node in other ways; the failed node is fenced, reboots automatically, and rejoins as the backup. Try these experiments yourself.
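As a concrete example of the other failure tests mentioned above (a sketch; eth0 is an assumption, use whichever interface carries the cluster traffic):

[root@server111 ~]# ifdown eth0    # cut the 111 node off the network; the surviving node should fence it, after which it reboots and rejoins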


The end

Editor's note: this post is complete, but there is much more to say about HA. I will continue the introduction in the next post, so please stay tuned. If you want to keep learning, do not delete these two virtual machines after this exercise; they will be used again later. Thank you.
By: forsaken627
