RHCS: a detailed description of high-availability clusters under Linux


1. RHCS: Red Hat Cluster Suite

RHCS essential packages: cman, rgmanager, system-config-cluster

2. RHCS cluster deployment, basic prerequisites:

2.1. Time synchronization among all hosts; using the NTP service is recommended;

2.2. Name resolution between the springboard host and each node; each host's resolved name must match the output of `uname -n`;

2.3. SSH key-based authentication between the springboard host and each node;

2.4. Configure yum on each node. (A combined setup sketch for prerequisites 2.1-2.3 follows this list.)
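A minimal sketch of prerequisites 2.1-2.3, run from the springboard host. It assumes root SSH access and that GW (1.1.1.88) can serve as the NTP reference; the hosts-file entries mirror the IP plan in section 3:

[root@GW ~]# cat >> /etc/hosts << EOF
1.1.1.18 node1.willow.com node1
1.1.1.19 node2.willow.com node2
1.1.1.20 node3.willow.com node3
1.1.1.88 GW.willow.com GW
EOF
[root@GW ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''                          # key pair for 2.3
[root@GW ~]# for i in {1..3}; do ssh-copy-id -i ~/.ssh/id_rsa.pub root@node$i; done
[root@GW ~]# for i in {1..3}; do scp /etc/hosts node$i:/etc/hosts; done        # name resolution for 2.2
[root@GW ~]# for i in {1..3}; do ssh node$i 'ntpdate GW.willow.com'; done      # time sync for 2.1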

3. This experiment uses three node hosts to build the RHCS cluster, with the GW host as the springboard; IP addresses are assigned as follows:

1.1.1.18 node1.willow.com node1

1.1.1.19 node2.willow.com node2

1.1.1.20 node3.willow.com node3

1.1.1.88 GW.willow.com GW # springboard host, managing node1, node2, and node3

4. From the GW.willow.com springboard host, install cman, rgmanager, and system-config-cluster on the three node hosts:

[root@GW ~]# for i in {1..3}; do ssh node$i 'yum install -y cman rgmanager system-config-cluster'; done

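A quick check, not in the original, to confirm the packages landed on every node (it assumes the same loop style as above):

[root@GW ~]# for i in {1..3}; do ssh node$i 'rpm -q cman rgmanager system-config-cluster'; done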

5. RHCS cluster service startup prerequisites:

5.1. Each cluster has a unique cluster name;

5.2. At least one fence device;

5.3. There should be at least three nodes; in a two-node scenario, the qdisk quorum disk is used as a tie-breaker (see the configuration sketch after this list);
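For reference, a hedged sketch of how a two-node cluster is commonly declared in /etc/cluster/cluster.conf: the two_node flag lets cman reach quorum with a single vote, and the quorumd element registers a qdisk tie-breaker (the label value is illustrative):

<cman two_node="1" expected_votes="1"/>
<quorumd interval="1" tko="10" votes="1" label="myqdisk"/>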

6. Create the cluster name, nodes, and fence device:

[root@node1 cluster]# system-config-cluster &

6.1. Create a new cluster name:

(Screenshot: http://s1.51cto.com/wyfs02/M02/86/1A/wKioL1e1DjDBjpCIAAFVOHUWqSQ283.jpg)

(Screenshot: http://s3.51cto.com/wyfs02/M02/86/1A/wKiom1e1Dz7zJG_IAADHoIqHk2Q971.jpg)

6.2. Add the 3 node hosts:

(Screenshot: http://s3.51cto.com/wyfs02/M02/86/1A/wKioL1e1Dz-QaDdDAAHD3yh1ZRM702.jpg)

(Screenshot: http://s3.51cto.com/wyfs02/M02/86/1A/wKioL1e1D0CjOcFaAAEy0cFalc8330.jpg)

6.3. Add a fence device:

(Screenshot: http://s5.51cto.com/wyfs02/M00/86/1A/wKiom1e1EDTic0V6AAGqW-V86nE892.jpg-wh_500x0-wm_3-wmp_4-s_4098894486.jpg)

6.4. Save the configuration via File -> Save; by default the cluster configuration file is saved to /etc/cluster/cluster.conf

(Screenshot: http://s5.51cto.com/wyfs02/M02/86/1A/wKiom1e1EDSCMEO4AAGv3G0pNio021.jpg-wh_500x0-wm_3-wmp_4-s_199756036.jpg)
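For orientation, a hedged sketch of roughly what /etc/cluster/cluster.conf contains after this step; the cluster name, config version, node names, and fence device follow the values used in this experiment, while the exact layout depends on the RHCS version:

<?xml version="1.0"?>
<cluster name="tcluster" config_version="2">
  <clusternodes>
    <clusternode name="node1.willow.com" nodeid="1" votes="1"/>
    <clusternode name="node2.willow.com" nodeid="2" votes="1"/>
    <clusternode name="node3.willow.com" nodeid="3" votes="1"/>
  </clusternodes>
  <fencedevices>
    <fencedevice name="meatware" agent="fence_manual"/>
  </fencedevices>
  <rm/>
</cluster>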

6.5. Start the cman service on each node; otherwise startup hangs waiting for the fence domain. Starting cman also starts the ccsd service, which propagates the cluster.conf file to the other nodes, keeping cluster.conf synchronized on all nodes.

[root@node1 cluster]# service cman start

Starting cluster:

Loading modules... done

Mounting configfs... done

Starting ccsd... done

Starting cman... done

Starting daemons... done

Starting fencing... done

[ OK ]

[root@node2 cluster]# service cman start

[root@node3 cluster]# service cman start

6.6. Start the rgmanager service, which manages resources and services:

[root@GW ~]# for i in {1..3}; do ssh node$i 'service rgmanager start'; done

6.7. Install the Apache service on each node for cluster testing:

[root@GW ~]# for i in {1..3}; do ssh node$i 'yum install -y httpd'; done

[root@GW ~]# ssh node1 'echo node1.willow.com > /var/www/html/index.html'

[root@GW ~]# ssh node2 'echo node2.willow.com > /var/www/html/index.html'

[root@GW ~]# ssh node3 'echo node3.willow.com > /var/www/html/index.html'

[root@GW ~]# for i in {1..3}; do ssh node$i 'chkconfig httpd off'; done
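httpd is deliberately removed from the boot sequence because a cluster-managed service must be started and stopped by rgmanager, not by init. A quick sanity check, a sketch not in the original, that each node serves its own page before handing control to the cluster:

[root@GW ~]# for i in {1..3}; do ssh node$i 'service httpd start'; curl -s http://node$i; ssh node$i 'service httpd stop'; done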

6.8. View basic information about the cluster (with three one-vote nodes, quorum is 2, so the cluster tolerates one node failure):

[root@node1 ~]# cman_tool status

Version: 6.2.0

Config Version: 2

Cluster Name: tcluster

Cluster Id: 28212

Cluster Member: Yes

Cluster Generation: 12

Membership state: Cluster-Member

Nodes: 3

Expected votes: 3

Total votes: 3

Node votes: 1

Quorum: 2

Active subsystems: 8

Flags: Dirty

Ports Bound: 0 177

Node name: node1.willow.com

Node ID: 1

Multicast addresses: 239.192.110.162

Node addresses: 1.1.1.18

6.9. View the status of each node in the cluster:

[root@node1 ~]# clustat

Cluster Status for tcluster @ Thu 18 10:37:31 2016

Member Status: Quorate


Member Name                         ID   Status

------ ----                         ---- ------

node1.willow.com                       1 Online, Local

node2.willow.com                       2 Online

node3.willow.com                       3 Online

6.10. Add a VIP and an httpd cluster resource:

[root@node1 cluster]# system-config-cluster &


(Screenshot: http://s1.51cto.com/wyfs02/M00/86/1B/wKiom1e1IaDhJcOFAAHTl6S4KUE486.jpg)

(Screenshot: http://s3.51cto.com/wyfs02/M01/86/1B/wKioL1e1IaWjkwDWAAHDi4N4djw285.jpg)

6.11. Because rgmanager cannot start bare resources, they must be grouped into a service. Create a WebService service as follows (a sketch of the resulting configuration appears after the screenshots):

(Screenshot: http://s1.51cto.com/wyfs02/M00/86/1B/wKiom1e1Itry20mwAAG20mr8F1E058.jpg)

(Screenshot: http://s2.51cto.com/wyfs02/M00/86/1B/wKioL1e1ItuQ9PoKAAJAHdOWc5g066.jpg)

(Screenshot: http://s1.51cto.com/wyfs02/M01/86/1B/wKioL1e1IvSSQ3Z6AAIX3kNScv8966.jpg-wh_500x0-wm_3-wmp_4-s_2788824165.jpg)
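A hedged sketch of roughly what the resulting <rm> section of cluster.conf contains; the VIP 1.1.1.100 and the service name WebService come from this experiment, while the exact attributes depend on the RHCS version:

<rm>
  <service name="WebService" autostart="1">
    <ip address="1.1.1.100" monitor_link="1"/>
    <script name="httpd" file="/etc/init.d/httpd"/>
  </service>
</rm>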

6.12. Propagate the cluster.conf file you just configured to the other nodes:

(Screenshot: http://s1.51cto.com/wyfs02/M02/86/1B/wKioL1e1I1ig9D1GAAHF4Znrpu0215.jpg-wh_500x0-wm_3-wmp_4-s_2461790080.jpg)

6.13. See which node the WebService service is running on:

[root@node1 ~]# clustat

Cluster Status for tcluster @ Thu 18 10:55:17 2016

Member Status: Quorate

Member Name                         ID   Status

------ ----                         ---- ------

node1.willow.com                       1 Online, Local, rgmanager

node2.willow.com                       2 Online, rgmanager

node3.willow.com                       3 Online, rgmanager


Service Name                Owner (Last)                   State

------- ----                ----- ------                   -----

service:WebService          node1.willow.com               started

6.14. View the VIP status (the ifconfig command cannot display it; use ip addr show):

[root@node1 ~]# ip addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

inet6 ::1/128 scope host

valid_lft forever preferred_lft forever

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000

link/ether 00:0c:29:10:47:b6 brd ff:ff:ff:ff:ff:ff

inet 1.1.1.18/24 brd 1.1.1.255 scope global eth0

inet 1.1.1.100/24 scope global secondary eth0

At this point, accessing http://1.1.1.100 in a browser displays the web page of node1.
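The same check from the springboard host, as a sketch (curl is assumed to be available on GW; the response matches the index.html written in step 6.7):

[root@GW ~]# curl http://1.1.1.100
node1.willow.com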

6.15. View clusvcadm help:

[root@node1 cluster]# clusvcadm -h
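For reference, the clusvcadm options used in the rest of this walkthrough (a summary of the help output):

clusvcadm -e WebService                       # enable (start) the service
clusvcadm -d WebService                       # disable the service
clusvcadm -s WebService                       # stop the service
clusvcadm -R WebService                       # restart the service in place
clusvcadm -r WebService -m node2.willow.com   # relocate the service to the given member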

6.16. Migrate the WebService service group currently running on node1.willow.com to node2.willow.com:

[root@node1 ~]# clusvcadm -r WebService -m node2.willow.com

Trying to relocate service:WebService to node2.willow.com...Success

service:WebService is now running on node2.willow.com

[root@node1 ~]# clustat

(Screenshot: http://s3.51cto.com/wyfs02/M00/86/1B/wKiom1e1KDrCvhysAAIObcUaFu4238.jpg)

At this point, accessing http://1.1.1.100 displays the web page of node2.

7. Serve the same page from all three nodes via NFS shared storage

7.1. Create an NFS share and add an NFS resource

[root@GW ~]# mkdir /web/ha/

[root@GW ~]# vim /etc/exports

/web/ha 1.1.1.0/24(ro)

[root@GW ~]# service nfs start

[root@GW ~]# chkconfig nfs on
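A quick verification from a node, as a sketch (it assumes the export lives on GW at 1.1.1.88, as configured above):

[root@node1 ~]# showmount -e 1.1.1.88
Export list for 1.1.1.88:
/web/ha 1.1.1.0/24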

[root@node1 cluster]# system-config-cluster &

(Screenshot: http://s5.51cto.com/wyfs02/M01/86/1C/wKioL1e1MH7CC--FAAKoCG7w41M097.jpg)

(Screenshot: http://s3.51cto.com/wyfs02/M00/86/1C/wKioL1e1MqrguJMgAAGmV9YSdYY281.jpg)
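A hedged sketch of how the NFS share typically appears as a netfs resource inside the WebService service; the mountpoint /var/www/html is an assumption, and the attribute names follow the netfs resource agent:

<netfs name="webstore" host="1.1.1.88" export="/web/ha" fstype="nfs" mountpoint="/var/www/html" options="ro"/>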

7.2. Restart the WebService service group:

[root@node1 ~]# clusvcadm -R WebService

Accessing http://1.1.1.100 now displays the shared page from the NFS export.

8. Configure and start the basic cluster from the command line (resources and services, however, cannot be configured this way)

8.1. Stop each cluster service and empty the cluster configuration directory:

[root@node1 ~]# clusvcadm -s WebService

[root@GW ~]# for i in {1..3}; do ssh node$i 'service rgmanager stop'; done

[root@GW ~]# for i in {1..3}; do ssh node$i 'service cman stop'; done

[root@GW ~]# for i in {1..3}; do ssh node$i 'rm -rf /etc/cluster/*'; done

8.2. Create a cluster name

[root@node1 cluster]# ccs_tool create tcluster

8.3. Add a fence device

[root@node1 cluster]# ccs_tool addfence meatware fence_manual

8.4. Add nodes

[root@node1 cluster]# ccs_tool addnode -h # view help

[root@node1 cluster]# ccs_tool addnode -n 1 -v 1 -f meatware node1.willow.com

[root@node1 cluster]# ccs_tool addnode -n 2 -v 1 -f meatware node2.willow.com

[root@node1 cluster]# ccs_tool addnode -n 3 -v 1 -f meatware node3.willow.com

[root@node1 cluster]# ccs_tool lsnode # list the nodes just added

Cluster name: tcluster, config_version: 5


Nodename                        Votes Nodeid Fencetype

node1.willow.com                    1    1   meatware

node2.willow.com                    1    2   meatware

node3.willow.com                    1    3   meatware
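Similarly, the fence device added in 8.3 can be listed; a sketch of the companion command and the expected shape of its output:

[root@node1 cluster]# ccs_tool lsfence
Name             Agent
meatware         fence_manual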

8.5. Start the cman and rgmanager cluster services

[root@node1 cluster]# service cman start

[root@node2 cluster]# service cman start

[root@node3 cluster]# service cman start

[root@GW ~]# for i in {1..3}; do ssh node$i 'service rgmanager start'; done


8.6. To configure additional resources and services, you must use the graphical tool system-config-cluster.


This blog will continue to be updated...


This article is from the "Xavier Willow" blog; please be sure to keep the source: http://willow.blog.51cto.com/6574604/1839845
