Cluster Foundation (IV): Create an RHCS cluster environment and a highly available Apache service

Source: Internet
Author: User
Tags: failover

I. Create an RHCS cluster environment

Goal:

Prepare four KVM virtual machines: three as cluster nodes and one to host Luci and provide the iSCSI storage service. Then implement the following:

    • Use RHCS to create a cluster named Tarena
    • All cluster nodes must attach the iSCSI shared storage
    • Partition the iSCSI disk from any one node in the cluster
    • The virtual machine that hosts Luci needs an additional 20 GB disk
    • The physical host's IP address is 192.168.4.1 and its host name is desktop1.example.com

Scheme:

Use four virtual machines: one as the Luci and iSCSI server and three as node servers, as shown in the topology.

The host names and corresponding IP addresses of all hosts are listed below:

    node1.example.com    192.168.4.1
    node2.example.com    192.168.4.2
    node3.example.com    192.168.4.3
    luci.example.com     192.168.4.4

Steps:

Step one: Prepare before installation

1) Configure the yum repositories on all hosts; note that every virtual machine must mount the installation CD.

[root@node1 ~]# mount /dev/cdrom /media
[root@node1 ~]# rm -rf /etc/yum.repos.d/*
[root@node1 ~]# vim /etc/yum.repos.d/dvd.repo
[dvd]
name=red hat
baseurl=file:///media/
enabled=1
gpgcheck=0
[HighAvailability]
name=HighAvailability
baseurl=file:///media/HighAvailability
enabled=1
gpgcheck=0
[LoadBalancer]
name=LoadBalancer
baseurl=file:///media/LoadBalancer
enabled=1
gpgcheck=0
[ResilientStorage]
name=ResilientStorage
baseurl=file:///media/ResilientStorage
enabled=1
gpgcheck=0
[ScalableFileSystem]
name=ScalableFileSystem
baseurl=file:///media/ScalableFileSystem
enabled=1
gpgcheck=0
[root@node1 ~]# yum clean all

Repeat the same steps on node2, node3 and luci: mount the CD, remove the old files under /etc/yum.repos.d/, create an identical dvd.repo, and run yum clean all.
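As an optional sanity check on each host (the exact repository IDs and package counts depend on your installation media), list the enabled repositories:

[root@node1 ~]# yum repolist        # dvd, HighAvailability, LoadBalancer, ResilientStorage and ScalableFileSystem should all appear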

2) Edit /etc/hosts and synchronize it to all hosts.

[root@luci ~]# vim /etc/hosts
192.168.4.1 node1.example.com
192.168.4.2 node2.example.com
192.168.4.3 node3.example.com
192.168.4.4 luci.example.com
[root@luci ~]# for i in {1..3}; do scp /etc/hosts 192.168.4.$i:/etc/; done

3) On all hosts, stop NetworkManager, switch SELinux to permissive mode and clear the firewall rules.

[root@node1 ~]# service NetworkManager stop
[root@node1 ~]# chkconfig NetworkManager off
[root@node1 ~]# sed -i '/SELINUX=/s/enforcing/permissive/' /etc/sysconfig/selinux
[root@node1 ~]# setenforce 0
[root@node1 ~]# iptables -F; service iptables save

Run the same five commands on node2, node3 and luci.

Step two: Deploy iSCSI Services

1) Deploy the iSCSI target on the luci host and export /dev/sdb through the iSCSI service.

Tip: the target IQN is iqn.2015-06.com.example.luci:cluster.

[root@luci ~]# yum -y install scsi-target-utils        # install the target software
.. ..
[root@luci ~]# rpm -q scsi-target-utils
scsi-target-utils-1.0.24-10.el6.x86_64
[root@luci ~]# vim /etc/tgt/targets.conf
<target iqn.2015-06.com.example.luci:cluster>
    # List of files to export as LUNs
    backing-store /dev/sdb                  # the storage device to export
    initiator-address 192.168.4.0/24        # ACL: which initiators may connect
</target>
[root@luci ~]# service tgtd start          # start the service
Starting SCSI target daemon: [  OK  ]
[root@luci ~]# chkconfig tgtd on
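To confirm that the LUN is actually exported, an optional check on the luci host (the exact output depends on your scsi-target-utils version) is:

[root@luci ~]# tgt-admin --show     # should list iqn.2015-06.com.example.luci:cluster, its LUN backed by /dev/sdb, and the ACL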

2) All node servers connect to the iSCSI share.

[root@node1 ~]# yum -y install iscsi-initiator-utils        # install the initiator software
[root@node1 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.4.4:3260
[root@node1 ~]# iscsiadm -m node -t \
> iqn.2015-06.com.example.luci:cluster \
> -p 192.168.4.4:3260 -l                                    # log in to the iSCSI target

Run the same commands on node2 and node3 so that every node attaches the shared disk.
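A quick, hedged way to confirm that each node now sees the shared disk (in this setup the new device is normally /dev/sdb, but the letter can differ on your systems):

[root@node1 ~]# lsblk       # the iSCSI LUN should show up as a new disk, e.g. /dev/sdb
[root@node1 ~]# fdisk -l    # alternative check: look for the newly attached 20 GB disk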

Step three: Install the cluster software

1) Install Luci on the luci.example.com host and start the service.

[root@luci ~]# yum -y install luci
[root@luci ~]# service luci start; chkconfig luci on

2) Install Ricci on all cluster nodes, set a password for the ricci user, and start the service.

[root@node1 ~]# yum -y install ricci
[root@node1 ~]# echo "11111" | passwd --stdin ricci
[root@node1 ~]# service ricci start; chkconfig ricci on

Run the same three commands on node2 and node3.

Step four: Configure the cluster

1) Access Luci from a browser on any host.

[root@desktop1 ~]# firefox https://luci.example.com:8084        # any host with a graphical browser will do

2) Create the cluster.

After logging in to the Luci page, open the "Manage Clusters" page and click the "Create" button to create a new cluster, as shown in the figure.

Next, in the pop-up dialog, enter the cluster name "Tarena", add each node with its ricci password, and tick "Download Packages", "Reboot Nodes Before Joining Cluster" and "Enable Shared Storage Support", as shown in the figure.

After all the nodes have restarted, the Luci page displays the screen shown in the figure, indicating that every node has joined the Tarena cluster.

Tip: if a node fails to rejoin the cluster automatically after it restarts, copy the /etc/cluster/cluster.conf file from a healthy node to the failed node and make sure the cman and rgmanager services on the failed node are enabled at boot.
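A minimal sketch of that recovery, assuming node1 is healthy and node3 is the node that failed to join:

[root@node1 ~]# scp /etc/cluster/cluster.conf node3.example.com:/etc/cluster/
[root@node3 ~]# chkconfig cman on; chkconfig rgmanager on       # make both services start at boot
[root@node3 ~]# service cman start; service rgmanager start     # join the cluster now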

II. Create a highly available Apache service

Goal:

Using the cluster built in Part I, create a highly available Apache service that achieves the following goals:

    • On any node, partition and format the iSCSI disk attached in Part I
    • Add a working fence device to the cluster
    • Create a failover domain named prefer_node1
    • The Apache service runs on node1 by preference
    • The VIP address provided by the Apache service is 192.168.4.100
    • Clients access the web page through the VIP address

Scheme:

Follow the topology from Part I and do the following in turn:

    1. Deploy the httpd service
    2. Create a fence device
    3. Create a failover domain
    4. Create a VIP resource
    5. Create a storage resource
    6. Create an Apache service resource
    7. Create a service group

Steps:

Step one: Prepare storage resources and install httpd software on all nodes

1) Partition and format the iSCSI disk from any one cluster node.

[root@node1 ~]# parted /dev/sdb mklabel msdos
[root@node1 ~]# parted /dev/sdb mkpart primary 1 1000
[root@node1 ~]# partprobe
[root@node1 ~]# mkfs.ext4 /dev/sdb1
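Because the partition table was changed on only one node, the other nodes may not see /dev/sdb1 until they re-read it; a hedged example for the remaining nodes:

[root@node2 ~]# partprobe /dev/sdb      # re-read the partition table on node2
[root@node3 ~]# partprobe /dev/sdb      # and on node3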

2) Install the httpd software on all nodes

[root@node1 ~]# yum -y install httpd
[root@node2 ~]# yum -y install httpd
[root@node3 ~]# yum -y install httpd

3) Mount the shared storage and create a test web page

[root@node1 ~]# mount /dev/sdb1 /var/www/html
[root@node1 ~]# echo "test page for RHCS" > /var/www/html/index.html
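Since the Filesystem resource created later lets rgmanager mount this partition on whichever node runs the service, it is reasonable to unmount it again once the test page exists (an optional housekeeping step):

[root@node1 ~]# umount /var/www/html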

Step two: Create a fence device

1) On the physical host, install the fence packages and generate the shared key.

[root@desktop1 ~]# yum -y install \
> fence-virtd fence-virtd-libvirt fence-virtd-multicast
[root@desktop1 ~]# mkdir /etc/cluster
[root@desktop1 ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4k count=1
[root@desktop1 ~]# systemctl enable fence_virtd

2) Copy the key to every cluster node

[root@desktop1 ~]# for i in {1..3}; do scp /etc/cluster/fence_xvm.key \
> 192.168.4.$i:/etc/cluster/; done

3) Configure fence_virtd

Note: when fence_virtd -c prompts for values, enter the network interface that connects to the virtual machines for the Interface entry, and enter libvirt for the Backend module entry.

[root@desktop1 ~]# fence_virtd -c
[root@desktop1 ~]# systemctl start fence_virtd
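As an optional check that fencing will work (this assumes the fence-virt client is installed on the cluster nodes and multicast traffic is not filtered), list the domains visible from a node:

[root@node1 ~]# fence_xvm -o list       # should print the virtual machine domain names known to fence_virtd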

4) Log in to Luci and add the fence device to the cluster nodes

From any host, open https://luci.example.com:8084 in a browser to configure fencing. Open the "Fence Devices" menu and click the Add button, as shown in the figure.

In the pop-up dialog, select the fence device type "Fence virt (Multicast Mode)" and set a name for the fence device, as shown in the figure.

After the fence device is created, it must be added to each node. Click the cluster's "Nodes" menu and add the fence device to every node in turn; as shown in the figure, node1 is selected first.

After selecting a node in the node list, click "Add Fence Method" to attach the fence device, as shown in the figure.

After selecting "Add Fence Method", enter the name in the pop-up box as shown in.

Next, as the final action, add an instance for the fence device by clicking "Add Fence Instance", as shown in the figure.

In the pop-up dialog, select the fence device created earlier (fence_xvm) and fill in the corresponding virtual machine domain name for each node; note that this must match the virtual machine's name exactly, as shown in the figure.
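To find the exact domain names to enter, a hedged check on the physical host (assuming the virtual machines run under libvirt there) is:

[root@desktop1 ~]# virsh list --all     # the Name column holds the domain names Luci expects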

Step three: Create a highly available Apache service

1) Create a failover domain

From any host, open https://luci.example.com:8084 in a browser, click the "Failover Domains" menu, and create a failover domain with the Add button below it, as shown in the figure.

In the pop-up dialog, name the failover domain prefer_node1 and tick "Prioritized", "Restricted" and "No Failback". Then set a priority for each node; to make sure the service runs on node1 first, give node1 priority 1, as shown in the figure.

2) Create the resources (VIP, shared storage, Apache service)

Resources are created from the "Resources" menu by clicking the "Add" button, as shown in the figure.

In the pop-up dialog, select the "IP Address" resource type and set its parameters; the VIP address is 192.168.4.100, as shown in the figure.

Using the same method, create the shared-storage resource: select "Filesystem" in the resource list and set its parameters so that the shared iSCSI partition is mounted on /var/www/html. To identify the device unambiguously, use its UUID (fill in the value from your own system), which can be read with blkid /dev/sdb1; the final settings are shown in the figure.
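For reference, a hedged example of reading that UUID on a node (the value on your system will differ; paste it into the Filesystem resource's device/UUID field in whatever form your Luci version expects):

[root@node1 ~]# blkid /dev/sdb1         # prints something like: /dev/sdb1: UUID="..." TYPE="ext4"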

Finally, create the Apache resource: select "Apache" in the resource list and set its name to web_service, as shown in the figure.

3) Create a service group

From any host, open https://luci.example.com:8084 in a browser, click the "Service Groups" menu, and create a service group with the Add button below it, as shown in the figure.

In the pop-up dialog, enter the parameters: set the service name to web_clu, select prefer_node1 as the failover domain, set the recovery policy to relocate, and use "Add Resource" to attach the three resources created in the previous step to the service group, as shown in the figure.
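For reference only, the resource-manager section that Luci writes to /etc/cluster/cluster.conf for these settings should look roughly like the sketch below. The attribute names follow the usual rgmanager schema, but the fs resource name (web_fs), the priorities of node2 and node3, and the UUID are placeholders, so compare against the file Luci actually generates rather than copying this verbatim.

<rm>
  <failoverdomains>
    <!-- prioritized ("ordered"), restricted, no-failback domain that prefers node1 -->
    <failoverdomain name="prefer_node1" ordered="1" restricted="1" nofailback="1">
      <failoverdomainnode name="node1.example.com" priority="1"/>
      <failoverdomainnode name="node2.example.com" priority="2"/>
      <failoverdomainnode name="node3.example.com" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <resources>
    <ip address="192.168.4.100" monitor_link="on"/>
    <fs name="web_fs" device="UUID=put-your-uuid-here" mountpoint="/var/www/html" fstype="ext4"/>
    <apache name="web_service" server_root="/etc/httpd" config_file="conf/httpd.conf"/>
  </resources>
  <!-- the service group ties the three resources to the failover domain -->
  <service domain="prefer_node1" name="web_clu" recovery="relocate">
    <ip ref="192.168.4.100"/>
    <fs ref="web_fs"/>
    <apache ref="web_service"/>
  </service>
</rm>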

Step four: Verify and test

On any cluster node, run clustat to view the cluster and the running status of the highly available service, as shown in the figure.
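A few hedged checks, using the names configured above (output will vary):

[root@node1 ~]# clustat                                     # service:web_clu should be shown as started on node1
[root@node1 ~]# curl http://192.168.4.100                   # a request to the VIP should return "test page for RHCS"
[root@node1 ~]# clusvcadm -r web_clu -m node2.example.com   # optional: relocate the service to another node to test failover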

