Constructing an RHCS-based hot standby web cluster


Introduction to RHCS cluster operating principles and components

1. Distributed Cluster Manager (CMAN)
Cluster Manager (CMAN) is a distributed cluster management tool that runs on every node of the cluster and performs cluster management tasks for RHCS. CMAN manages cluster membership, messaging, and notifications. It monitors the running state of each node in order to track the membership relationships among nodes. When a node in the cluster fails, the membership changes; CMAN promptly notifies the lower layers of this change so that they can make the corresponding adjustments.
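Once the cluster built later in this article is running, a quick sketch of how to inspect CMAN's view of membership and quorum from any node (standard cman_tool subcommands):

cman_tool status    # quorum, expected votes and general cluster state
cman_tool nodes     # each member node and its join state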

2. Lock Management (DLM)
Distributed Lock Manager (DLM) is an underlying component of RHCS that provides a common lock mechanism for the cluster; in an RHCS cluster system, DLM runs on every node. GFS uses the lock manager to synchronize access to file system metadata, and CLVM uses it to synchronize updates to LVM volumes and volume groups. DLM does not require a dedicated lock management server; it uses a peer-to-peer lock management model, which greatly improves processing performance and avoids the performance bottleneck of a full recovery when a single node fails. In addition, DLM requests are processed locally, without a round trip to a remote lock server, so they take effect immediately. Finally, DLM uses a layered mechanism to provide parallel locking across multiple lock spaces.
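As a rough way to see DLM at work, the lock spaces opened by GFS2 and CLVM can be listed on a node; the exact tool depends on the RHCS release (group_tool ships with cman on RHEL 5, dlm_tool with newer cluster stacks), so treat this as a sketch:

group_tool ls       # lists the fence, dlm and gfs groups / lock spaces on this node
# dlm_tool ls       # equivalent on newer releases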

3. Configuration File Management (CCS)
Cluster Configuration System (CCS) manages the cluster configuration file and keeps it synchronized between nodes. CCS runs on every node of the cluster and monitors the state of the single configuration file /etc/cluster/cluster.conf on that node. When the file changes, CCS propagates the update to every node in the cluster so that the configuration stays in sync at all times. For example, if the administrator updates the cluster configuration file on node A, CCS detects the change and immediately transmits it to the other nodes. The RHCS configuration file is cluster.conf, an XML file that contains the cluster name, cluster node information, cluster resources and services, and fence devices.
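For example, after editing /etc/cluster/cluster.conf by hand, the usual workflow (assuming the ccs_tool utility that ships with RHCS) is to raise config_version inside the file and then push the new copy to every node:

ccs_tool update /etc/cluster/cluster.conf    # propagate the edited file to all cluster nodes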

4. Fence Device (FENCE)
FENCE devices are an essential part of an RHCS cluster; they are used to avoid the split-brain problems caused by unpredictable failures. A fence device issues hardware-level commands directly to a server or storage device through its hardware management interface (or through an external power management device) to restart or shut down the server, or to cut it off from the network. FENCE works as follows: when a host hangs or goes down unexpectedly, the standby machine first calls the fence device to restart the abnormal host or isolate it from the network; once the fence operation succeeds, the result is returned to the standby machine, which then takes over the host's services and resources. In this way the resources held by the failed node are released through the fence device, ensuring that resources and services always run on exactly one node. RHCS fence devices fall into two categories: internal and external. Common internal fence devices include IBM RSA II cards, HP iLO cards, and IPMI devices; external fence devices include UPS, SAN switches, and network switches.
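As an illustration, RHCS also lets the operator trigger or acknowledge fencing by hand; the commands below are a sketch using the node names defined later in this article (option syntax varies slightly between releases):

fence_node RealServer2            # fence a node using the fence device bound to it in cluster.conf
fence_ack_manual -n RealServer2   # with fence_manual, confirm on a survivor that the node is really down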

5. High Availability Service Manager
The high availability service manager supervises, starts, and stops cluster applications, services, and resources, providing the management layer for cluster services. When a node fails, the high availability service manager can transfer services from the failed node to another healthy node, and this transfer is automatic and transparent. RHCS uses rgmanager to manage cluster services; rgmanager runs on every cluster node, and the corresponding process on each server is clurgmgrd. In an RHCS cluster, high availability services are built from cluster services and cluster resources. A cluster service is an application service such as apache or mysql; cluster resources are things like an IP address, a startup script, or an ext3/GFS file system. A high availability service is combined with a failover domain, which is the set of cluster nodes allowed to run a particular service. Within a failover domain, each node can be given a priority that determines the order in which the service is transferred when a node fails; if no priorities are set, the service can fail over to any node in the domain. A failover domain therefore lets you both set the order in which a service moves between nodes and restrict a service to the nodes listed in the domain.
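A minimal sketch of day-to-day rgmanager operations, using the httpd_srv service and node names configured later in this article:

clustat                                # show cluster members and service status
clusvcadm -e httpd_srv                 # enable (start) the service
clusvcadm -r httpd_srv -m RealServer2  # relocate it to a specific member
clusvcadm -d httpd_srv                 # disable (stop) the service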

6. Cluster Configuration Management Tools
RHCS provides a variety of cluster configuration and management tools, including the GUI-based system-config-cluster and Conga.
System-config-cluster is a graphical management tool used to create clusters and configure cluster nodes. It consists of a cluster node configuration component and a cluster management component, which create the cluster configuration file and monitor the running state of the nodes; it is generally used with earlier versions of RHCS. Conga is a newer, web-based cluster configuration tool. Unlike system-config-cluster, Conga configures and manages cluster nodes through a web interface. Conga consists of two parts, luci and ricci: luci is installed on a separate management machine and handles cluster configuration and management, while ricci is installed on every cluster node; luci communicates with each node through ricci. RHCS also provides powerful command line management tools; commonly used ones are clustat, cman_tool, ccs_tool, fence_tool, and clusvcadm, and their usage is described below.
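If Conga were used instead of system-config-cluster, the rough setup (package and service names as shipped with RHCS; the luci initialization step varies by release) would be to install ricci on every cluster node and luci on a separate management host:

yum -y install ricci && service ricci start && chkconfig ricci on   # on each cluster node
yum -y install luci && service luci start                           # on the management host
# older releases require an initial "luci_admin init" to set the luci admin password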

7. Red Hat GFS
GFS is the storage solution RHCS provides for cluster systems. It allows multiple nodes in the cluster to share storage at the block level, with every node seeing the same storage space. GFS is the cluster file system provided by RHCS: multiple nodes can mount the same file system at the same time without corrupting its data, something a single-node file system such as ext2 or ext3 cannot do.
To allow multiple nodes to read and write the same file system simultaneously, GFS uses the lock manager to coordinate I/O operations. When a process writes to a file, the file is locked and no other process may read or write it; the lock is released only when the write completes, and only then can other processes operate on the file. In addition, once a node modifies data on a GFS file system, the change is immediately visible on the other nodes through the underlying RHCS communication mechanism. When building an RHCS cluster, GFS is generally used as the shared storage, runs on every node, and is configured and managed through the RHCS management tools. The relationship between RHCS and GFS is easy for beginners to confuse: running RHCS does not require GFS; GFS is needed only when shared storage is required. Building a GFS cluster file system, however, does require the underlying support of RHCS, so the RHCS components must be installed on every node that mounts the GFS file system.
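For reference, once the shared volume created later in this article is formatted, a node can mount it manually like this (in the final setup the cluster's clusterfs resource performs the mount automatically):

mount -t gfs2 /dev/webvg/webvg_lv1 /var/www/html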
Cluster Environment

Master node, RealServer1: 192.168.10.121

Slave node, RealServer2: 192.168.10.122

Storage node, Node1: 192.168.10.130

Floating IP address of the cluster: 192.168.10.254

Configure ssh mutual trust between hosts

① Run the following commands on each host

/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys


② Establish ssh mutual trust between RealServer1 and RealServer2

Run the following command on RealServer1:

ssh server2 cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
ssh server2 cat /root/.ssh/id_dsa.pub >> /root/.ssh/authorized_keys

Run the following command on RealServer2:

ssh server1 cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
ssh server1 cat /root/.ssh/id_dsa.pub >> /root/.ssh/authorized_keys


Configure the iSCSI target on the storage node (Node1)
yum install scsi-target-utils -y
service tgtd restart
chkconfig tgtd on
HOSTNAME="iqp.2014-08-25.edu.nuist.com:storage.disk"
tgtadm --lld iscsi --mode target --op new --tid 1 --targetname $HOSTNAME
tgtadm --lld iscsi --op new --mode logicalunit --lun 1 --tid 1 -b /dev/sdb
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
#tgtadm --lld iscsi --op show --mode target | grep Target
tgt-admin -s
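Note that tgtadm only configures the running daemon, so the definition is lost at reboot. A persistent equivalent, assuming the stock /etc/tgt/targets.conf layout shipped with scsi-target-utils, would look roughly like this:

<target iqp.2014-08-25.edu.nuist.com:storage.disk>
    backing-store /dev/sdb
</target>
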
Configure the iSCSI initiator on all nodes
yum install iscsi-initiator* -y
service iscsi start
service iscsid start
chkconfig iscsid on
iscsiadm -m discovery -p 192.168.10.130:3260 -t sendtargets
iscsiadm -m node --targetname iqp.2014-08-25.edu.nuist.com:storage.disk -p 192.168.10.130:3260 --login
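After the login succeeds, the exported LUN should show up as a new local disk on each node (in this environment it is expected to appear as /dev/sdb, which the next step uses); a quick check:

iscsiadm -m session   # confirm the iSCSI session to 192.168.10.130 is established
fdisk -l              # the new disk should be listed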

Create the LVM volume for the GFS2 file system (run on one node only, here 192.168.10.121)

pvcreate /dev/sdb
vgcreate webvg /dev/sdb
lvcreate -L 2G -n webvg_lv1 webvg
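A quick sanity check of the result, plus (optionally) marking the volume group as clustered so clvmd coordinates it across nodes; the clustered flag only takes effect once lvmconf --enable-cluster and clvmd are set up in a later step:

pvs; vgs; lvs            # the webvg VG and webvg_lv1 LV should be listed
vgchange -c y webvg      # optional: set the clustered flag on the volume group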

Install cluster software on each node

yum -y install cman*
yum -y install rgmanager*
yum -y install gfs2-utils
yum -y install system-config-cluster*
yum -y install lvm2-cluster

Format the File System (format once)

mkfs.gfs2 -p lock_dlm -t httpd_cluster:webvg_lv1 -j 2 /dev/webvg/webvg_lv1
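Here lock_dlm selects DLM-based locking, the -t value must be <cluster name>:<file system name> (httpd_cluster matches the cluster created below), and -j 2 creates one journal per node. If a third node were added later, an extra journal could be added to the mounted file system, for example:

gfs2_jadd -j 1 /var/www/html   # hypothetical: add one more journal for an additional node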

Use the system-config-cluster graphical tool to generate the cluster.conf configuration file

① Start system-config-cluster and create httpd_cluster

② Add a new node

③ Add fence

④ Bind fence to a node

⑤ Add resources: Add IP Resources

⑥ Add resources: Add GFS Resources

⑦ Add resources: Add Script Resources

⑧ Create a failover domain

⑨ Create cluster service

After the configuration is complete, the file content is as follows:

<?xml version="1.0" ?>
<cluster config_version="2" name="httpd_cluster">
  <fence_daemon post_fail_delay="0" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="RealServer1" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="fence1" nodename="RealServer1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="RealServer2" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="fence2" nodename="RealServer2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <fencedevices>
    <fencedevice agent="fence_manual" name="fence1"/>
    <fencedevice agent="fence_manual" name="fence2"/>
  </fencedevices>
  <rm>
    <failoverdomains>
      <failoverdomain name="httpd_fail" ordered="0" restricted="1">
        <failoverdomainnode name="RealServer1" priority="1"/>
        <failoverdomainnode name="RealServer2" priority="1"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <ip address="192.168.10.254" monitor_link="1"/>
      <script file="/etc/init.d/httpd" name="httpd"/>
      <clusterfs device="/dev/webvg/webvg_lv1" force_unmount="1" fsid="8669" fstype="gfs2" mountpoint="/var/www/html" name="docroot" options=""/>
    </resources>
    <service autostart="1" domain="httpd_fail" name="httpd_srv" recovery="relocate">
      <ip ref="192.168.10.254"/>
      <script ref="httpd"/>
      <clusterfs ref="docroot"/>
    </service>
  </rm>
</cluster>

Copy the generated cluster.conf to the other node with scp. (Manual copying is required only the first time; once the cluster service is running, the configuration file can be distributed to all nodes through system-config-cluster.)
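For example, on RealServer1 (assuming the default configuration path):

scp /etc/cluster/cluster.conf RealServer2:/etc/cluster/cluster.conf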


Install apache on RealServer1 and RealServer2

yum install httpd

Configure RealServer1 (similar to RealServer2)

NameVirtualHost 192.168.10.121:80
ServerName www.example.com
<VirtualHost 192.168.10.121:80>
    DocumentRoot /var/www/html
    ServerName www.example.com
</VirtualHost>

Prevent apache from starting automatically at boot (rgmanager starts it as a cluster service)

chkconfig httpd off

Start cluster service

service cman start
service rgmanager start
lvmconf --enable-cluster
service clvmd start
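If the cluster should also come up automatically after a reboot, a typical addition is to enable the same services at boot (a sketch):

chkconfig cman on
chkconfig clvmd on
chkconfig rgmanager on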

After the cluster is started, open system-config-cluster again. You can modify the cluster configuration file and then use "Send to Cluster" to distribute it to all nodes. The tool has built-in version control: each time the configuration file is modified, a new version is generated automatically. Once the cluster service is running, a "Cluster Manager" tab appears, showing the cluster nodes, the status of the cluster services, and the current master node.


Test

The cluster is now built; verify it as follows:

View Cluster status: clustat

View mount status: mount

Manually switch the master node: clusvcadm -r httpd_srv -m <standby hostname>

View the floating IP attached to the host: ip addr

Access apache through a browser: http://192.168.10.254


