I. Introduction to RHCS
RHCS is the abbreviation of Red Hat Cluster Suite. RHCS is a cluster tool set that provides high availability, high reliability, load balancing, storage sharing, and low cost; it integrates three cluster architectures into one cluster system to provide a secure and stable running environment for web applications and database applications.
More specifically, RHCS is a fully functional cluster application solution. It provides an effective cluster architecture that covers everything from front-end application access to backend data storage. This solution not only ensures that front-end applications keep providing services persistently and stably, but also ensures the security of the backend data storage.
RHCS provides three cluster architectures within the cluster system: the high availability cluster, the load balancing cluster, and the storage cluster.
II. Functions and Composition of the RHCS Cluster
The core function of RHCS is to provide a highly available cluster. When a node fails, the high availability service management component automatically and quickly switches the service from that node to another, so the application keeps providing external services without interruption; this is how RHCS implements the high availability cluster.
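As a quick illustration of how this failover is typically driven and observed from the command line with the rgmanager tools clustat and clusvcadm (the service name "webservice" and the node names are only hypothetical examples, not something configured in this article):
[root@node2 ~]# clustat                                        # show cluster members and the current owner of each service
[root@node2 ~]# clusvcadm -r webservice -m node3.linuxidc.com  # manually relocate the service to another node
[root@node2 ~]# clusvcadm -e webservice                        # (re)enable the service if it has been stopped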
RHCS uses LVS (Linux Virtual Server) to provide the load balancing cluster. When a request comes in, LVS distributes it across the nodes according to the load balancing scheduling algorithm. When a node fails, LVS automatically removes the faulty node from service through its failover function and transfers that node's traffic to the other nodes; after the node recovers, LVS automatically adds it back into service, keeping the service running stably.
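To make the scheduling idea concrete, here is a minimal, stand-alone LVS sketch using the ipvsadm tool directly; the virtual IP 192.168.1.210 and the real server addresses are assumptions, and in an RHCS deployment LVS is normally driven by its own management layer rather than typed in by hand:
[root@lb ~]# ipvsadm -A -t 192.168.1.210:80 -s rr                     # define a virtual service with round-robin scheduling
[root@lb ~]# ipvsadm -a -t 192.168.1.210:80 -r 192.168.1.202:80 -g   # add a real server (direct routing mode)
[root@lb ~]# ipvsadm -a -t 192.168.1.210:80 -r 192.168.1.203:80 -g   # add a second real server
[root@lb ~]# ipvsadm -Ln                                              # list the virtual service and its real servers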
RHCS provides the storage cluster function through the GFS (Global File System) file system. GFS is a cluster file system that allows multiple servers to read and write the same shared storage file system at the same time; the storage cluster keeps the data on shared storage to ensure data consistency. At the same time, GFS coordinates concurrent reads and writes through its lock management mechanism, which keeps the data safe.
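For reference, creating and mounting a GFS2 file system on shared storage typically looks like the sketch below. The cluster name given to -t must match the one in cluster.conf; the cluster name, volume path and journal count here are assumptions for illustration only:
[root@node2 ~]# mkfs.gfs2 -p lock_dlm -t webcluster:gfsdata -j 3 /dev/clustervg/gfslv   # one journal per node that will mount it
[root@node2 ~]# mkdir -p /mnt/gfs
[root@node2 ~]# mount -t gfs2 /dev/clustervg/gfslv /mnt/gfs    # repeat the mount on every node that needs the shared file system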
RHCS is a cluster suite that consists of the following parts:
1. Cluster infrastructure manager: the basic suite of RHCS, which provides the basic cluster functions, including the distributed cluster manager (CMAN), the distributed lock manager (DLM), the cluster configuration system (CCS), and fence devices (FENCE).
2. rgmanager (high availability service manager)
Provides node service monitoring and service failover. When a service on a node fails, the service is transferred to another, healthy node.
3. Cluster Management Tools
RHCS can be configured through system-config-cluster, a graphical tool that makes configuration simple and clear.
4. Load balancing tool
RHCS uses LVS to achieve load balancing between services. LVS lives in the system kernel and has good performance.
5. GFS
The cluster file system developed by Red Hat. The GFS file system allows multiple servers to read and write the same disk partition at the same time. GFS enables centralized data management and eliminates the trouble of synchronizing and copying data between nodes. However, GFS cannot exist on its own: installing GFS requires the support of the underlying RHCS components.
6. Cluster Logical Volume Manager
Cluster logical volume management, that is, CLVM, is an extension of LVM. This extension allows the machines in the cluster to manage shared storage with LVM. Before configuring it, however, you must enable cluster support in LVM (as sketched after this component list).
7. iSCSI
iSCSI is an Internet protocol that uses the TCP/IP mechanism to carry storage traffic across the network (SCSI over IP, instead of a dedicated FC fabric). iSCSI follows a client/server architecture: data is first encapsulated into SCSI packets, then into iSCSI packets, and finally into TCP/IP packets for transmission. iSCSI runs over TCP; the target listens on port 3260 and provides the TCP/IP service through that port. An iSCSI session stays established until it is explicitly ended and disconnected. RHCS can use iSCSI technology to export and allocate shared storage. A combined sketch of importing an iSCSI LUN and placing it under CLVM follows this list.
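To tie items 6 and 7 together, the sketch below shows how a cluster node might import an iSCSI LUN and place it under cluster-aware LVM. The storage server address, the IQN, the device name and the volume names are all assumptions, and the iSCSI target on the storage server is assumed to be already exported:
[root@node2 ~]# yum install -y iscsi-initiator-utils lvm2-cluster
[root@node2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.205                       # discover targets on the storage host
[root@node2 ~]# iscsiadm -m node -T iqn.2014-04.com.linuxidc:storage.disk1 -p 192.168.1.205 -l   # log in; the LUN appears as e.g. /dev/sdb
[root@node2 ~]# lvmconf --enable-cluster                                                    # switch LVM locking to cluster mode
[root@node2 ~]# service clvmd start
[root@node2 ~]# pvcreate /dev/sdb
[root@node2 ~]# vgcreate -cy clustervg /dev/sdb                                             # -cy marks the volume group as clustered
[root@node2 ~]# lvcreate -L 10G -n gfslv clustervg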
III. RHCS Setup
1. Environment:
IP Address    | Function             | Software Installed | Host Name
192.168.1.201 | RHCS management node | luci, ansible      | node1.linuxidc.com
192.168.1.202 | RHCS cluster node    | ricci              | node2.linuxidc.com
192.168.1.203 | RHCS cluster node    | ricci              | node3.linuxidc.com
192.168.1.204 | RHCS cluster node    | ricci              | node4.linuxidc.com
2. Installation and Configuration
1. Install ansible. The detailed installation process of ansible is not repeated here; see the related article: http://www.linuxidc.com/Linux/2014-04/100810.htm
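The ansible commands below address a host group called "node". A minimal inventory sketch for it might look like this; the path is the ansible default, and the group simply lists the three cluster nodes from the table above:
# /etc/ansible/hosts
[node]
node2.linuxidc.com
node3.linuxidc.com
node4.linuxidc.com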
2. Install luci. Before installing, disable NetworkManager on the nodes so that the plain network service manages the interfaces.
[root@node1 ~]# ansible node -m shell -a "chkconfig NetworkManager off"
[root@node1 ~]# yum install luci -y
Note: If the epel source is enabled in the system, disable it.
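One way to do that, assuming the repository id is simply "epel", is to disable it globally with yum-utils, or to bypass it for this single install:
[root@node1 ~]# yum-config-manager --disable epel        # requires the yum-utils package
[root@node1 ~]# yum --disablerepo=epel install luci -y   # alternative: skip epel for this command only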
3. Install ricci on three nodes and set it to start automatically
[root@node1 ~]# ansible node -m yum -a "name=ricci state=present"
[root@node1 ~]# ansible node -m service -a "name=ricci state=started enabled=yes"
192.168.1.203 | success >> {
    "changed": true,
    "enabled": true,
    "name": "ricci",
    "state": "started"
}
192.168.1.204 | success >> {
    "changed": true,
    "enabled": true,
    "name": "ricci",
    "state": "started"
}
192.168.1.202 | success >> {
    "changed": true,
    "enabled": true,
    "name": "ricci",
    "state": "started"
}
4. Start luci
[root@node1 ~]# service luci start
Generating a 2048 bit RSA private key
Writing new private key to '/var/lib/luci/certs/host.pem'
Starting saslauthd: [OK]
Start luci... [OK]
Point your web browser to https://node1.linuxidc.com:8084 (or equivalent) to access luci
5. Set a password for the ricci user on each node
[root@node1 ~]# ansible node -m shell -a "echo linuxidc | passwd --stdin ricci"
6. Configure RHCS
Log in to the RHCS (luci) configuration page.
Create the cluster and add each node.
The required packages are then installed on the nodes automatically.
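Once the cluster has been created in luci and the nodes added, a configuration file is generated on every node at /etc/cluster/cluster.conf. A trimmed sketch of what it might look like for the three cluster nodes above; the cluster name "webcluster" is an assumption, and fence devices, failover domains and services are omitted:
<?xml version="1.0"?>
<cluster config_version="1" name="webcluster">
  <clusternodes>
    <clusternode name="node2.linuxidc.com" nodeid="1"/>
    <clusternode name="node3.linuxidc.com" nodeid="2"/>
    <clusternode name="node4.linuxidc.com" nodeid="3"/>
  </clusternodes>
  <cman/>
  <rm/>
</cluster>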
For more details, see the next page: http://www.linuxidc.com/Linux/2014-04/100809p2.htm