Linux RHCS clustered high-availability web server


RHCS cluster: high-availability server

High Availability
The Red Hat Cluster Suite provides high availability and high reliability, with fast failover from one node to another (up to 16 nodes).
Load Balancing
Load balancing is provided through LVS, which distributes client requests across the server nodes according to a scheduling policy.
When a server node can no longer provide service, it is removed from the cluster.
Storage Cluster Capabilities
RHCS provides storage clustering through the GFS file system.
GFS, the Global File System, allows multiple services to read and write a single shared file system at the same time.
Using GFS eliminates the need to synchronize data between applications.
A lock management mechanism coordinates and manages read and write operations from multiple service nodes to the same file system.

RHCS components
Core components:
cman: cluster manager
rgmanager: resource group manager
corosync: cluster communication software
ricci: cluster remote management agent
Add-ons:
LVS
GFS: developed by Red Hat; GFS cannot run on its own and relies on the underlying RHCS cluster infrastructure
CLVM: Cluster Logical Volume Manager, an extension of LVM that allows the cluster to manage shared storage with LVM
iSCSI: a network storage technology
GNBD: Global Network Block Device, a supplemental component of GFS used by RHCS to allocate and manage shared storage

RHCS operating principles
Distributed cluster manager (CMAN)
Runs on every node and provides cluster management services
Manages cluster membership, messaging, and notifications
Counts the number of healthy nodes, based on each node's running state, to determine whether the cluster can survive
Distributed lock manager (DLM)
A foundation layer of RHCS that provides a common locking mechanism for the cluster
Runs on every node; GFS synchronizes access to file system metadata through the DLM locking mechanism
CLVM synchronizes updates to LVM volumes and volume groups through DLM
Because the lock manager is distributed across the nodes, a single node failure does not force a full recovery or become a performance bottleneck
Fence devices
An essential part of a cluster, used to avoid split-brain caused by unpredictable failures
Split-brain occurs when nodes can no longer exchange state information, each believes it is the master node, and they start competing for the same resources
When the master node hangs or goes down, the standby node first invokes the fence device to reboot the faulty node or isolate it from the network
The fence mechanism can be implemented with power fencing or storage fencing


——————————————————————————————————————————————————————————————————————


Example: Building a RHCS server
Environment: Two nodes participate in the cluster, and the third node provides shared storage
Server: Jiedian1
eth0:192.168.4.1
eth1:192.168.2.1
eth2:201.1.1.1 (provides the external service)
eth3:201.1.2.1
Server: Jiedian2
eth0:192.168.4.2
eth1:192.168.2.2
eth2:201.1.1.2 (provides the external service)
eth3:201.1.2.2
The storage node needs an additional hard disk and does not need an externally facing service IP
Server: Cunchu
eth0:192.168.4.3
eth1:192.168.2.3
eth3:201.1.2.3
qemu-img create -f qcow2 /var/lib/libvirt/images/iscsi1.img 20G    (run on the physical host to create the disk image for the storage node)
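Creating the qcow2 image only puts a file on the physical host; it still has to be attached to the storage virtual machine as its extra disk. A minimal sketch, assuming the storage VM's libvirt domain is also named cunchu and the disk should show up inside the guest as vdb (both names are assumptions):
virsh attach-disk cunchu /var/lib/libvirt/images/iscsi1.img vdb --subdriver qcow2 --persistent    (attach the image to the cunchu domain as vdb; --persistent keeps it across reboots)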

Configure Yum
[root@jiedian1 ~]# cat /etc/yum.repos.d/rhel6.repo    (create all of the yum repositories, then copy the configuration file to the other hosts)
[rhel-6]
name=linux NSD
baseurl=file:///root/myiso/
... ... ...
[hig]
name=linux NSD
baseurl=file:///root/myiso/highavailability/
... ... ...
[loa]
name=linux NSD
baseurl=file:///root/myiso/loadbalancer
... ... ...
[res]
name=linux NSD
baseurl=file:///root/myiso/resilientstorage
... ... ...
[sca]
name=linux NSD
baseurl=file:///root/myiso/scalablefilesystem
... ... ...
[root@jiedian1 ~]# scp /etc/yum.repos.d/rhel6.repo xxx.xxx.x.x:/etc/yum.repos.d/
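The "... ... ..." lines above stand for the remaining options of each repository stanza. For reference, a complete stanza usually looks like the sketch below; the enabled and gpgcheck values are assumptions rather than values copied from the original file:
[hig]
name=linux NSD
baseurl=file:///root/myiso/highavailability/
enabled=1
gpgcheck=0
After copying the file, yum repolist on each host should list all five repositories.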
Configuring iSCSI on the storage side
[root@cunchu ~]# parted /dev/vdb    (partition the storage disk)
(parted) mklabel gpt
(parted) mkpart primary ext4 1M -1
[root@cunchu ~]# yum -y install scsi-target-utils
[root@cunchu ~]# vim /etc/tgt/targets.conf
<target iqn.2017-09.org.hydra.hydra007>
    backing-store /dev/vdb1
    initiator-address 192.168.4.1
    initiator-address 192.168.4.2
    initiator-address 192.168.2.1
    initiator-address 192.168.2.2
</target>
[root@cunchu ~]# /etc/init.d/tgtd start; chkconfig tgtd on
[root@cunchu ~]# tgt-admin -s
... ...
LUN: 1    (a LUN 1 entry should appear here)
... ...
Configure the iSCSI client and multipathing on both cluster nodes (same operations on both)
[root@jiedian1 ~]# yum -y install iscsi-initiator-utils
[root@jiedian1 ~]# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.4.3 --discover
[root@jiedian1 ~]# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.2.3 --discover
[root@jiedian1 ~]# service iscsi restart
[root@jiedian1 ~]# lsblk    (view the block devices)
sdb    8:16   0   20G  0 disk
sda    8:0    0   20G  0 disk
[root@jiedian1 ~]# chkconfig iscsid on
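Before moving on to multipathing, it is worth confirming that both iSCSI sessions (one per portal) are actually logged in; a quick check:
[root@jiedian1 ~]# iscsiadm -m session    (should list two sessions, one to 192.168.4.3 and one to 192.168.2.3)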
[root@jiedian1 ~]# yum -y install device-mapper-multipath    (install the multipath package)
[root@jiedian1 ~]# mpathconf --user_friendly_names n    (generate the configuration file)
[root@jiedian1 ~]# scsi_id --whitelisted --device=/dev/sda    (get the WWID)
1IET 00010001
[root@jiedian1 ~]# vim /etc/multipath.conf
defaults {
    user_friendly_names no
    getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
}
... ...
multipaths {
    multipath {
        wwid "1IET 00010001"    (fill in the WWID obtained above)
        alias mpatha    (the alias for the multipath device)
    }
}
[root@jiedian1 ~]# /etc/init.d/multipathd start; chkconfig multipathd on
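Once multipathd is running, the alias can be verified; the exact output format varies with the package version, so this is only the expected shape:
[root@jiedian1 ~]# multipath -ll    (the mpatha alias should appear, with both sda and sdb listed as its paths)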
If the NetworkManager service is installed on the cluster nodes, it must be shut down
Configure name resolution for all three servers
[root@jiedian1 ~]# vim /etc/hosts
201.1.1.1 jiedian1.public.hydra.cn
201.1.2.1 jiedian1.private.hydra.cn jiedian1
201.1.1.2 jiedian2.public.hydra.cn
201.1.2.2 jiedian2.private.hydra.cn jiedian2
201.1.2.3 jiedian3.private.hydra.cn jiedian3
201.1.2.254 host.private.hydra.cn host
Install ricci on both cluster nodes (same operation on both nodes)
[root@jiedian1 ~]# yum -y install ricci
[root@jiedian1 ~]# echo 'redhat' | passwd --stdin ricci    (ricci is the agent used to connect to the cluster nodes; give it a password)
[root@jiedian1 ~]# /etc/init.d/ricci start; chkconfig ricci on
Install luci (it can go on any host, but preferably not on a cluster node)
[root@cunchu ~]# yum -y install luci
[root@cunchu ~]# /etc/init.d/luci start; chkconfig luci on
Open a web browser to https://cunchu:8084 (when luci starts it prints a URL containing the hostname it was installed on; the page can also be reached by IP)
[root@host ~]# firefox https://201.1.2.3:8084
When the page appears, log in with the root account and password of the server running luci
Click Manage Clusters -> Create
Errors that can occur when creating the cluster:
Authentication failed: an authentication error means the ricci password is wrong
SSL connection error: check whether the ricci service is running, and check the firewall and SELinux
If some nodes show as abnormal (red) after installation, set cman and rgmanager to start on boot and reboot those nodes (see the sketch below)
If a node does not appear in the web page after installation completes, add it again from the web page
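A minimal sketch of the red-node fix mentioned above, run on each abnormal node:
[root@jiedian1 ~]# chkconfig cman on
[root@jiedian1 ~]# chkconfig rgmanager on
[root@jiedian1 ~]# reboot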
Configure fencing on the physical machine, generate the configuration file, and then start the service
Packages: fence-virtd fence-virtd-libvirt fence-virtd-multicast
[root@host ~]# yum -y install fence-virtd fence-virtd-libvirt fence-virtd-multicast
[root@host ~]# fence_virtd -c
...
Interface [none]: public2    (change to public2)
Backend module [checkpoint]: libvirt    (change to libvirt)
...
[root@host ~]# mkdir /etc/cluster    (directory for the key file)
[root@host ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4k count=1    (fill the key file with random data)
[root@host ~]# scp /etc/cluster/fence_xvm.key 201.1.2.1:/etc/cluster/
[root@host ~]# scp /etc/cluster/fence_xvm.key 201.1.2.2:/etc/cluster/
[root@host ~]# /etc/init.d/fence_virtd start; chkconfig fence_virtd on
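Fencing can be verified from a cluster node before relying on it in the web interface; fence_xvm reads the shared key and talks to fence_virtd on the physical machine. Note that the reboot test really does reboot the named domain (rh6_node01 is the domain name used in the web configuration below):
[root@jiedian1 ~]# fence_xvm -o list    (lists the virtual machines fence_virtd can see)
[root@jiedian1 ~]# fence_xvm -o reboot -H rh6_node01    (test fencing; this reboots that virtual machine)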
Configure fencing in the web interface
[root@host ~]# firefox https://201.1.2.3:8084
Click Fence Devices -> Add, then choose your fence type from the drop-down list
Click Nodes -> select the host -> Add Fence Method; after creating the method, click Add Fence Instance
In the Domain field, enter the virtual machine's full name, e.g. rh6_node01
Click Failover Domains
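For reference, the fencing pieces that luci writes into /etc/cluster/cluster.conf on the nodes end up roughly like the sketch below; the device and method names are illustrative, only the agent and domain values come from the steps above:
<fencedevice agent="fence_xvm" name="fence1"/>
...
<clusternode name="jiedian1.private.hydra.cn" nodeid="1">
    <fence>
        <method name="fence-method-1">
            <device domain="rh6_node01" name="fence1"/>
        </method>
    </fence>
</clusternode>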

Build a highly available Web cluster
Resources: Apache (install it only, do not start it), shared storage provided by the iSCSI disk, and the IP address 201.1.1.100 for external service

Install Apache (same operation on both nodes; partitioning and formatting only needs to be done on one node, since the storage is shared)
[root@jiedian1 ~]# yum -y install httpd
[root@jiedian1 ~]# parted /dev/mapper/mpatha
(parted) mklabel gpt
(parted) mkpart primary ext4 1M 1024M
[root@jiedian1 ~]# partprobe    (reload the partition table)
[root@jiedian1 ~]# mkfs.ext4 /dev/mapper/mpathap1
[root@jiedian1 ~]# mount /dev/mapper/mpathap1 /var/www/html/
[root@jiedian1 ~]# vim /var/www/html/index.html
Hydra
[root@jiedian1 ~]# umount /var/www/html/    (unmount after writing the page; the cluster decides which node mounts it)
Configure the service in the web interface
[root@host ~]# firefox https://201.1.2.3:8084
Click Resources -> Add -> Apache; the Name can be anything, clear the other fields, and enter 3 in the last field
Click Resources -> Add -> IP Address; the netmask must be 24
Click Resources -> Add -> Filesystem
Click Service Groups -> Add, and add the resources to the service group
firefox http://201.1.1.100/    (test and verify)
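Besides opening the page in a browser, the cluster can be checked from either node; clustat shows which node currently owns the service, and clusvcadm relocates it to test failover (the service name webserver below is an assumption, use whatever name was entered under Service Groups):
[root@jiedian1 ~]# clustat    (shows cluster members and which node is running the service)
[root@jiedian1 ~]# clusvcadm -r webserver -m jiedian2    (relocate the service to the other node to test failover)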
——————————————————————————————————————————————————————————————————————————————————————————

