Premise:
This configuration uses two test nodes, node1 and node2, whose IP addresses are 202.207.178.6 and 202.207.178.7 respectively.
(To avoid interference, first turn off the firewall and SELinux.)
I. Install and configure Corosync and the related packages
1. Preparatory work
1) The node name must match the output of the uname -n command
Node1:
# hostname node1
# vim /etc/sysconfig/network
HOSTNAME=node1
Node2:
# hostname node2
# vim /etc/sysconfig/network
HOSTNAME=node2
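The steps above can be wrapped in a small helper that makes the change persistent; a minimal sketch, with the file path as a parameter so it can be rehearsed on a scratch copy (the function name is an assumption; on a real node, also run `hostname node1` so the change takes effect immediately):

```shell
#!/bin/sh
# persist_hostname: set HOSTNAME=<name> in a sysconfig-style network file.
# On CentOS 6 the real file is /etc/sysconfig/network; the path is a
# parameter here so the edit can be tried on a scratch copy first.
persist_hostname() {
  name="$1"
  netfile="$2"
  if grep -q '^HOSTNAME=' "$netfile"; then
    # replace the existing assignment in place
    sed -i "s/^HOSTNAME=.*/HOSTNAME=$name/" "$netfile"
  else
    # no assignment yet: append one
    echo "HOSTNAME=$name" >> "$netfile"
  fi
}
```

Usage on node1 would then be `persist_hostname node1 /etc/sysconfig/network`.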
2) Communication between nodes must be trusted via SSH
[root@node1 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
[root@node1 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
[root@node2 ~]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
[root@node2 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
3) Time between nodes in the cluster must be synchronized
Synchronize the time against an NTP server:
# ntpdate <IP of a host running the NTP service>
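If ntpd is not running continuously, a cron entry can keep the clocks aligned between syncs; a minimal sketch, where 202.207.178.1 stands in for whatever host actually runs the NTP service:

```
# /etc/cron.d/ntpsync (sketch) - resync every 30 minutes.
# 202.207.178.1 is an assumed NTP server address for illustration.
*/30 * * * * root /usr/sbin/ntpdate 202.207.178.1 >/dev/null 2>&1
```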
4) Configure local name resolution:
[root@node1 ~]# vim /etc/hosts
202.207.178.6 node1
202.207.178.7 node2
[root@node1 ~]# scp /etc/hosts node2:/etc/
2. Install the following RPM packages:
cluster-glue, cluster-glue-libs
corosync, corosynclib
heartbeat, heartbeat-libs
libesmtp
pacemaker, pacemaker-cts, pacemaker-libs
resource-agents
# yum install cluster-glue
# yum install --nogpgcheck *.rpm (first copy heartbeat-3.0.4-2.el6.i686.rpm and heartbeat-libs-3.0.4-2.el6.i686.rpm to the home directory)
# yum install corosync
# yum -y install libesmtp
# yum install pacemaker
# yum install pacemaker-cts
3. Configure Corosync (the following commands are performed on node1):
# cd /etc/corosync
# cp corosync.conf.example corosync.conf
Then edit corosync.conf. Modify the following directives:
bindnetaddr: 202.207.178.0   # network address of the segment the nodes reside on
secauth: on                  # enable secure authentication
threads: 2                   # number of worker threads to start
to_syslog: no                # do not log to syslog's default location
timestamp: off               # skip timestamps to improve performance; stamping each entry costs a system call
Add the following, define pacemaker to start with Corosync, and define working users and groups for Corosync:
service {
    ver: 0
    name: pacemaker
}
aisexec {
    user: root
    group: root
}
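Putting the modified directives and the added blocks together, the edited corosync.conf would look roughly like this (mcastaddr, mcastport, and the logging details follow the shipped example file's defaults and are assumptions here):

```
compatibility: whitetank

totem {
    version: 2
    secauth: on
    threads: 2
    interface {
        ringnumber: 0
        bindnetaddr: 202.207.178.0
        mcastaddr: 239.255.1.1    # assumption: any unused multicast address
        mcastport: 5405
    }
}

logging {
    to_stderr: no
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: no
    timestamp: off
    debug: off
}

service {
    ver: 0
    name: pacemaker
}

aisexec {
    user: root
    group: root
}
```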
Generate the authentication key file used for communication between nodes:
# corosync-keygen
Copy corosync.conf and authkey to node2:
# scp -p corosync.conf authkey node2:/etc/corosync/
4. Try starting it (the following commands are performed on node1):
# service corosync start
Note: node2 should be started from node1, not directly on the node2 host.
# ssh node2 '/etc/init.d/corosync start'
5. Verify that everything is working
Check whether the Corosync engine started properly:
# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Oct 00:38:06 corosync [MAIN  ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Oct 00:38:06 corosync [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Check whether the initial membership notifications were issued properly:
# grep TOTEM /var/log/cluster/corosync.log
Oct 00:38:06 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Oct 00:38:06 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Oct 00:38:06 corosync [TOTEM ] The network interface [202.207.178.6] is now up.
Oct 00:39:35 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Check the startup process for any errors:
# grep ERROR: /var/log/messages | grep -v unpack_resources
Check whether pacemaker started normally:
# grep pcmk_startup /var/log/cluster/corosync.log
Oct 00:38:06 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
Oct 00:38:06 corosync [pcmk  ] Logging: Initialized pcmk_startup
Oct 00:38:06 corosync [pcmk  ] info: pcmk_startup: Maximum core file size is: 4294967295
Oct 00:38:06 corosync [pcmk  ] info: pcmk_startup: Service: 9
Oct 00:38:06 corosync [pcmk  ] info: pcmk_startup: Local hostname: node1
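The individual grep checks above can be bundled into one pass; a minimal sketch, with the log path as a parameter so the filtering logic can be tried on any file (the function name is an assumption):

```shell
#!/bin/sh
# check_corosync_log: run the verification greps from this section against
# one log file (normally /var/log/cluster/corosync.log).
check_corosync_log() {
  log="$1"
  # engine startup and configuration-file messages
  grep -e "Corosync Cluster Engine" -e "configuration file" "$log"
  # membership (TOTEM) notifications
  grep TOTEM "$log"
  # errors, ignoring the known-noisy unpack_resources lines
  if grep ERROR: "$log" | grep -v unpack_resources; then
    echo "errors found above"
  else
    echo "no unexpected errors"
  fi
}
```

Usage: `check_corosync_log /var/log/cluster/corosync.log`.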
Use the following command to view the status of the cluster nodes:
# crm status
Last updated: Tue Oct 17:28:10
Last change: Tue Oct 17:21:56 by hacluster via crmd on node1
Stack: classic openais (with plugin)
Current DC: node1 (version 1.1.14-8.el6_8.1-70404b0) - partition with quorum
2 nodes and 0 resources configured, 2 expected votes
Online: [ node1 node2 ]
From the output above, both nodes have started normally and the cluster is in working order.
II. Configuring resources and constraints
1. Install the crmsh package:
Pacemaker itself is only a resource manager; an interface is needed to define and manage resources on it, and crmsh is such a configuration interface for Pacemaker. Starting with Pacemaker 1.1.8, crmsh became a standalone project and is no longer shipped with Pacemaker. crmsh provides a command-line interface for managing Pacemaker clusters; it is powerful, user-friendly, and widely used.
Add the following repository in a configuration file under /etc/yum.repos.d/:
[ewai]
name=aaa
baseurl=http://download.opensuse.org/repositories/network:/ha-clustering:/stable/CentOS_CentOS-6/
enabled=1
gpgcheck=0
# yum clean all
# yum makecache
[root@node1 yum.repos.d]# yum install crmsh
2. Check the configuration for syntax errors:
crm(live)configure# verify
On a fresh setup, verify typically complains that stonith-enabled is true while no STONITH resources are defined; since this test environment has no fencing device, we can disable stonith with the following command:
# crm configure property stonith-enabled=false
or, inside the crm shell:
crm(live)configure# property stonith-enabled=false
crm(live)configure# commit
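In a two-node cluster, the surviving node loses quorum when its peer fails, so many guides for this kind of setup also relax the quorum policy alongside the stonith change (a judgment call, shown in the same crmsh syntax):

```
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# commit
```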
3. Configure resources (here, a web service cluster; httpd should already be installed and must not be set to start automatically at boot):
crm(live)configure# primitive webip ocf:heartbeat:IPaddr params ip=202.207.178.4 nic=eth0 cidr_netmask=24
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# primitive httpd lsb:httpd
crm(live)configure# commit
crm(live)configure# group webservice webip httpd
crm(live)configure# commit
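The section title mentions constraints; the group above already keeps the IP and httpd together and ordered, but explicit constraints would look like this in crmsh syntax (the constraint names and the score are assumptions for illustration):

```
crm(live)configure# order webip_before_httpd inf: webip httpd
crm(live)configure# location prefer_node1 webservice 100: node1
crm(live)configure# commit
```

The order constraint makes the address come up before httpd starts; the location constraint expresses a mild preference for running the group on node1.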
The resources are now configured and ready for testing!
Corrections and suggestions are welcome!
This article is from the "10917734" blog; please keep this source: http://10927734.blog.51cto.com/10917734/1866146
Corosync+Pacemaker: enabling high availability for web services