Linux Corosync + Pacemaker
A complete HA setup.
Prerequisites for installing and configuring a high-availability cluster:
1. Node names: the name of every node in the cluster must be resolvable by the other nodes.
/etc/hosts
The forward and reverse resolution of each hostname in hosts must match the output of uname -n;
2. Time synchronization: node clocks must be kept in sync, using a network time (NTP) server;
3. Optional: the nodes can reach each other over SSH with key-based authentication;
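The prerequisite checks above can be sketched as follows. The addresses are lab assumptions (marvin's appears later in this article, sherry's is made up here), and the NTP/SSH steps are shown as comments since they need live hosts:

```shell
# Sketch of the prerequisite checks; IPs are lab assumptions.
cat > /tmp/hosts.example <<'EOF'
192.168.1.220   marvin
192.168.1.221   sherry
EOF
# Each node's hostname (uname -n) must appear in the hosts file:
grep -w "$(uname -n)" /tmp/hosts.example || echo "add $(uname -n) to /etc/hosts"
# Time sync and SSH keys are done once per node (run manually):
#   ntpdate <ntp-server>
#   ssh-keygen -t rsa && ssh-copy-id root@marvin
```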
Installation:
[root@marvin heartbeat]# yum install corosync -y
Configuration:
[root@sherry heartbeat]# cd /etc/corosync/
[root@sherry corosync]# ls
corosync.conf.example  corosync.conf.example.udpu  service.d  uidgid.d
[root@sherry corosync]# cp corosync.conf.example corosync.conf
[root@sherry corosync]# vim corosync.conf
compatibility: whitetank        # compatible with OpenAIS versions earlier than 0.8
totem {
    version: 2                  # totem protocol version
    secauth: on                 # authentication; without it, any host that knows the multicast address can join, so enabling it is best
    threads: 0                  # worker threads for authentication; 0 uses the default
    interface {
        ringnumber: 0           # ring number; must be unique per interface so heartbeat traffic does not loop
        bindnetaddr: 192.168.1.0    # network address to bind to
        mcastaddr: 225.122.111.111  # multicast address; one from the transient range starting at 224.0.1.0 is recommended
        mcastport: 5405         # multicast port
        ttl: 1                  # TTL of 1 avoids multicast loops
    }
}
logging {
    fileline: off
    to_stderr: no               # do not log to standard error
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: no               # one log target is enough, so syslog is off
    debug: off
    timestamp: no               # whether to record timestamps
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
amf {
    mode: disabled              # AMF is not used
}
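If the network blocks multicast, corosync can run over unicast UDP instead; the shipped corosync.conf.example.udpu shows the syntax. A minimal sketch of the totem section using this lab's addresses (sherry's 192.168.1.221 is an assumption):

```
totem {
    version: 2
    secauth: on
    interface {
        member {
            memberaddr: 192.168.1.220    # marvin
        }
        member {
            memberaddr: 192.168.1.221    # sherry (address assumed)
        }
        ringnumber: 0
        bindnetaddr: 192.168.1.0
        mcastport: 5405
    }
    transport: udpu              # unicast UDP instead of multicast
}
```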
Install pacemaker:
[root@sherry corosync]# yum install pacemaker -y
Starting pacemaker with corosync: (with ver: 1 the service is not started by corosync; it must be started manually)
[root@sherry corosync]# vim corosync.conf
service {
    ver: 1                      # run pacemaker as a separate daemon rather than a corosync plugin
    name: pacemaker
}
aisexec {
    user: root
    group: root
}
Key file:
# In production, generate the key from real entropy (type on the keyboard while corosync-keygen runs); the swap below is a lab shortcut
[root@sherry corosync]# mv /dev/random /dev/random.bak
[root@sherry corosync]# mv /dev/urandom /dev/random
[root@sherry corosync]# corosync-keygen
[root@sherry corosync]# mv /dev/random /dev/urandom
[root@sherry corosync]# mv /dev/random.bak /dev/random
The key is generated with mode 400:
[root@sherry corosync]# ll
total 24
-r-------- 1 root root  128 May 31 20:54 authkey
-rw-r--r-- 1 root root  476 May 31 20:50 corosync.conf
-rw-r--r-- 1 root root 2663 May 11 corosync.conf.example
-rw-r--r-- 1 root root 1073 May 11 corosync.conf.example.udpu
drwxr-xr-x 2 root root 4096 May 11 service.d
drwxr-xr-x 2 root root 4096 May 11 uidgid.d
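A quick sanity check of the generated key; a sketch assuming the default path /etc/corosync/authkey (the KEYFILE override is only for illustration):

```shell
# Sketch: verify the key's permissions (0400) and size (128 bytes).
KEYFILE=${KEYFILE:-/etc/corosync/authkey}
if [ -f "$KEYFILE" ]; then
    stat -c '%a %s' "$KEYFILE"   # expect: 400 128
else
    echo "authkey not found at $KEYFILE"
fi
```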
Copy the configuration files to the other node:
[root@sherry corosync]# scp -P 6789 -p authkey corosync.conf root@marvin:/etc/corosync/
Installing crmsh:
[root@sherry yum.repos.d]# cd /etc/yum.repos.d/
[root@sherry yum.repos.d]# wget http://download.openSUSE.org/repositories/network:ha-clustering:Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
[root@sherry yum.repos.d]# yum install crmsh -y
Startup scripts:
[root@sherry corosync]# /etc/init.d/corosync start
[root@sherry corosync]# /etc/init.d/pacemaker start
# Stop them in the reverse order
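Enabling the services at boot can be scripted as well; a sketch assuming SysV init on CentOS 6 (chkconfig availability is checked first, since corosync must come up before pacemaker):

```shell
# Enable both services at boot, in dependency order.
for svc in corosync pacemaker; do
    if command -v chkconfig >/dev/null 2>&1; then
        chkconfig "$svc" on 2>/dev/null || echo "could not enable $svc (is it installed?)"
    else
        echo "chkconfig not available; enable $svc by hand"
    fi
done
```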
Lab servers: marvin and sherry
Initialization:
Stonith is enabled by default, but this cluster has no stonith device yet, so disable it first:
crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# verify
crm(live)configure# commit
Set the quorum policy (a two-node cluster should keep running when quorum is lost):
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# verify
crm(live)configure# commit
View the full configuration:
crm(live)configure# show
node marvin
node sherry
property cib-bootstrap-options: \
    have-watchdog=false \
    dc-version=1.1.14-8.el6-70404b0 \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes=2 \
    stonith-enabled=false \
    no-quorum-policy=ignore
Define resources:
Define an IP address:
crm(live)configure# primitive webip ocf:heartbeat:IPaddr params ip=192.168.1.199
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show
node marvin
node sherry
primitive webip IPaddr \
    params ip=192.168.1.199
property cib-bootstrap-options: \
    have-watchdog=false \
    dc-version=1.1.14-8.el6-70404b0 \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes=2 \
    stonith-enabled=false
[root@marvin ~]# ip addr show eth1
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:0c:34:2c brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.220/24 brd 192.168.1.255 scope global eth1
    inet 192.168.1.199/24 brd 192.168.1.255 scope global secondary eth1
    inet6 fe80::20c:29ff:fe0c:342c/64 scope link
       valid_lft forever preferred_lft forever
Adding monitoring to an existing resource (a monitored resource is restarted automatically if its process is killed):
crm(live)configure# monitor webserver 30s:15s     # check every 30s with a 15s timeout; to remove the monitor, edit the resource directly
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show
primitive webserver lsb:nginx \
    meta target-role=Stopped \
    op monitor interval=30s timeout=15s
NFS (defining monitoring; a correct definition, shown here but not committed):
crm(live)configure# primitive webstore ocf:heartbeat:Filesystem params device="sherry:/nfsshared/node1" directory="/mnt/nfs/node1" fstype="nfs" op monitor interval=20s timeout=40s op start timeout=60s op stop timeout=60s on-fail=restart
crm(live)configure# verify
Nginx (defining monitoring):
crm(live)configure# primitive webserver lsb:nginx op monitor interval=30s timeout=15s on-fail=restart
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show
node marvin
node sherry
primitive webip IPaddr \
    params ip=192.168.1.199
primitive webserver lsb:nginx
property cib-bootstrap-options: \
    have-watchdog=false \
    dc-version=1.1.14-8.el6-70404b0 \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes=2 \
    stonith-enabled=false
Status:
crm(live)# status
Last updated: Wed Jun  1 20:38:32 2016    Last change: Wed Jun  1 20:36:32 2016 by root via cibadmin on sherry
Stack: classic openais (with plugin)
Current DC: marvin (version 1.1.14-8.el6-70404b0) - partition with quorum
2 nodes and 2 resources configured, 2 expected votes
Online: [ marvin sherry ]
Full list of resources:
 webip      (ocf::heartbeat:IPaddr):    Started marvin
 webserver  (lsb:nginx):                Started sherry
Stop a resource:
crm(live)# resource
crm(live)resource# stop webserver
crm(live)resource# status
 webip      (ocf::heartbeat:IPaddr):    Started
 webserver  (lsb:nginx):                (target-role:Stopped) Stopped
Clear a resource's state:
crm(live)resource# cleanup webserver
Cleaning up webserver on marvin, removing fail-count-webserver
Cleaning up webserver on sherry, removing fail-count-webserver
* The configuration specifies that 'webserver' should remain stopped
Waiting for 2 replies from the CRMd.. OK
Group operations:
Define the resources first, then add them to a group.
crm(live)# status
Last updated: Wed Jun  1 20:38:32 2016    Last change: Wed Jun  1 20:36:32 2016 by root via cibadmin on sherry
Stack: classic openais (with plugin)
Current DC: marvin (version 1.1.14-8.el6-70404b0) - partition with quorum
2 nodes and 2 resources configured, 2 expected votes
Online: [ marvin sherry ]
Full list of resources:
 webip      (ocf::heartbeat:IPaddr):    Started marvin
 webserver  (lsb:nginx):                Started sherry
crm(live)# configure
crm(live)configure# group webservice webip webserver
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show
node marvin
node sherry
primitive webip IPaddr \
    params ip=192.168.1.199
primitive webserver lsb:nginx
group webservice webip webserver
property cib-bootstrap-options: \
    have-watchdog=false \
    dc-version=1.1.14-8.el6-70404b0 \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes=2 \
    stonith-enabled=false
Delete a group:
crm(live)resource# stop webservice
crm(live)configure# delete webservice     # the resources that were in the group still exist
Node operations:
Take a node offline:
crm(live)# node
crm(live)node# standby marvin     # resources are migrated automatically
Bring a node back online:
crm(live)node# online marvin
Clean up a node (clears the resource state recorded for the node):
crm(live)node# clearstate marvin
Colocation constraints:
Bind resources together:
crm(live)configure# colocation webserver_and_webip inf: webserver webip
crm(live)configure# verify
crm(live)configure# commit
View:
crm(live)configure# show
node marvin \
    attributes standby=off
node sherry
primitive webip IPaddr \
    params ip=192.168.1.199
primitive webserver lsb:nginx
colocation webserver_and_webip inf: webserver webip
property cib-bootstrap-options: \
    have-watchdog=false \
    dc-version=1.1.14-8.el6-70404b0 \
    cluster-infrastructure="classic openais (with plugin)" \
    expected-quorum-votes=2 \
    stonith-enabled=false
For the details, view the XML:
crm(live)configure# show xml
<rsc_colocation id="webserver_and_webip" score="INFINITY" rsc="webserver" with-rsc="webip"/>     # webserver follows webip
Order constraints:
crm(live)configure# order webip-before-webserver mandatory: webip webserver     # start webip first, then webserver
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show
order webip-before-webserver Mandatory: webip webserver
crm(live)configure# show xml
<rsc_order id="webip-before-webserver" kind="Mandatory" first="webip" then="webserver"/>
Location constraints:
crm(live)configure# location webip_on_marvin webip 200: marvin
crm(live)configure# verify
crm(live)configure# commit
View:
crm(live)# status
Last updated: Wed Jun  1 21:11:58 2016    Last change: Wed Jun  1 21:11:32 2016 by root via cibadmin on sherry
Stack: classic openais (with plugin)
Current DC: marvin (version 1.1.14-8.el6-70404b0) - partition with quorum
2 nodes and 2 resources configured, 2 expected votes
Online: [ marvin sherry ]
Full list of resources:
 webip      (ocf::heartbeat:IPaddr):    Started marvin
 webserver  (lsb:nginx):                Started marvin
Done.