The following steps show each command in both crm and pcs syntax; the pcs equivalent appears in square brackets.
1. Install corosync and pacemaker from the rpm packages on CentOS 6.5:
yum -y install corosync pacemaker
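Depending on the pacemaker version and repositories, the crm shell (crmsh) and pcs ship as separate packages; if the commands used below are missing, install them the same way (package availability in your configured repos is an assumption):
yum -y install pcs # crmsh may need an extra third-party repository on CentOS 6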
2. Configure pacemaker to run as a corosync plug-in:
Edit /etc/corosync/corosync.conf (the package ships a corosync.conf.example to start from) so that it contains:
totem {
    secauth: on # enable authentication for communication between cluster nodes
    interface {
        bindnetaddr: 192.168.0.0 # network address of the NIC corosync binds to
    }
}
service {
    name: pacemaker # name of the resource manager to launch
    ver: 0 # 0: pacemaker runs as a corosync plug-in; 1: pacemaker runs as a standalone daemon, i.e. it must be started manually after corosync is up
}
aisexec {
    user: root # user that the pacemaker resource manager runs as
    group: root # group that the pacemaker resource manager runs as
}
3. Generate the key used for inter-node communication:
corosync-keygen
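corosync-keygen writes /etc/corosync/authkey, which must be identical on every node, and the configuration file must match as well. A minimal sketch, assuming a second node reachable as node2:
scp -p /etc/corosync/authkey /etc/corosync/corosync.conf node2:/etc/corosync/ # -p preserves the restrictive mode the key requires
service corosync start # run on every node before continuing with step 4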
4. Configure cluster attributes
crm configure property no-quorum-policy=ignore
[pcs property set no-quorum-policy=ignore]
# Keep resources running even when the cluster loses quorum; a two-node cluster can never have quorum after one node fails, so this is needed here.
crm configure property stonith-enabled=false
[pcs property set stonith-enabled=false]
# Disable STONITH. By default the cluster refuses to start resources when no STONITH device is configured.
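A quick way to confirm both properties took effect is to dump the current settings:
crm configure show # the property section should list no-quorum-policy and stonith-enabled
[pcs property list]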
5. Configure resources and monitor resources
1. Configure the virtual IP address
crm configure primitive myip ocf:heartbeat:IPaddr params ip="192.168.0.100" nic="eth0" cidr_netmask="24" op monitor interval=20s timeout=30s
[pcs resource create myip ocf:heartbeat:IPaddr ip="192.168.0.100" nic="eth0" cidr_netmask="24" op monitor interval=20s timeout=30s]
# This defines a primitive resource named myip. In ocf:heartbeat:IPaddr, the resource agent class is ocf, the provider is heartbeat, and the agent itself is IPaddr. params passes the agent its parameters (ip="192.168.0.100" nic="eth0" cidr_netmask="24"), and op defines an operation on the resource (monitor, start, stop, status, and so on). Here the monitor operation checks the health of the resource every 20s, and a check that has not returned within 30s is treated as failed. Note that pcs takes the agent parameters directly, without the params keyword.
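Once myip starts, a sanity check on the active node (outside of the cluster configuration) confirms that the address really was added:
ip addr show eth0 # 192.168.0.100/24 should appear on the interface
crm status # myip should be reported as Started on one node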
2. Configure the mysqld service
crm configure primitive myservice ocf:heartbeat:mysql params binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" datadir="/var/lib/mysql" pid="/var/run/mysqld/mysql.pid" socket="/tmp/mysql.sock" additional_parameters="--bind-address=192.168.0.100" op start timeout=120s op stop timeout=120s op monitor interval=20s timeout=30s
[pcs resource create myservice ocf:heartbeat:mysql binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" datadir="/var/lib/mysql" pid="/var/run/mysqld/mysql.pid" socket="/tmp/mysql.sock" additional_parameters="--bind-address=192.168.0.100" op start timeout=120s op stop timeout=120s op monitor interval=20s timeout=30s]
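With the service running, connecting through the virtual IP exercises the whole stack; this assumes a MySQL account that is allowed to log in over the network:
mysql -h 192.168.0.100 -u root -p -e "SELECT 1;"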
3. Configure shared storage
crm configure primitive mystore ocf:heartbeat:Filesystem params device="192.168.0.13:/mysqldata" directory="/var/lib/mysql" fstype="nfs" op start timeout=60s op stop timeout=60s op monitor interval=20s timeout=60s
[pcs resource create mystore ocf:heartbeat:Filesystem device="192.168.0.13:/mysqldata" directory="/var/lib/mysql" fstype="nfs" op start timeout=60s op stop timeout=60s op monitor interval=20s timeout=60s]
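This step assumes the NFS server at 192.168.0.13 already exports /mysqldata to the cluster nodes. A minimal /etc/exports entry on that server could look like the following (the export options are an assumption; tighten them to your needs):
/mysqldata 192.168.0.0/24(rw,no_root_squash) # rw for the cluster subnet; no_root_squash so file ownership can be managed from the nodes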
6. Configure Constraints
1. Configure colocation constraints
crm configure colocation myip_with_mystore_myservice inf: myip mystore myservice
[pcs constraint colocation set myip mystore myservice]
# The myip, mystore, and myservice resources must run together on the same node. Alternatively, define a resource group and put the three resources into it, as sketched below.
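A sketch of the group alternative (the name mygroup is arbitrary); a group both colocates its members and starts them in the listed order, so it can stand in for the constraints in this step:
crm configure group mygroup myip mystore myservice
[pcs resource group add mygroup myip mystore myservice]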
2. Configure order constraints
crm configure order myip_then_mystore_then_myservice inf: myip mystore myservice
[pcs constraint order set myip mystore myservice]
# The startup order is myip, then mystore, then myservice; they stop in the reverse order.
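The constraints now in place can be listed from either shell:
crm configure show # the colocation and order statements should both appear
[pcs constraint] # lists location, ordering, and colocation constraints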
7. Configure network ping node monitoring
1. Configure the ping node primitive resource
crm configure primitive pnode ocf:pacemaker:ping params host_list="192.168.0.200" multiplier=100 op monitor interval=10s timeout=60s op start timeout=60s
[pcs resource create pnode ocf:pacemaker:ping host_list="192.168.0.200" multiplier=100 op monitor interval=10s timeout=60s op start timeout=60s]
# 192.168.0.200 is the gateway address (or any other host that answers ping); host_list may contain several hosts separated by spaces. multiplier scales the score a node earns: the number of hosts the node can ping is multiplied by this value, so with multiplier=100 a node that reaches one host scores 1*100, two hosts 2*100, and so on. The result is stored in the pingd node attribute.
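The accumulated score can be inspected per node via the pingd attribute:
crm_mon -A1 # -A shows node attributes, -1 prints the status once and exits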
2. Configure the pnode clone resource
crm configure clone cl_pnode pnode
[pcs resource clone pnode]
# The ping probe has to run on every node, so pnode is cloned.
3. Resource failover when the ping node fails
crm configure location mystore_on_ping mystore rule -inf: not_defined pingd or pingd number:lte 0
[pcs constraint location mystore rule score=-INFINITY not_defined pingd]
[pcs constraint location mystore rule score=-INFINITY pingd lte 0]
# If a node's pingd score is 0 or lower, or the node has no pingd attribute at all, the resources are moved away from that node.
8. Check the configuration for syntax errors, then commit and save it.
1. Check for syntax errors
crm configure verify
2. Commit and save
crm configure commit
# commit is needed when working inside the interactive crm configure shell; one-shot crm configure ... commands like the ones above are applied immediately.
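To review the configuration as the cluster now sees it:
crm configure show
[pcs config]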
9. Test
1. Put a node into standby:
crm node standby nodeX
# nodeX is the node you want to take out of service; the resources should fail over to the other node.
2. Test the ping-node rule by blocking ICMP:
iptables -A OUTPUT -p icmp -j DROP
# Run this on the active node; it can then no longer ping 192.168.0.200, its pingd score drops to 0, and the resources are moved away.
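To return to the normal state after each test (nodeX again stands for the node name):
crm node online nodeX # bring the standby node back into the cluster
iptables -D OUTPUT -p icmp -j DROP # remove the ICMP block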