# Modify the hostname, using node01 as the example
# node02 needs the same configuration
[root@node01 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.10.5 node01
10.10.10.6 node02
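Before moving on, it is worth confirming that both names actually resolve from /etc/hosts. The helper below is a sketch added here (the function name `check_hosts` is made up, not from the original post); it scans a hosts-format file for each hostname:

```shell
# check_hosts FILE NAME... : report whether each NAME appears as a hostname in
# an /etc/hosts-style FILE. On a real node: check_hosts /etc/hosts node01 node02
check_hosts() {
  file=$1; shift
  for host in "$@"; do
    # Scan non-comment lines; fields 2..NF are the names bound to the address.
    if awk -v h="$host" '$1 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == h) found = 1 }
                         END { exit !found }' "$file"; then
      echo "$host: ok"
    else
      echo "$host: MISSING"
    fi
  done
}
```

On each node, `check_hosts /etc/hosts node01 node02` should print `ok` twice before the cluster is set up.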
# Disable the firewall and SELinux
# node02 needs the same configuration
[root@node01 ~]# systemctl stop firewalld
[root@node01 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@node01 ~]# setenforce 0
[root@node01 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
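The sed edit only takes effect after a reboot, so a silently unmatched pattern is easy to miss. Here is a small sketch (the function name `disable_selinux_in` is an assumption, not from the post) that applies the same substitution to a given file and echoes the resulting value for verification:

```shell
# disable_selinux_in FILE : flip SELINUX=enforcing to SELINUX=disabled in an
# /etc/selinux/config-style FILE, then print the resulting setting to confirm.
disable_selinux_in() {
  sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$1"
  grep '^SELINUX=' "$1"
}
```

On a real node, `disable_selinux_in /etc/selinux/config` should print `SELINUX=disabled`.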
# Configure the EPEL repository for pacemaker
# node02 needs the same configuration
[root@node01 ~]# cat /etc/yum.repos.d/pacemaker.repo
[pacemaker]
name=pacemaker
baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
enabled=1
gpgcheck=0
# Install the pacemaker packages
# node02 needs the same configuration
[root@node01 ~]# yum install pacemaker pcs resource-agents -y
# Configure key-based authentication between the nodes
# node02 needs the same configuration
[root@node01 ~]# ssh-keygen -t rsa -P ''
[root@node01 ~]# ssh-copy-id node02
# Set the password for the pacemaker user (the pacemaker user is hacluster, created when the software is installed)
# node02 needs the same configuration
[root@node01 ~]# passwd hacluster
# Start the pcsd service
# node02 needs the same configuration
[root@node01 ~]# systemctl restart pcsd
# Authenticate the cluster nodes
# Run on node01 only; all operations are automatically synced to node02
[root@node01 ~]# pcs cluster auth node01 node02
Username: hacluster
Password:
node02: Authorized
node01: Authorized
# Create a cluster named mycluster and add node01 and node02 as cluster nodes
# Run on node01 only; all operations are automatically synced to node02
[root@node01 ~]# pcs cluster setup --force --name mycluster node01 node02
Destroying cluster on nodes: node01, node02...
node01: Stopping Cluster (pacemaker)...
node02: Stopping Cluster (pacemaker)...
node01: Successfully destroyed cluster
node02: Successfully destroyed cluster
Sending cluster config files to the nodes...
node01: Succeeded
node02: Succeeded
Synchronizing pcsd certificates on nodes node01, node02...
node02: Success
node01: Success
Restarting pcsd on the nodes in order to reload the certificates...
node02: Success
node01: Success
[root@node01 ~]# pcs cluster start --all
node01: Starting Cluster...
node02: Starting Cluster...
# Check the cluster status
[root@node01 ~]# pcs status
Cluster name: mycluster
WARNING: no stonith devices and stonith-enabled is not false
Stack: corosync
Current DC: node02 (version 1.1.15-11.el7_3.5-e174ec8) - partition with quorum
Last updated: Mon Sep 11 22:54:14  Last change: Mon Sep 11 22:53:39 by hacluster via crmd on node02
2 nodes and 0 resources configured
Online: [ node01 node02 ]
No resources
Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/disabled
# Check the corosync status
[root@node01 ~]# pcs status corosync
Membership information
----------------------
    Nodeid      Votes Name
         1          1 node01 (local)
         2          1 node02
# Verify whether the configuration is valid
[root@node01 ~]# crm_verify -L -V
   error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
   error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
   error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
# Disable STONITH to clear the errors
[root@node01 ~]# pcs property set stonith-enabled=false
# Configure the VIP
# Run on node01 only; all operations are automatically synced to node02
[root@node01 ~]# pcs resource create ClusterIP ocf:heartbeat:IPaddr2 nic=ens34 ip=10.10.10.8 cidr_netmask=32 op monitor interval=30s
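Once the resource starts, the VIP appears as an additional address on the interface. Below is a quick check (a sketch; the function name `vip_active` is an assumption) that greps `ip -4 addr show` output for the VIP:

```shell
# vip_active IP : read `ip -4 addr show` output on stdin and report whether IP
# is plumbed. On a real node:  ip -4 addr show ens34 | vip_active 10.10.10.8
vip_active() {
  if grep -q "inet $1/"; then
    echo "VIP $1 is up"
  else
    echo "VIP $1 not found"
  fi
}
```

Run it on whichever node `pcs status` reports as hosting ClusterIP; the standby node should report the VIP as not found.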
# Install the httpd service
# node02 needs the same configuration
[root@node01 ~]# yum -y install httpd
# Edit the Apache home page
# node02 needs the same configuration (on node02, change Node 1 to Node 2)
[root@node01 ~]# vi /var/www/html/index.html
<body>Welcome to Node 1</body>
# Configure Apache's status URL
# To let pacemaker monitor the health of your Apache instance and recover it if it fails, the apache resource agent relies on the server-status URL.
# node02 needs the same configuration
[root@node01 ~]# vi /etc/httpd/conf/httpd.conf
<Location /server-status>
    SetHandler server-status
    Order Deny,Allow
    Deny from all
    Allow from 127.0.0.1
</Location>
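The ocf:heartbeat:apache monitor operation fails if this block is missing, so a pre-flight grep is cheap insurance. Here is a sketch (the function name `has_status_url` is an assumption) that checks a config file for the status Location:

```shell
# has_status_url FILE : check that the /server-status Location block, which the
# pacemaker apache resource agent polls, is present in FILE.
has_status_url() {
  if grep -q '<Location */server-status>' "$1"; then
    echo "status URL configured"
  else
    echo "status URL missing"
  fi
}
```

On both nodes, `has_status_url /etc/httpd/conf/httpd.conf` should report the status URL as configured before the Web resource is created.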
# Add Apache to the cluster
# Run on node01 only; all operations are automatically synced to node02
[root@node01 ~]# pcs resource create Web ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf statusurl="http://localhost/server-status" op monitor interval=1min
# Set the default operation timeout
# Run on node01 only; all operations are automatically synced to node02
[root@node01 ~]# pcs resource op defaults timeout=240s
# Colocate the VIP and Apache
# Run on node01 only; all operations are automatically synced to node02
[root@node01 ~]# pcs constraint colocation add Web ClusterIP INFINITY
# Set the start order
# Run on node01 only; all operations are automatically synced to node02
[root@node01 ~]# pcs constraint order ClusterIP then Web
Adding ClusterIP Web (kind: Mandatory) (Options: first-action=start then-action=start)
# Check the cluster status
[root@node01 ~]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: node01 (version 1.1.15-11.el7_3.5-e174ec8) - partition with quorum
Last updated: Tue Sep 16:06:59  Last change: Tue Sep 16:06:49 by root via cibadmin on node01
2 nodes and 2 resources configured
Online: [ node01 node02 ]
Full list of resources:
 ClusterIP  (ocf::heartbeat:IPaddr2):   Started node01
 Web        (ocf::heartbeat:apache):    Started node01
Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/disabled
# All the resources are now running on node01.
# Take node01 down, then view the cluster status on node02
[root@node02 ~]# pcs status
Cluster name: mycluster
Stack: corosync
Current DC: node02 (version 1.1.15-11.el7_3.5-e174ec8) - partition with quorum
Last updated: Tue Sep 17:02:24  Last change: Tue Sep 17:01:57 by root via cibadmin on node01
2 nodes and 2 resources configured
Online: [ node02 ]
OFFLINE: [ node01 ]
Full list of resources:
 ClusterIP  (ocf::heartbeat:IPaddr2):   Started node02
 Web        (ocf::heartbeat:apache):    Started node02
Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/disabled
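For scripted failover checks it helps to extract which node currently hosts a resource rather than eyeballing the status output. The sketch below (the function name `running_on` is an assumption, not from the post) parses a resource line from `pcs status` output on stdin:

```shell
# running_on RESOURCE : print the node shown as running RESOURCE in `pcs status`
# output read from stdin. On a real node:  pcs status | running_on Web
running_on() {
  # Resource lines look like: " Web (ocf::heartbeat:apache): Started node02"
  awk -v r="$1" '$1 == r && $(NF-1) == "Started" { print $NF }'
}
```

After stopping node01 (`pcs cluster stop node01`), `pcs status | running_on Web` on node02 should print node02 if failover worked.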
This article is from the "Openstack+kvm+linux" blog; please keep this source: http://wangzhice.blog.51cto.com/12875919/1964681
Building an HTTP cluster with Pacemaker