Lab Environment:
Server:
192.168.145.208 (cluster node 1)
192.168.145.209 (cluster node 2)
192.168.145.210 (NFS server)
Operating system: CentOS 7 on all three machines
Configuration steps:
1. Set up passwordless (key-based) SSH between the two cluster nodes:
ssh-keygen -t rsa -P ""
ssh-copy-id -i ~/.ssh/id_rsa.pub root@web2
2. Stop the firewall on each node:
systemctl stop firewalld
3. Edit the Corosync configuration file (/etc/corosync/corosync.conf):
totem {
    version: 2
    crypto_cipher: aes128
    crypto_hash: sha1
    secauth: on                     # enable authentication
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.145.0  # network address to bind to
        mcastaddr: 239.255.1.1      # multicast group address
        mcastport: 5405             # multicast listening port
        ttl: 1
    }
}
nodelist {
    node {
        ring0_addr: web1
        nodeid: 1
    }
    node {
        ring0_addr: web2
        nodeid: 2
    }
}
logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: no
    debug: off
    timestamp: on
    logger_subsys {
        subsys: QUORUM
        debug: off
    }
}
quorum {
    provider: corosync_votequorum
}
4. Use the corosync-keygen command to generate the authkey file, then copy both authkey and corosync.conf to the other cluster node.
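On web1 this step might look like the following (assuming web2 is the peer node's hostname and the default /etc/corosync/ paths):

```
corosync-keygen
scp -p /etc/corosync/authkey /etc/corosync/corosync.conf web2:/etc/corosync/
```

Note that corosync-keygen reads from /dev/random, so it may pause until the system has gathered enough entropy.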
Then start corosync and pacemaker on each node:
systemctl start corosync.service
systemctl start pacemaker.service
5. Install httpd on both nodes and enable it at boot:
[root@web1 ~]# yum install -y httpd
[root@web1 ~]# systemctl enable httpd.service
[root@web2 ~]# yum install -y httpd
[root@web2 ~]# systemctl enable httpd.service
6. Install the crmsh tool. Download the following packages separately:
python-pssh-2.3.1-4.2.x86_64.rpm
pssh-2.3.1-4.2.x86_64.rpm
crmsh-2.1.4-1.1.x86_64.rpm
Then install them: yum -y install python-pssh-2.3.1-4.2.x86_64.rpm pssh-2.3.1-4.2.x86_64.rpm crmsh-2.1.4-1.1.x86_64.rpm. The crmsh tool only needs to be installed on a single node.
7. Run the crm command to enter the crmsh interactive shell:
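The original screenshot of the crm session is not reproduced here. In a two-node lab cluster without a fencing device, a session typically begins by relaxing the default policies; the property values below are standard lab-setup assumptions, not taken from the original article:

```
[root@web1 ~]# crm
crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# verify
crm(live)configure# commit
```

Without stonith-enabled=false, pacemaker refuses to start resources when no STONITH device is defined; no-quorum-policy=ignore keeps a two-node cluster running after one node fails.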
8. Configure the cluster resources:
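The resource configuration was shown only as a screenshot in the original. A plausible crmsh configuration for the virtual IP 192.168.145.200 used in the verification step might look like this; the resource names webip/webserver and the grouping are assumptions for illustration:

```
crm(live)configure# primitive webip ocf:heartbeat:IPaddr params ip=192.168.145.200
crm(live)configure# primitive webserver systemd:httpd
crm(live)configure# group webservice webip webserver
crm(live)configure# verify
crm(live)configure# commit
```

Grouping the VIP and the httpd service keeps them on the same node and starts them in order, so a client reaching the VIP always hits a node where httpd is running.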
9. Verify the cluster by accessing http://{$WEBIP}; in this test the cluster WEBIP is 192.168.145.200.
You can observe the cluster resources migrating by putting the web1 node into standby.
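For example (a sketch; the exact `crm status` output varies by version):

```
crm node standby web1
crm status           # resources should now be running on web2
crm node online web1
```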
This article is from the "JC" blog; please retain the source when reposting: http://jackeychen.blog.51cto.com/7354471/1765380
Corosync + Pacemaker implements HTTPD service high availability cluster