options, nodes, resources, their interrelationships, and the current status of the definitions). Whenever it collects a change in any resource or in node statistics, it integrates the latest information about the current cluster and distributes it to every node in the cluster.
PEngine (Policy Engine): it is primarily responsible for taking the information sent by the CRM and computing the next state of the cluster according to the various settings in the configuration file (based on the cu
2.2.3 Install crmsh for resource management: since Pacemaker 1.1.8, crm has evolved into an independent project, crmsh. That is to say, after Pacemaker is installed there is no crm command; to manage cluster resources, crmsh must be installed separately, and crmsh depends on pssh. Note: with crmsh you do not need to install heartbeat, whereas in earlier versions heartbeat had to be installed in order to use crm.
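As a rough installation sketch, assuming a CentOS node where the crmsh and pssh RPMs are available from a configured repository (the repository setup itself is an assumption here):

# Install crmsh together with its pssh dependency (package source is an assumption)
yum install -y crmsh pssh
# Sanity check once corosync/pacemaker are running: the crm shell should now work
crm status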
Corosync: it originated as a sub-project of OpenAIS (the Open Application Interface Specification project). Corosync 1.x by itself has no voting capability; starting with Corosync 2.0 the votequorum subsystem was introduced, so it does have a voting function. If you use a 1.x version but still need votes to reach quorum decisions, you have to combine it with something else; Red Hat, for example, combined CMAN with Corosync, but early CMAN could not be combined with Pacemaker, so if you want to use
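On Corosync 2.x the votequorum subsystem mentioned above is enabled in corosync.conf and can be inspected from the command line; the fragment below is only a sketch, and the two_node/expected_votes values are assumptions for a two-node setup:

# quorum section of /etc/corosync/corosync.conf (corosync 2.x sketch)
quorum {
    provider: corosync_votequorum
    expected_votes: 2
    two_node: 1          # relax quorum rules for a two-node cluster
}

# Check the current votequorum state (output fields are illustrative)
corosync-quorumtool -s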
Linux Corosync + Pacemaker
Complete HA structure:
Install and configure a high-availability cluster:
1. Node names: the name of every node in the cluster must be resolvable by every other node; in /etc/hosts, the forward and reverse resolution of each host name must match the output of "uname -n".
2. Time must be synchronized: use a network time server to synchronize the clocks.
3. Optional but recommended: the nodes can communicate with each other through SSH key authentication.
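A minimal preparation sketch for these three points, assuming the node1/node2 names and addresses used in the lab environment below (10.204.80.79/80); the NTP server name is an assumption:

# 1. Name resolution: add both nodes to /etc/hosts on every node
cat >> /etc/hosts <<'EOF'
10.204.80.79 node1.mylinux.com node1
10.204.80.80 node2.mylinux.com node2
EOF

# 2. Time synchronization against an NTP server (server address is an assumption)
ntpdate ntp1.example.com

# 3. Optional: passwordless SSH between the nodes
ssh-keygen -t rsa -P ''
ssh-copy-id root@node2.mylinux.com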
cluster, and the CIB content is automatically synchronized across the cluster. The PEngine is used to compute the ideal state of the cluster and generate a list of instructions, which is then handed to the DC. One node among all the nodes in the Pacemaker cluster is elected as the DC and acts as the main decision node; if the elected DC goes down, a new DC is quickly elected from the remaining nodes. The DC passes the policies generated by the PEngine to the LRMd, or to the CRMd on other nodes through
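To see which node is currently acting as the DC, the status tools print it directly; a quick check (the exact output layout varies by Pacemaker version and is only illustrative here):

# Print the cluster status once and exit; the "Current DC" line names the elected decision node
crm_mon -1 | grep -i "current dc"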
Therefore, to define a master/slave resource, the resource must first be defined as a primitive resource and then wrapped into a master/slave (ms) resource. To have the DRBD device mounted on the node where the resource is in the Primary role, you also need to define a Filesystem resource (see the crmsh sketch after the prerequisites below).
Prerequisites:
The DRBD device name and the DRBD mount point must be identical to those on the peer node; because the resources are defined using the device name and mount point, the DRBD device names and mount points on both ends must be consistent.
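As referenced above, here is a crmsh configuration sketch for the DRBD master/slave resource plus the Filesystem resource; the resource names, the DRBD resource "mysql", the device /dev/drbd0, and the mount point /mydata are assumptions for this kind of setup:

# Run inside "crm configure" on one node, then "commit"
primitive mysqldrbd ocf:linbit:drbd params drbd_resource=mysql \
    op monitor role=Master interval=10s op monitor role=Slave interval=20s \
    op start timeout=240s op stop timeout=100s
ms ms_mysqldrbd mysqldrbd meta master-max=1 master-node-max=1 \
    clone-max=2 clone-node-max=1 notify=true
primitive mystore ocf:heartbeat:Filesystem params device=/dev/drbd0 \
    directory=/mydata fstype=ext4 op start timeout=60s op stop timeout=60s
# The filesystem must run where DRBD is Primary, and only after promotion
colocation mystore_with_master inf: mystore ms_mysqldrbd:Master
order mystore_after_promote inf: ms_mysqldrbd:promote mystore:start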
How
Lab environment: two hosts running CentOS 6.5 + httpd 2.4 + PHP 5.5 as the basic web stack; the web page is reachable, and the httpd24 service is set not to start at boot.
node1.mylinux.com 10.204.80.79
node2.mylinux.com 10.204.80.80
I also use Ansible here to make managing the two nodes easier: one extra host acts as the management node (IP: 10.204.80.71). Add the corresponding name resolution to the hosts files of all three hosts, and from the management node to node1 and node2 enable SSH key-based login.
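A sketch of the management-node side, assuming the addresses above and an Ansible inventory kept in /etc/ansible/hosts (the inventory group name "hanodes" is an assumption):

# /etc/hosts on all three machines
10.204.80.71 admin.mylinux.com admin
10.204.80.79 node1.mylinux.com node1
10.204.80.80 node2.mylinux.com node2

# /etc/ansible/hosts on the management node
[hanodes]
node1.mylinux.com
node2.mylinux.com

# Push the management node's SSH key to both nodes, then verify connectivity
ssh-copy-id root@node1.mylinux.com
ssh-copy-id root@node2.mylinux.com
ansible hanodes -m ping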
: drbdadm -- --overwrite-data-of-peer primary mysql makes this node the Primary for the DRBD resource. When you view the status with drbd-overview, you can see that the two disks are synchronizing at this point:
0:mysql/0 SyncSource Primary/Secondary UpToDate/Inconsistent C r-----
[>...................] sync'ed: 0.6% (20372/20476)M
Now format the device on the primary node: mkfs.ext4 /dev/drbd0. At this point, the DRBD installation and configuration is complete. At
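Before handing DRBD over to Pacemaker it is worth verifying a manual switchover; the following is a sketch, with the resource name "mysql" taken from the status output above and the mount point /mydata assumed:

# On the current Primary: unmount and demote
umount /mydata
drbdadm secondary mysql

# On the other node: promote and mount; the filesystem created above should mount cleanly
drbdadm primary mysql
mount /dev/drbd0 /mydata
drbd-overview        # once the sync finishes, both sides should report UpToDate/UpToDate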
First, the concept of Pacemaker: (1) Pacemaker is a high-availability cluster resource manager. It achieves maximum availability of the managed resources through node-level and resource-level fault detection and recovery, using the messaging and membership capabilities provided by your preferred cluster infrastructure (Corosync or Heartbeat). It monitors and recovers no
One: Experimental environment
Node   OS           IP              DRBD IP      DRBD disk   VIP
Web1   CentOS 5.10  192.168.10.11   172.16.1.1   /dev/sdb    192.168.10.100
Web2   CentOS 5.10  192.168.10.12   172.16.1.2   /dev/sdb    -
1. The IP addresses of the two nodes are set as shown above.
2. The two nodes have been configured with mutual SSH trust and their clocks have been synchronized.
Two: Install the related software (on both node 1 and node 2)
1. Insta
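The package list here is only a sketch, assuming CentOS 5.x with the distribution's extras repository (for the DRBD 8.3 packages) and a repository providing Corosync and Pacemaker for EL5; exact package names and repositories are assumptions:

# On both nodes: the cluster stack plus the DRBD 8.3 userland and kernel module
yum install -y corosync pacemaker
yum install -y drbd83 kmod-drbd83
# DRBD will be driven by the cluster, so keep it out of the init sequence
chkconfig drbd off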
Tags: corosync pacemaker
First, the environment introduction: all three hosts have dual network cards:
openstack-control.example.com (openstack-control)  eth0: 172.16.171.100  eth1: 10.1.1.100
openstack-nova.example.com (openstack-nova)        eth0: 172.16.171.110  eth1: 10.1.1.110
openstack-neutron.example.com (openstack-neutron)  eth0: 172.16.171.120  eth1: 10.1.1.120
Second, the Corosync and Pacemaker configuration steps are as follows:
1. Configure time zones and synchronize the time
2. Configure the cluster
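With two NICs per host, Corosync can run redundant rings over both networks; the fragment below is a sketch using corosync 1.x syntax, where rrp_mode, the multicast addresses, and the pacemaker service block are assumptions, while the bind networks come from the addresses above:

# /etc/corosync/corosync.conf (fragment)
totem {
    version: 2
    secauth: on
    rrp_mode: passive            # redundant ring protocol over both NICs
    interface {
        ringnumber: 0
        bindnetaddr: 172.16.171.0
        mcastaddr: 239.255.1.1
        mcastport: 5405
    }
    interface {
        ringnumber: 1
        bindnetaddr: 10.1.1.0
        mcastaddr: 239.255.2.1
        mcastport: 5405
    }
}
service {
    ver: 0                       # start Pacemaker as a Corosync plugin (corosync 1.x style)
    name: pacemaker
}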
High availability of MySQL servers through HA, that is, a high-availability cluster of MySQL servers built with Corosync + DRBD + Pacemaker. Main steps to implement the case: 1. Preparations; 2. Install and configure DRBD; 3. MySQ
Ansible + Corosync + Pacemaker + NFS for http High Availability
Directory:
(1) Experiment environment
(2) Preparations
(3) Configure basic configurations for node1 and node2
(4) Deploy NFS using Ansible
(5) Deploy Corosync and Pacemaker using Ansible
(6) Use Ansible to install the crmsh tool
(7) Use crmsh to configure HTTP high availability
(8) Verification
(9) Notes
(1) Experiment environment
1.1 Environment
As for Baidu's search algorithm and its bid-based ranking of results, I will just say they left me speechless.
One: MySQL replication (master-slave) configuration
Please refer to: MySQL replication (master-slave) synchronization: http://www.111cn.net/database/mysql/83904.htm
Second: Corosync + Pacemaker installation and configuration
Please refer to: Corosync pacemaker Ng
ocf:heartbeat:IPaddr2 ip=192.168.0.183 cidr_netmask=32 op monitor interval=505
Another important piece of information is ocf:heartbeat:IPaddr2. This tells Pacemaker three things: the first field, ocf, indicates the standard (class) this resource uses and where to find the script; the second field identifies the namespace of the resource script within OCF, in this case heartbeat; and the last field is the name of the resource script itself.
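These three fields can be explored directly with pcs; the commands below are a sketch and the listed examples in the comments are only illustrative:

# Available resource standards (classes), e.g. ocf, lsb, service
pcs resource standards
# OCF namespaces (providers), e.g. heartbeat, pacemaker
pcs resource providers
# Resource scripts shipped in the heartbeat namespace, including IPaddr2
pcs resource agents ocf:heartbeat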
[root@node1 ~]# pcs status
The version of the Pacemaker cluster configuration: the CIB in Pacemaker carries a version made up of admin_epoch, epoch, and num_updates. When a node is added to the cluster, this version number is compared, and the configuration with the highest version is used as the Pacemaker cluster's configuration.
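The three fields live as attributes on the root element of the CIB and can be read on any node; a sketch, with the version numbers in the comment purely illustrative:

# Dump the CIB and look at its root element
cibadmin --query | head -n 1
# e.g. <cib admin_epoch="0" epoch="12" num_updates="3" ...>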
In
) = joined
runtime.totem.pg.mrp.srp.members.167772173.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.167772173.ip (str) = r(0) ip(10.0.0.13)
runtime.totem.pg.mrp.srp.members.167772173.join_count (u32) = 2
runtime.totem.pg.mrp.srp.members.167772173.status (str) = joined
167772172 is a member ID; its IP is 10.0.0.12 and its state is joined. 167772173 is a member ID; its IP is 10.0.0.13 and its state is joined. The Corosync service status is therefore correct. Start the P
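The member entries shown above come from the Corosync object database; on corosync 2.x they can be listed with corosync-cmapctl, for example:

# Show only the totem membership keys (member IDs, IPs, join state)
corosync-cmapctl | grep "srp.members"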
Tags: host server IP address
Lab environment:
Two MariaDB servers: 172.16.10.20 and 172.16.10.21
FIP (floating IP): 172.16.10.28
MariaDB file storage share: 172.16.10.22
Experiment preparation:
1. Host-name and IP-address resolution must work on both nodes, and each node's host name must match the output of the "uname -n" command.
vim /etc/hosts
172.16.10.20 21.xuphoto.com 20xu
172.16.10.21 22.xuphoto.com 21xu
Node1:
# sed -i 's@\(HOSTNAME=\).*@\1...@g' ...
# HO
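A sketch of making the node's host name persistent on CentOS 5/6 so that "uname -n" matches the /etc/hosts entries above; the exact name used here (21.xuphoto.com) is an assumption:

# Persist the host name (value is an assumption taken from the hosts file above)
sed -i 's@^HOSTNAME=.*@HOSTNAME=21.xuphoto.com@' /etc/sysconfig/network
# Apply it to the running system and verify
hostname 21.xuphoto.com
uname -n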