Build a high-availability cluster in CentOS 7


This article uses two machines to build a two-node high-availability cluster. The host name of the first node is node1 with IP address 192.168.122.168, and the host name of the second node is node2 with IP address 192.168.122.169.

1. Install the cluster software

The packages pcs, pacemaker, corosync, and fence-agents-all are required. If you plan to run additional services in the cluster, install the corresponding packages as well.
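
For example (a minimal sketch, assuming the standard CentOS 7 repositories are available), the packages can be installed on both nodes with yum:

  # yum install -y pcs pacemaker corosync fence-agents-all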

2. Configure the firewall

1. Disable the firewall and SELinux
 
 
  # systemctl disable firewalld
  # systemctl stop firewalld

Edit /etc/sysconfig/selinux so that SELINUX=disabled, then run setenforce 0 or reboot the server for the change to take effect.
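
A minimal sketch of the change (editing /etc/selinux/config, the file that /etc/sysconfig/selinux points to on CentOS 7):

  # sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
  # setenforce 0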

2. Set firewall rules
 
 
  # firewall-cmd --permanent --add-service=high-availability
  # firewall-cmd --add-service=high-availability
3. Mutual host name resolution between nodes

Set the two host names to node1 and node2 respectively. In CentOS 7, write the host name to /etc/hostname on each host, then restart the network service.

 
 
  # vi /etc/hostname
  node1

  # systemctl restart network.service
  # hostname
  node1
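
As an alternative to editing the file by hand, CentOS 7 also provides hostnamectl for the same change:

  # hostnamectl set-hostname node1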

Configure the host table on both hosts by adding the following entries to /etc/hosts.

 
 
  192.168.122.168 node1
  192.168.122.169 node2
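
A quick sanity check that name resolution works from each node (assuming the network between the nodes is already up):

  # ping -c 1 node2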
4. Time synchronization between nodes

Synchronize the time on node1 and node2. You can use NTP.

 
 
  [root@node1 ~]# ntpdate 172.16.0.1    # 172.16.0.1 is the time server
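
CentOS 7 ships with chrony as its default time service; as an alternative to a one-off ntpdate run (assuming the chrony package is installed and an NTP source is reachable), the service can simply be enabled on both nodes:

  # systemctl enable chronyd
  # systemctl start chronyd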
5. Configure SSH password-less access for each node

The following operations must be performed on each node.

 
 
  # ssh-keygen -t rsa -P ''                            # generate a key pair with an empty passphrase
  # ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2    # copy the public key to the peer node (user@hostname of the other node)

The two hosts must be able to reach each other, so both hosts must generate a key pair and copy their public key to the peer. The hosts file on each node must resolve the host name of the other node: 192.168.122.168 node1, 192.168.122.169 node2.

 
 
  # ssh node2 'date'; date    # test whether mutual trust is established
6. Use Pacemaker to manage the high-availability cluster

1. Create the cluster user

To facilitate communication between the nodes and cluster configuration, each node needs a hacluster user with the same password (this user is created when the cluster packages are installed; only the password needs to be set). The password must be identical on every node.

 
 
  # passwd hacluster
  Changing password for user hacluster.
  New password:
  Retype new password:
  passwd: all authentication tokens updated successfully.
2. Set pcsd to start automatically
 
 
  # systemctl start pcsd.service
  # systemctl enable pcsd.service
3. Perform authentication between the nodes in the cluster
 
 
  # pcs cluster auth node1 node2
  Username: hacluster
  Password:
  node1: Authorized
  node2: Authorized

4. Create and start a cluster

 
 
  [root@node1 ~]# pcs cluster setup --start --name my_cluster node1 node2
  node1: Succeeded
  node1: Starting Cluster...
  node2: Succeeded
  node2: Starting Cluster...
5. Set the cluster to start automatically
 
 
  # pcs cluster enable --all

6. View cluster status information

 
 
  [root@node1 ~]# pcs cluster status
7. Set up the fence device

For more information, see the Red Hat Enterprise Linux 7 High Availability Add-On Reference.

Corosync enables stonith by default, but the cluster does not yet have a corresponding stonith device, so the default configuration is not usable for now. This can be verified with the following command:

 
 
  # crm_verify -L -V

You can disable stonith by running the following command:

 
 
  # pcs property set stonith-enabled=false    # the default value is true
8. Configure Storage

A high-availability cluster can use local disks to build a purely software-mirrored cluster, or use dedicated shared disk devices to build a large-scale shared-disk cluster, meeting different customer needs.

Shared storage is typically provided by iSCSI or DRBD. This document does not use shared disks.

9. Configure a floating IP address

No matter where the cluster service runs, we need a fixed address at which to provide the service. Here we select 192.168.122.170 as the floating IP address, give it the resource name VIP, and tell the cluster to check it every 30 seconds.

 
 
  # pcs resource create VIP ocf:heartbeat:IPaddr2 ip=192.168.122.170 cidr_netmask=24 op monitor interval=30s
  # pcs resource update VIP op monitor interval=15s
10. Configure the Apache service

Install httpd on node1 and node2, and make sure the httpd service is not enabled or running under systemd (the cluster will manage it):

 
 
  # systemctl status httpd.service
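
For example (a minimal sketch, assuming the base CentOS repositories), httpd can be installed without enabling it in systemd:

  # yum install -y httpd
  # systemctl disable httpd.service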

If the httpd monitoring page is not configured, the service can instead be monitored through systemd. To let the cluster monitor httpd through its status URL, configure a status page:

 
 
  # cat > /etc/httpd/conf.d/status.conf << EOF
  <Location /server-status>
      SetHandler server-status
      Order deny,allow
      Deny from all
      Allow from localhost
  </Location>
  EOF

First, create a home page for Apache. The default Apache DocumentRoot on CentOS is /var/www/html, so we create a home page in this directory.

Modify Node 1 as follows:

 
 
  [root@node1 ~]# cat <<-END >/var/www/html/index.html
  <body>Hello node1</body>
  END

Modify Node 2 as follows:

 
 
  [root@node2 ~]# cat <<-END >/var/www/html/index.html
  <body>Hello node2</body>
  END

The following statement adds httpd to the cluster as a resource:

 
 
  # pcs resource create WEB apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status"
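
After the resource is created, a quick sanity check (assuming curl is installed and the VIP created earlier, 192.168.122.170, is in use) is to list the resources and fetch the page through the floating IP:

  # pcs status resources
  # curl http://192.168.122.170/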
11. Create a group

Bind the VIP and WEB resources into this group so that they switch over to another node as a whole. (This configuration is optional.)

 
 
  # pcs resource group add MyGroup VIP
  # pcs resource group add MyGroup WEB
12. Configure the service startup sequence

To avoid resource conflicts, define the order in which services start. The syntax is as follows. (This configuration is optional.)

 
 
  # pcs constraint order [action] <resource id> then [action] <resource id>
  # pcs constraint order start VIP then start WEB
13. Specify a preferred location (this configuration is optional)

Pacemaker does not require the machines in the cluster to have identical hardware; some nodes may be more powerful than others. In that situation we want resources to run on the better node whenever it is available. To achieve this, we create location constraints that give the WEB resource a preference score for each node (here 50 for node1 and 45 for node2; in a two-node cluster any value greater than 0 achieves the desired effect):

 
 
  # pcs constraint location WEB prefers node1=50
  # pcs constraint location WEB prefers node2=45

The higher the score, the more strongly the resource prefers to run on the corresponding node.
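
The configured constraints can be reviewed at any time with:

  # pcs constraint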

14. Resource stickiness (this configuration is optional)

In some environments it is desirable to avoid migrating resources between nodes as much as possible, because a migration usually means that the service is unavailable for a period of time, and complex services such as an Oracle database may take a long time to move.

To achieve this, Pacemaker has a concept called "resource stickiness", which controls how strongly a service (resource) prefers to stay on the node where it is currently running.

To achieve optimal distribution of resources, Pacemaker sets this value to 0 by default. We can define a different stickiness value for each resource, but in general changing the default is enough. Resource stickiness indicates whether a resource tends to stay on its current node: a positive value means it prefers to stay, a negative value means it prefers to leave, -inf means negative infinity, and inf means positive infinity.

 
 
  # pcs resource defaults resource-stickiness=100
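
The current defaults can be displayed to confirm the change:

  # pcs resource defaults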
Common commands:

View Cluster status: # pcs status

View the current cluster configuration: # pcs config

Cluster auto-start after boot: # pcs cluster enable --all

Start the cluster: # pcs cluster start --all

View Cluster resource Status: # pcs resource show

Verify the cluster configuration: # crm_verify -L -V

Test a resource configuration: # pcs resource debug-start <resource>

Set the node to the standby status: # pcs cluster standby node1
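
As a quick failover test (a minimal sketch using the commands above), put node1 in standby, check that the resources move to node2, then bring node1 back:

  # pcs cluster standby node1
  # pcs status                     # VIP and WEB should now be running on node2
  # pcs cluster unstandby node1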

From: http://linux.cn/article-3963-1.html
