[Exercise] Use Heartbeat to implement a dual-node MySQL high-availability cluster


I. Objectives

Create a cluster with two nodes.

Use Heartbeat to monitor the cluster status.

When one node in the cluster fails, start the mysqld service on the other node.

The two nodes share the database over NFS.

(A third node could provide the NFS service; here, for simplicity, one of the cluster nodes provides NFS itself.)

II. Preparation

VirtualBox

Two virtual nodes running CentOS 6.4 x86_64

Each node has two Ethernet NICs

heartbeat-2.1.4-12.el6.x86_64.rpm

heartbeat-pils-2.1.4-12.el6.x86_64.rpm

libnet-1.1.5-1.el6.x86_64.rpm (not available in the yum repository; must be downloaded separately)


III. Installation

Node 1:


# mkdir build
# cd build/
# rpm -ivh libnet-1.1.5-1.el6.x86_64.rpm (must be installed first; resolve any dependency errors it reports)
# rpm -ivh heartbeat-pils-2.1.4-12.el6.x86_64.rpm (must be installed second)
# rpm -ivh heartbeat-*.rpm
# yum install mysql-server
# yum install nfs-utils

Node 2:

Same as node 1, except that nfs-utils does not need to be installed.

IV. Planning

Node1

eth0: 172.16.100.1/24

eth1: 1.1.1.1/24 (VirtualBox internal network)

Node2

eth0: 172.16.100.2/24

eth1: 1.1.1.2/24 (VirtualBox internal network)

Virtual IP address

VIP: 172.16.100.80/24

MySQL database directory

/share/data

NFS shared directory

/mydata

V. Principle

1. node1 provides the NFS service, sharing its /mydata directory.

2. node1 and node2 can automatically mount node1:/mydata on the /share directory.

3. The MySQL database directory is datadir=/share/data.

4. Heartbeat monitors the node status. When node1 fails, the VIP is immediately transferred to node2 and node2's mysqld service is started.

VI. Implementation

1. NFS part

Node1


# vim /etc/exports
Add the following line (the no_root_squash option is important here; it avoids permission problems when the NFS directory is mounted):
/mydata 172.16.100.0/24(rw,sync,no_root_squash)

# service nfs start
# exportfs -avr (if the exports change later, this command republishes the shared directories without restarting the nfs service)



Node2

# showmount -e 172.16.100.1
Confirm the following output:
Export list for 172.16.100.1:
/mydata 172.16.100.0/24

Mount the share for a quick test, then unmount it, to make sure NFS works normally.
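The mount-and-unmount check just described might look like the following on node2 (a sketch; the temporary mount point /mnt and the test file name are illustrative, and the commands assume the NFS client tools are available):

```shell
mount -t nfs 172.16.100.1:/mydata /mnt   # temporary test mount of node1's export
touch /mnt/nfs_write_test                # confirm write access (no_root_squash in effect)
rm -f /mnt/nfs_write_test                # remove the test file again
umount /mnt                              # clean up; Heartbeat will manage the real mount later
```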


2. MySQL part

Node1



# mkdir -pv /mydata/data
# chown -R mysql.mysql /mydata
# vim /etc/my.cnf (add the parameter datadir=/mydata/data)
# service mysqld start (troubleshoot on your own if it fails)
# mysql_secure_installation (initialize and set the root password; do not disable remote root login, it is used for testing later)
# mysql -u root -p (database initialization is complete)


Node2

Because node1 has already been initialized, node2 does not need to initialize anything. Mount node1's /mydata on /share,

pay attention to the file-system permissions, then start mysqld directly. After the test is complete, exit.
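The node2 check described above might be run as follows (a sketch; it assumes node2's my.cnf points its datadir at /share/data on the mounted share, per the planning section):

```shell
mount -t nfs 172.16.100.1:/mydata /share   # mount node1's data directory
service mysqld start                       # start MySQL against the shared datadir
mysql -u root -p -e 'SHOW DATABASES;'      # the databases initialized on node1 should be visible
service mysqld stop                        # stop again after the test
umount /share                              # leave the mount for Heartbeat to manage
```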


3. Heartbeat part


Heartbeat identifies nodes by node name, so resolving node names through DNS is not recommended: if DNS goes down, the whole cluster hangs with it. Use the /etc/hosts file for name resolution instead.

Node1


# vim /etc/hosts
172.16.100.1 node1
172.16.100.2 node2


# hostname (view the current host name)
# hostname node1 (set the host name to node1; node2 performs the same operation with its own name)

Copy node1's /etc/hosts to node2:


# scp /etc/hosts root@node2:/etc/hosts

Since every scp prompts for a password, set up passwordless SSH login instead.


# ssh-keygen -t rsa
# ssh-copy-id node2
(node2 performs the same operation toward node1)


Configure Heartbeat

Node1


# cd /usr/share/doc/heartbeat-2.1.4/
# cp authkeys haresources ha.cf /etc/ha.d/
# openssl rand -hex 8 > /etc/ha.d/authkeys (generate a 16-character random hex value to use as the md5 secret)
# cd /etc/ha.d/
# vim authkeys
Add the following lines (the digest is the value generated above):
auth 3
3 md5 <digest>

# chmod 600 authkeys (heartbeat refuses to start unless the file permissions are restricted)
# vim haresources
Add the following line:
node1 172.16.100.80/24/eth0/172.16.100.255 Filesystem::172.16.100.1:/mydata::/share::nfs mysqld

# vim ha.cf
bcast eth1
udpport 694
keepalive 2
logfile /var/log/ha-log
auto_failback on
node node1 (defines which nodes are in the cluster)
node node2

After the configuration is done, run service heartbeat start.

Configure node2 in the same way (the three files must match on both nodes).
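Once heartbeat is started on both nodes, a few quick checks (illustrative commands; paths and addresses per the planning and configuration above) show which node currently owns the resources:

```shell
service heartbeat status    # confirm the heartbeat daemon is running
tail -n 20 /var/log/ha-log  # the log file configured in ha.cf
ip addr show eth0           # the active node should carry the VIP 172.16.100.80
mount | grep /share         # the active node should have the NFS share mounted
```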

4. Test

# mysql -u root -h '172.16.100.80' -p


By default, node1 is the master node, so this should actually connect to 172.16.100.1.

On node1


# cd /usr/lib64/heartbeat/
# ./hb_standby

You can watch the resources being transferred to the other node.


On node2

# mount (check whether the NFS share is mounted)
# service mysqld status (check the mysqld status)

If both checks pass, the cluster is working normally.

Conclusion:

Apart from installing Heartbeat itself, the whole setup is quite easy.


