Heartbeat + MySQL + NFS: Implementing a Highly Available (HA) MySQL Cluster


Contents

I. Environment Preparation

II. Topology Preparation

III. Installing and Configuring Heartbeat

IV. Configuring the NFS Service

V. Installing and Configuring MySQL

VI. Configuring Cluster Resources with CRM


I. Environment Preparation

1. Operating system

Red Hat Enterprise Linux 5.4 (i386)

2. Software Environment

mysql-5.5.20

heartbeat-2.1.4-11.el5.i386.rpm

heartbeat-pils-2.1.4-11.el5.i386.rpm

heartbeat-stonith-2.1.4-11.el5.i386.rpm

heartbeat-gui-2.1.4-11.el5.i386.rpm

Additional dependency packages: libnet-1.1.4-3.el5.i386.rpm, perl-mailtools-1.77-1.el5.noarch.rpm


3. Prerequisites for the high-availability cluster

(1). The node name must match the output of the uname -n command

node1:
# uname -n
node1.example.com
# vim /etc/hosts
192.168.0.101 node1.example.com node1
192.168.0.102 node2.example.com node2

node2:
# uname -n
node2.example.com
# vim /etc/hosts
192.168.0.101 node1.example.com node1
192.168.0.102 node2.example.com node2
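Since Heartbeat identifies cluster members by the value of `uname -n`, a quick check that each node's name actually appears in /etc/hosts catches typos before they turn into mysterious membership failures. The following helper is only a sketch; the function name is ours, not part of Heartbeat:

```shell
#!/bin/sh
# Sketch of a sanity check: Heartbeat matches nodes by `uname -n`, so that
# exact name must resolve via /etc/hosts on every node. The helper name
# check_node_name is hypothetical, not part of any Heartbeat tooling.
check_node_name() {
    # check_node_name NAME [HOSTS_FILE] -> "ok: ..." or an error on stderr
    name="$1"
    hosts="${2:-/etc/hosts}"
    if grep -qwF "$name" "$hosts"; then
        echo "ok: $name found in $hosts"
    else
        echo "MISSING: add $name to $hosts" >&2
        return 1
    fi
}

# Run on each node:
# check_node_name "$(uname -n)"
```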


(2). SSH trust between the nodes

node1:
# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2.example.com

node2:
# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1.example.com


(3). Time synchronization between nodes

node1, node2:
# ntpdate -u 210.72.145.44
# crontab -e
*/30 * * * * /sbin/ntpdate -u 210.72.145.44

Note: 210.72.145.44 is a server of the China National Time Service Center.


4. Add Epel Yum Source

node1, node2:
# wget http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm
# rpm -ivh epel-release-5-4.noarch.rpm
# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5
# yum list

5. Turn off the firewall and SELinux

node1, node2:
# service iptables stop
# vim /etc/selinux/config
SELINUX=disabled

II. Topology Preparation


[Figure: cluster topology diagram omitted]


III. Installing and Configuring Heartbeat

1. Install and configure Heartbeat v2

(1). Install Heartbeat

node1:
# yum -y install heartbeat*    # the --skip-broken option may be required
node2:
# yum -y install heartbeat*


(2). Configure Heartbeat

Note: Heartbeat installs no configuration files by default, only sample files:

# cd /usr/share/doc/heartbeat-2.1.4/
# cp authkeys ha.cf /etc/ha.d/

Note: only two files are needed here, ha.cf and authkeys.

# cd /etc/ha.d/
# dd if=/dev/random bs=512 count=1 | openssl md5    # generate a random key
a4d20b0dd3d5e35e0f87ce4266d1dd64
# vim /etc/ha.d/authkeys
auth 1
1 md5 a4d20b0dd3d5e35e0f87ce4266d1dd64
# chmod 600 authkeys    # restrict the key file's permissions to 600
# vim ha.cf

Mainly modify three places (the rest can stay at the defaults):
(1). The way heartbeat messages are propagated (multicast here):
mcast eth0 225.100.100.100 694 1 0
(2). The nodes in the cluster:
node node1.example.com
node node2.example.com
(3). Enable the CRM:
crm on

2. Copy the two configuration files above to node2:
# scp authkeys ha.cf node2:/etc/ha.d/

3. Start both nodes:
# ssh node2 "service heartbeat start"
# service heartbeat start

4. Check the listening port:
node1:
# netstat -ntulp
tcp    0    0 0.0.0.0:5560    0.0.0.0:*    LISTEN    3170/mgmtd
node2:
# netstat -ntulp
tcp    0    0 0.0.0.0:5560    0.0.0.0:*    LISTEN    3170/mgmtd

Note: port 5560 is listening on both nodes, so Heartbeat has started normally.

5. Check the cluster status.
Note: both nodes are online; no resources are configured yet.

6. Test the Hb_gui graphical configuration interface

# hb_gui &

Note: The cluster must be configured on the DC node.


IV. Configuring the NFS Service

1. Create an LVM logical volume (for storing MySQL data files)

# pvcreate /dev/sdb                  # create the physical volume
# vgcreate myvg /dev/sdb             # create the volume group
# lvcreate -L 10G -n mydata myvg     # create the logical volume
# mkfs.ext3 /dev/myvg/mydata         # format the logical volume
# lvs                                # view the logical volumes
# mkdir /mydata                      # create the mount point
# mount /dev/myvg/mydata /mydata/    # mount the volume
# cd /mydata/                        # enter the mount point
# mkdir data                         # create the data directory

2. Create a MySQL user with MySQL group

node1, node2, nfs (create the same user and group on all three nodes):
(1). Create the mysql group:
# groupadd -g 3306 mysql
(2). Create the mysql user:
# useradd -u 3306 -g mysql -s /sbin/nologin -M mysql
(3). Verify:
# id mysql
uid=3306(mysql) gid=3306(mysql) groups=3306(mysql)
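Because NFS maps file ownership by numeric UID/GID rather than by user name, the mysql account must carry the same IDs (3306 here) on every node, or the data directory becomes unwritable after failover. A minimal sketch of such a check follows; the function name is ours, not a standard tool:

```shell
#!/bin/sh
# Sketch: NFS identifies owners by numeric UID/GID, so the mysql user must
# have identical IDs on node1, node2, and the NFS server. The helper name
# expect_uid is hypothetical.
expect_uid() {
    # expect_uid USER WANTED_UID -> "ok: ..." or an error on stderr
    user="$1"
    want="$2"
    got=$(id -u "$user" 2>/dev/null)
    if [ "$got" = "$want" ]; then
        echo "ok: $user uid=$got"
    else
        echo "mismatch: $user uid=${got:-absent}, want $want" >&2
        return 1
    fi
}

# Run on each of node1, node2, and the NFS server:
# expect_uid mysql 3306
```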

Note: the same user and group must exist on every node that touches the shared data.

3. Change the ownership of the data directory

# chown -R mysql.mysql /mydata/data/
# ll /mydata/
drwxr-xr-x 6 mysql mysql 4096 08-12 data

4. Modify the NFS configuration file

# vim /etc/exports
/mydata 192.168.0.0/24(no_root_squash,rw,async)

Note: there must be no space between the client network and the option list in /etc/exports.


5. Re-Export the NFS

# exportfs -arv
exporting 192.168.0.0/24:/mydata


6. View the NFS shared storage for the output

# showmount -e 192.168.0.208
Export list for 192.168.0.208:
/mydata 192.168.0.0/24


7. Test Mounts

node1:
# mkdir /mydata
# mount -t nfs 192.168.0.100:/mydata/ /mydata/
# ll /mydata/
drwxr-xr-x 6 mysql mysql 4096 Aug 12 13:40 data

node2:
# mkdir /mydata
# mount -t nfs 192.168.0.100:/mydata/ /mydata/
# ll /mydata/
drwxr-xr-x 6 mysql mysql 4096 Aug 12 13:50 data
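Before pointing mysqld at /mydata, it is worth confirming the directory really is an NFS mount and not the empty local mount point: starting MySQL against the bare directory would silently initialize a second, divergent data set. A sketch of such a guard follows (the function name is ours):

```shell
#!/bin/sh
# Sketch: refuse to treat DIR as shared storage unless /proc/mounts shows it
# mounted with filesystem type "nfs". The helper name is_nfs_mount is
# hypothetical.
is_nfs_mount() {
    # is_nfs_mount DIR [MOUNTS_FILE] -> exit 0 if DIR is an NFS mount
    dir="$1"
    tab="${2:-/proc/mounts}"
    awk -v d="$dir" '$2 == d && $3 == "nfs" { found = 1 } END { exit !found }' "$tab"
}

# Example guard before starting MySQL:
# is_nfs_mount /mydata || { echo "/mydata is not NFS-mounted" >&2; exit 1; }
```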


V. Installing and Configuring MySQL

Node1:

1. Extract MySQL and create a symlink

# tar -zxvf mysql-5.5.33-linux2.6-x86_64.tar.gz -C /usr/local/    # extract directly into /usr/local
# cd /usr/local/
# ln -sv mysql-5.5.33-linux2.6-x86_64 mysql                       # create a symlink
# cd /usr/local/mysql
# chown root:mysql *

2. Initialize MySQL

# /usr/local/mysql/scripts/mysql_install_db --datadir=/mydata/data/ --user=mysql


3. Provide a MySQL configuration file

# cp /usr/local/mysql/support-files/my-large.cnf /etc/my.cnf
# vim /etc/my.cnf
datadir = /mydata/data        # point the data directory at the NFS mount
innodb_file_per_table = 1     # give each InnoDB table its own tablespace
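Since both nodes must agree that datadir points at the shared mount, a small check that parses the value out of /etc/my.cnf can be run on each node after the file is copied over. This is only a sketch; the helper name is ours:

```shell
#!/bin/sh
# Sketch: extract the datadir value from a my.cnf-style file so it can be
# compared against the expected shared path. The helper name get_datadir is
# hypothetical.
get_datadir() {
    # get_datadir CNF_FILE -> prints the first datadir value found
    sed -n 's/^[[:space:]]*datadir[[:space:]]*=[[:space:]]*//p' "$1" | head -n 1
}

# Example check on each node:
# [ "$(get_datadir /etc/my.cnf)" = "/mydata/data" ] || echo "datadir misconfigured" >&2
```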


4. Provide MySQL startup script

# cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld
# chmod +x /etc/init.d/mysqld


5. Start MySQL

# service mysqld start
Starting MySQL ... [ OK ]

6. View the data directory

# mount
/dev/sda2 on / type ext3 (rw)
192.168.0.100:/mydata/ on /mydata type nfs (rw,addr=192.168.0.100)
# cd /mydata/data/
# ll
total 28784
-rw-rw---- 1 mysql mysql 18874368 Aug 12 13:40 ibdata1
-rw-rw---- 1 mysql mysql  5242880 Aug 12 14:27 ib_logfile0
-rw-rw---- 1 mysql mysql  5242880 Aug 12 08:05 ib_logfile1
-rw-r--r-- 1 root  root      4721 Aug 12 07:55 my.cnf
drwx------ 2 mysql mysql     4096 Aug 12 08:07 mydb
drwx------ 2 mysql root      4096 Aug 12 07:39 mysql
-rw-rw---- 1 mysql mysql      126 Aug 12 13:40 mysql-bin.000001
-rw-rw---- 1 mysql mysql       19 Aug 12 13:18 mysql-bin.index
-rw-r----- 1 mysql root     18748 Aug 12 14:28 node1.example.com.err
-rw-rw---- 1 mysql mysql        6 Aug 12 14:27 node1.example.com.pid
drwx------ 2 mysql mysql     4096 Aug 12 07:39 performance_schema
drwx------ 2 mysql root      4096 Aug 12 07:39 test


7. Login Test

# mysql
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.02 sec)

Note: the MySQL configuration on node1 is complete; now configure node2.

Node2:

1. Stop MySQL and unmount the data directory

# service mysqld stop
[ OK ]
# cd
# umount /mydata/

2. Mount the data directory on the Node2

# mount -t nfs 192.168.0.100:/mydata/ /mydata/
# mount
192.168.0.100:/mydata/ on /mydata type nfs (rw,addr=192.168.0.100)
# cd /mydata/data/
# ll
total 28780
-rw-rw---- 1 mysql mysql 18874368 Aug 12 14:30 ibdata1
-rw-rw---- 1 mysql mysql  5242880 Aug 12 14:30 ib_logfile0
-rw-rw---- 1 mysql mysql  5242880 Aug 12 08:05 ib_logfile1
-rw-r--r-- 1 root  root      4721 Aug 12 07:55 my.cnf
drwx------ 2 mysql mysql     4096 Aug 12 08:07 mydb
drwx------ 2 mysql root      4096 Aug 12 07:39 mysql
-rw-rw---- 1 mysql mysql      126 Aug 12 13:40 mysql-bin.000001
-rw-rw---- 1 mysql mysql       19 Aug 12 13:18 mysql-bin.index
-rw-r----- 1 mysql root     19162 Aug 12 14:30 node1.example.com.err
-rw-r----- 1 mysql root      4442 Aug 12 13:40 node2.example.com.err
drwx------ 2 mysql mysql     4096 Aug 12 07:39 performance_schema
drwx------ 2 mysql root      4096 Aug 12 07:39 test


3. Copy the configuration file and the startup script to the Node2

# scp /etc/my.cnf node2:/etc/
my.cnf                100% 4721   4.6KB/s   00:00
# scp /etc/init.d/mysqld node2:/etc/init.d/
mysqld                100%   11KB  10.6KB/s   00:00

4. Start MySQL

# service mysqld start
Starting MySQL.                          [ OK ]


5. Log in and view

# mysql
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.01 sec)


6. Stop MySQL and unmount the data directory

# service mysqld stop
Shutting down MySQL. [ OK ]
# cd
# umount /mydata/


Note: This completes the MySQL configuration. Next, configure the highly available MySQL cluster.

VI. Configuring Cluster Resources with CRM

1. Set the password for the Hacluster user

# echo "123456" | passwd --stdin hacluster

2. Start the heartbeat graphical interface to configure MySQL high-availability cluster resources

# hb_gui &

3. Monitor the Heartbeat run status

# crm_mon    # monitor the running state of Heartbeat
Last updated: Mon Jul 23:54:16 2014
Current DC: node2.example.com (ea4acb13-d59e-4948-b17b-18177d30e6ca)
2 Nodes configured.
1 Resources configured.
============
Node: node2.example.com (ea4acb13-d59e-4948-b17b-18177d30e6ca): standby
Node: node1.example.com (8b2d315f-af4e-4b91-a14d-105103ba6004): online
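Since the hb_gui screenshots are not reproduced here, the following is a rough sketch of what the equivalent resource definition looks like in Heartbeat v2's CIB XML: a resource group holding the floating IP, the NFS filesystem, and the mysqld LSB service, so they start in order and fail over together. All ids and the VIP address 192.168.0.200 are illustrative assumptions, not values from the original article.

```xml
<!-- Sketch only: an illustrative Heartbeat v2 CIB resource group. The ids
     and the VIP 192.168.0.200 are assumptions, not from the original article. -->
<group id="mysql_group">
  <primitive id="mysql_ip" class="ocf" provider="heartbeat" type="IPaddr">
    <instance_attributes id="mysql_ip_attrs">
      <attributes>
        <nvpair id="mysql_ip_addr" name="ip" value="192.168.0.200"/>
      </attributes>
    </instance_attributes>
  </primitive>
  <primitive id="mysql_fs" class="ocf" provider="heartbeat" type="Filesystem">
    <instance_attributes id="mysql_fs_attrs">
      <attributes>
        <nvpair id="mysql_fs_dev" name="device" value="192.168.0.100:/mydata"/>
        <nvpair id="mysql_fs_dir" name="directory" value="/mydata"/>
        <nvpair id="mysql_fs_type" name="fstype" value="nfs"/>
      </attributes>
    </instance_attributes>
  </primitive>
  <primitive id="mysql_server" class="lsb" type="mysqld"/>
</group>
```

A fragment like this could be loaded with `cibadmin -C -o resources -x mysql_group.xml`; the group enforces the same ordering and colocation that the hb_gui configuration accomplishes.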

The following configures the MySQL high-availability cluster resources:

[Screenshots: configuring the resources step by step in hb_gui (figures 1-5) omitted]


You can see that node2 is the active node and all resources run on node2.





Switch node2 to standby:



You can see that node1 is now the active node and all resources run on node1.


At this point, the experiment is over.

This article is from the "Heaven moves with vigor; the gentleman ceaselessly strives for self-improvement" blog. Please keep this source when reprinting: http://feilong0663.blog.51cto.com/3265903/1531806
