1. Experiment environment
HA1: 192.168.1.17 (RHEL5.8_32bit, web server)
HA2: 192.168.1.18 (RHEL5.8_32bit, web server)
NFS: 192.168.1.19 (RHEL5.8_32bit, NFS server)
VIP: 192.168.1.20
2. Build an NFS server
<1> Create an LVM logical volume
# fdisk /dev/sda    (n --> e --> n --> +8G --> t --> 5 --> 8e --> w)
# partprobe /dev/sda
# pvcreate /dev/sda5
# vgcreate myvg /dev/sda5
# lvcreate -L 5G -n mydata myvg
# lvs
# mke2fs -j /dev/myvg/mydata
# mkdir /mydata
# vim /etc/fstab
/dev/myvg/mydata /mydata ext3 defaults 0 0
# mount -a
<2> Create a MySQL account
# groupadd -g 3306 mysql
# useradd -u 3306 -g mysql -s /sbin/nologin -M mysql
# mkdir /mydata/data
# chown -R mysql.mysql /mydata/data
<3> Configure the NFS service
# vim /etc/exports
/mydata 192.168.1.17(no_root_squash,rw) 192.168.1.18(no_root_squash,rw)
# exportfs -arv
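Before moving on, the export can be verified from either node with showmount (a quick sanity check, not part of the original steps):
# showmount -e 192.168.1.19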
3. Create a MySQL account (on both nodes)
Each node and the NFS server must have a mysql account, and the UID/GID must be the same on all machines.
# groupadd -g 3306 mysql
# useradd -u 3306 -g 3306 -s /sbin/nologin -M mysql
# mkdir /mydata
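To confirm the IDs really match across machines as required above, check the account on each host (an optional check, not in the original write-up); the expected output is roughly:
# id mysql
uid=3306(mysql) gid=3306(mysql) groups=3306(mysql)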
4. Mount the NFS directory and test write access (on both nodes)
# mount 192.168.1.19:/mydata /mydata
# ll /mydata
# usermod -s /bin/bash mysql
# su - mysql
$ cd /mydata/data
$ touch a
$ ls
$ rm a
$ exit
# usermod -s /sbin/nologin mysql
# umount /mydata
5. Generic binary installation and configuration of the MySQL service (on both nodes)
<1> Decompress the package and modify the directory permissions
# tar xf mysql-5.5.28-linux2.6-i686.tar.gz -C /usr/local
# cd /usr/local
# ln -sv mysql-5.5.28-linux2.6-i686 mysql
# cd mysql
# chown -R root:mysql ./*
<2> Mount the NFS directory and initialize MySQL
MySQL initialization runs as root, so the NFS export has to allow no_root_squash, which weakens security. Therefore, restrict the export to the specific node addresses as much as possible rather than a whole network.
HA1 (MySQL initialization only needs to be done once; the other node skips this step):
# mount 192.168.1.19:/mydata /mydata
# scripts/mysql_install_db --user=mysql --datadir=/mydata/data/
# ll /mydata/data/
<3> Modify the configuration file
# cp support-files/my-large.cnf /etc/my.cnf
# vim /etc/my.cnf
[mysqld]
datadir = /mydata/data
innodb_file_per_table = 1
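To confirm mysqld will actually pick these options up from /etc/my.cnf, the my_print_defaults tool shipped with the generic binaries can be used (an optional check, not part of the original article):
# /usr/local/mysql/bin/my_print_defaults mysqld | grep -E 'datadir|innodb_file_per_table'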
<4> Add a startup script (and disable auto-start)
# cp support-files/mysql.server /etc/init.d/mysqld
# chkconfig --add mysqld
# chkconfig mysqld off
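Auto-start is disabled because Heartbeat, not init, must decide where mysqld runs. The runlevel settings can be verified with chkconfig (an optional check; the output below is what is normally expected after turning the service off):
# chkconfig --list mysqld
mysqld          0:off   1:off   2:off   3:off   4:off   5:off   6:off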
6. Mount the NFS directory and test whether the MySQL service works (on each node)
# mount 192.168.1.19:/mydata /mydata
# service mysqld start
# /usr/local/mysql/bin/mysql
mysql> create database mydb;
mysql> show databases;
mysql> show global variables like '%innodb%';
# service mysqld stop
# umount /mydata
7. Start Heartbeat v2 and use the CRM to configure resources
HA1:
# service heartbeat start
# ssh node2 'service heartbeat start'
# crm_mon    (check which node is the master)
# hb_gui &
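crm_mon runs continuously by default; for a one-shot status snapshot the -1 option can be used instead (a small convenience, not part of the original steps):
# crm_mon -1    (print the cluster status once and exit)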
CRM configuration: create a resource group (resource startup order: vip, mysqlstore, mysqld); a command-line equivalent is sketched after the screenshots below.
[Screenshots: crm01.png, crm02.png, crm03.png — configuring the resource group in hb_gui]
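For reference, roughly the same group can also be created without hb_gui by loading a CIB fragment with cibadmin. The sketch below assumes the stock ocf:heartbeat:IPaddr and Filesystem agents plus the LSB mysqld script added earlier; the resource and attribute IDs are illustrative, not taken from the original article:
# cat > /tmp/mysql_group.xml << 'EOF'
<!-- members of a group start in the listed order: vip -> mysqlstore -> mysqld -->
<group id="mysql_group">
  <primitive id="vip" class="ocf" provider="heartbeat" type="IPaddr">
    <instance_attributes id="vip_ia"><attributes>
      <nvpair id="vip_ip" name="ip" value="192.168.1.20"/>
    </attributes></instance_attributes>
  </primitive>
  <primitive id="mysqlstore" class="ocf" provider="heartbeat" type="Filesystem">
    <instance_attributes id="store_ia"><attributes>
      <nvpair id="store_dev" name="device" value="192.168.1.19:/mydata"/>
      <nvpair id="store_dir" name="directory" value="/mydata"/>
      <nvpair id="store_fs" name="fstype" value="nfs"/>
    </attributes></instance_attributes>
  </primitive>
  <primitive id="mysqld" class="lsb" type="mysqld"/>
</group>
EOF
# cibadmin -C -o resources -x /tmp/mysql_group.xml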
8. Use the mysql client for testing
<1> Configure a mysql account for remote access on the master node
# /usr/local/mysql/bin/mysql
mysql> grant all on *.* to root@'%' identified by 'redhat';
mysql> flush privileges;
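The new grant can be confirmed from the same mysql prompt (a quick check, not in the original article). Note that root@'%' is convenient for this test but far too broad for production use:
mysql> select user,host from mysql.user where user='root';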
<2> Client logon test
# mysql -uroot -h192.168.1.20 -p
mysql> show databases;
mysql> use mydb;
mysql> create table test (id int unsigned not null auto_increment primary key, name char(20));
<3> Switch to the standby node and test client logon again
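The original article does not show how the switchover is triggered. One common way, assuming the standard Heartbeat helper location on this platform, is to put the active node into standby (or simply stop heartbeat on it) and then watch the resources move:
# /usr/lib/heartbeat/hb_standby    (run on the active node; the path may differ by install)
# crm_mon -1    (confirm the resource group is now running on the standby node)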
# mysql -uroot -h192.168.1.20 -p
mysql> use mydb;
mysql> show tables;
mysql> desc test;
This article is from the "Don't dead birds a Hui" blog; please keep this source when reproducing it: http://phenixikki.blog.51cto.com/7572938/1304683