Install MySQL-MMM on RedHat 6.3
This is the last article of the year; the holiday starts this afternoon.
MMM official introduction:
MMM (Multi-Master Replication Manager for MySQL) is a set of flexible scripts to perform monitoring/failover and management of MySQL master-master replication deployments (with only one node writable at any time).
The toolset also has the ability to read-balance standard master/slave configurations with any number of slaves, so you can use it to move virtual IP addresses around a group of servers depending on whether they are behind in replication.
The current version of this software is stable, but the authors would appreciate any comments, suggestions, or bug reports about this version to make it even better. Current 2.0 development is led by Pascal Hofmann. If you require commercial support, advice or assistance with deployment, please contact Percona or Open Query.
Installation environment:
One monitoring server and two MySQL servers, each of which is both master and slave:
Monitor:        OS: redhat6.3  Name: zbdba1  IP: 192.168.56.170
MySQL Server 1: OS: redhat6.3  Name: zbdba2  IP: 192.168.56.171
MySQL Server 2: OS: redhat6.3  Name: zbdba3  IP: 192.168.56.172
1. Install MMM monitoring
2. Install the MMM agent
3. Install mysql
4. Configure master-master replication
5. Create a user
6. Configuration
7. Start MMM
8. Test
1. Install MMM monitoring
The EPEL yum repository is configured here, but the following packages are missing and have to be installed by hand:
rpm -ivh ftp://195.220.108.108/linux/dag/redhat/el6/en/x86_64/extras/RPMS/perl-Algorithm-Diff-1.1902-1.el6.rfx.noarch.rpm
rpm -ivh http://pkgs.repoforge.org/perl-Email-Date-Format/perl-Email-Date-Format-1.002-1.el6.rfx.noarch.rpm
From the local source:
yum install rrdtool*
rpm -ivh rrdtool-perl-1.3.8-6.el6.x86_64.rpm
Finally:
yum install mysql-mmm*
2. Install the MMM agent
Set up the repositories the same way as in step 1, then:
yum -y install mysql-mmm-agent
3. Install mysql
4. Configure master-master replication
These two steps are standard and are not covered in detail here.
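For completeness, a minimal sketch of the master-master setup behind steps 3 and 4, assuming MySQL is already installed on both nodes. The replication user (`repl`/`repl_password`) and the binlog file/position are placeholders, not values from the original article; take the real coordinates from `SHOW MASTER STATUS` on the peer.

```sql
-- /etc/my.cnf on zbdba2 (mirror on zbdba3 with server-id = 2):
--   server-id = 1
--   log-bin   = mysql-bin
--   read-only = 1      -- MMM toggles writability on role changes

-- On zbdba2, point replication at zbdba3 (file/position are placeholders):
CHANGE MASTER TO
  MASTER_HOST='192.168.56.172',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl_password',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=107;
START SLAVE;

-- On zbdba3, repeat with MASTER_HOST='192.168.56.171'.
```

Verify with `SHOW SLAVE STATUS\G` on both nodes before moving on.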
5. Create a user
On one of the MySQL nodes (with master-master replication running, the grants replicate to the other):
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.56.%' IDENTIFIED BY 'mysql';
GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.56.%' IDENTIFIED BY 'mysql';
6. Configuration
[root@zbdba1 mysql-mmm]# cat /etc/mysql-mmm/mmm_common.conf
[root@zbdba1 mysql-mmm]# cat /etc/mysql-mmm/mmm_mon.conf
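The original listings of these files were lost in formatting. A sketch consistent with the hosts, VIPs, and users shown elsewhere in this article follows; the paths and the `repl`/`repl_password` replication credentials are assumptions, so adjust them to your environment.

```
# /etc/mysql-mmm/mmm_common.conf (identical on all three nodes)
active_master_role      writer

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mysql-mmm/mmm_agentd.pid
    bin_path                /usr/libexec/mysql-mmm/
    replication_user        repl
    replication_password    repl_password
    agent_user              mmm_agent
    agent_password          mysql
</host>

<host db1>
    ip      192.168.56.171
    mode    master
    peer    db2
</host>

<host db2>
    ip      192.168.56.172
    mode    master
    peer    db1
</host>

<role writer>
    hosts   db1, db2
    ips     192.168.56.173
    mode    exclusive
</role>

<role reader>
    hosts   db1, db2
    ips     192.168.56.174, 192.168.56.175
    mode    balanced
</role>

# /etc/mysql-mmm/mmm_mon.conf (monitor node only)
include mmm_common.conf

<monitor>
    ip              127.0.0.1
    pid_path        /var/run/mysql-mmm/mmm_mond.pid
    bin_path        /usr/libexec/mysql-mmm
    status_path     /var/lib/mysql-mmm/mmm_mond.status
    ping_ips        192.168.56.171, 192.168.56.172
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    mysql
</host>
```

On the agent nodes, `/etc/mysql-mmm/mmm_agent.conf` needs only `include mmm_common.conf` plus a `this db1` (or `this db2`) line naming the local host.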
7. Start MMM
Start the agent node:
[root@zbdba2 default]# /etc/init.d/mysql-mmm-agent start
[root@zbdba3 default]# /etc/init.d/mysql-mmm-agent start
Start the monitoring node:
[root@zbdba1 default]# /etc/init.d/mysql-mmm-monitor start
View status:
[root@zbdba1 mysql-mmm]# mmm_control show
  db1(192.168.56.171) master/ONLINE. Roles: reader(192.168.56.175), writer(192.168.56.173)
  db2(192.168.56.172) master/ONLINE. Roles: reader(192.168.56.174)

[root@zbdba2 default]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:70:d2:ad brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.171/24 brd 192.168.56.255 scope global eth0
    inet 192.168.56.175/32 scope global eth0
    inet 192.168.56.173/32 scope global eth0
    inet6 fe80::a00:27ff:fe70:d2ad/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:57:10:81 brd ff:ff:ff:ff:ff:ff
    inet 192.168.253.111/24 brd 192.168.253.255 scope global eth1
    inet6 fe80::a00:27ff:fe57:1081/64 scope link
       valid_lft forever preferred_lft forever

[root@zbdba3 mysql-mmm]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:39:b0:e7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.172/24 brd 192.168.56.255 scope global eth0
    inet 192.168.56.174/32 scope global eth0
    inet6 fe80::a00:27ff:fe39:b0e7/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:08:3b:71 brd ff:ff:ff:ff:ff:ff
    inet 192.168.253.112/24 brd 192.168.253.255 scope global eth1
    inet6 fe80::a00:27ff:fe08:3b71/64 scope link
       valid_lft forever preferred_lft forever
zbdba2 (db1) holds the writer VIP (192.168.56.173) plus one reader VIP, while zbdba3 (db2) holds the other reader VIP.
8. Test
Stop MySQL on zbdba2:
[root@zbdba2 default]# service mysql stop
Shutting down MySQL... SUCCESS!
View the monitor status again:
[root@zbdba1 mysql-mmm]# mmm_control show
  db1(192.168.56.171) master/HARD_OFFLINE. Roles:
  db2(192.168.56.172) master/ONLINE. Roles: reader(192.168.56.174), reader(192.168.56.175), writer(192.168.56.173)

[root@zbdba3 mysql-mmm]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:39:b0:e7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.172/24 brd 192.168.56.255 scope global eth0
    inet 192.168.56.174/32 scope global eth0
    inet 192.168.56.175/32 scope global eth0
    inet 192.168.56.173/32 scope global eth0
    inet6 fe80::a00:27ff:fe39:b0e7/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:08:3b:71 brd ff:ff:ff:ff:ff:ff
    inet 192.168.253.112/24 brd 192.168.253.255 scope global eth1
    inet6 fe80::a00:27ff:fe08:3b71/64 scope link
       valid_lft forever preferred_lft forever
All VIPs, including the writer VIP, have been moved to zbdba3.
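One follow-up the original test does not show: after MySQL is restarted on zbdba2, MMM does not automatically hand roles back. The node typically moves from HARD_OFFLINE to AWAITING_RECOVERY and must be set online by hand from the monitor (unless `auto_set_online` is configured in mmm_mon.conf). A hypothetical session, assuming the setup above, would look like:

```
[root@zbdba2 default]# service mysql start
[root@zbdba1 mysql-mmm]# mmm_control show        # db1 should now be AWAITING_RECOVERY
[root@zbdba1 mysql-mmm]# mmm_control set_online db1
[root@zbdba1 mysql-mmm]# mmm_control show        # db1 back ONLINE and eligible for roles
```

Check that replication on db1 has caught up (`SHOW SLAVE STATUS\G`) before setting it online, or the reader VIPs may serve stale data.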