MySQL-MMM Installation Guide (Multi-Master Replication Manager for MySQL)


A basic MMM installation requires at least two database servers and one monitoring server. The MySQL cluster environment configured in this guide consists of four database servers and one monitoring server, as follows:

Function         IP            Hostname  Server ID
Monitoring host  192.168.0.10  mon       -
Master 1         192.168.0.11  db1       1
Master 2         192.168.0.12  db2       2
Slave 1          192.168.0.13  db3       3
Slave 2          192.168.0.14  db4       4



If you are installing this for personal study and cannot easily find five machines, you can run the whole setup on virtual machines.

After the configuration is complete, applications access the MySQL cluster through the following virtual IPs, which MMM assigns to the different servers as needed.

IP             Role    Description
192.168.0.100  writer  Applications should connect to this IP for write operations
192.168.0.101  reader  Applications should connect to one of these IPs for read operations
192.168.0.102  reader
192.168.0.103  reader
192.168.0.104  reader
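For example, an application might open its write connections against the writer VIP and its read connections against one of the reader VIPs. This is only a sketch; the app_user account is illustrative and is not created anywhere in this guide:

app$ mysql -h 192.168.0.100 -u app_user -p    # writes: always handled by the currently active master
app$ mysql -h 192.168.0.101 -u app_user -p    # reads: any of the reader VIPs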




2. Basic Configuration of Master 1

We install MySQL on all hosts:
aptitude install mysql-server

Then we edit the configuration file /etc/mysql/my.cnf and add the following lines. Be sure to use a different server ID on every host:

server_id = 1
log_bin = /var/log/mysql/mysql-bin.log
log_bin_index = /var/log/mysql/mysql-bin.log.index
relay_log = /var/log/mysql/mysql-relay-bin
relay_log_index = /var/log/mysql/mysql-relay-bin.index
expire_logs_days = 10
max_binlog_size = 100M
log_slave_updates = 1






Then remove the following entry:

bind-address = 127.0.0.1

Set auto_increment_increment to the number of masters:

auto_increment_increment = 2

Set auto_increment_offset to a unique value (no greater than auto_increment_increment) on each server:

auto_increment_offset = 1

Do not bind to any specific IP; use 0.0.0.0 instead:

bind-address = 0.0.0.0

Afterwards we need to restart MySQL for our changes to take effect:





/etc/init.d/mysql restart
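To make the per-host differences explicit, here is a sketch of how these settings could look on the four hosts. The values for db2, db3 and db4 are extrapolated from the server-ID table above, not copied from the original guide:

# db1 (master 1)
server_id = 1
auto_increment_increment = 2
auto_increment_offset = 1

# db2 (master 2)
server_id = 2
auto_increment_increment = 2
auto_increment_offset = 2

# db3 and db4 (slaves): use server_id = 3 and 4 respectively; the auto_increment
# settings only matter on the hosts that accept writes.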





3. Create Users

Now we can create the required users. We'll need three different users:





Function          Description                                                                Privileges
monitor user      Used by the MMM monitor to check the health of the MySQL servers          REPLICATION CLIENT
agent user        Used by the MMM agent to change read-only mode, replication master, etc.  SUPER, REPLICATION CLIENT, PROCESS
replication user  Used for replication                                                       REPLICATION SLAVE



Create them with the following GRANT statements:

GRANT REPLICATION CLIENT ON *.* TO 'mmm_monitor'@'192.168.0.%' IDENTIFIED BY 'monitor_password';
GRANT SUPER, REPLICATION CLIENT, PROCESS ON *.* TO 'mmm_agent'@'192.168.0.%' IDENTIFIED BY 'agent_password';
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.0.%' IDENTIFIED BY 'replication_password';

Note: We could be more restrictive here regarding the hosts from which the users are allowed to connect: mmm_monitor is used from 192.168.0.10; mmm_agent and replication are used from 192.168.0.11-192.168.0.14.
Note: Don't use a replication_password longer than 32 characters (MySQL limits replication passwords to 32 characters).
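As an optional sanity check (not part of the original guide), you can confirm the grants on each host:

(db1) mysql> SHOW GRANTS FOR 'mmm_monitor'@'192.168.0.%';
(db1) mysql> SHOW GRANTS FOR 'mmm_agent'@'192.168.0.%';
(db1) mysql> SHOW GRANTS FOR 'replication'@'192.168.0.%';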

4. Synchronisation of data between both databases

I'll assume that db1 contains the correct data. If you have an empty database, you still have to synchronize the accounts we have just created.
First make sure that no one is altering the data while we create a backup:

(db1) mysql> FLUSH TABLES WITH READ LOCK;

Then get the current position in the binary log. We will need these values when we set up replication on db2, db3 and db4:

(db1) mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000002 |      374 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

Don't close this mysql shell. If you close it, the database lock will be released. Open a second console and type:

db1$ mysqldump -u root -p --all-databases > /tmp/database-backup.sql

Now we can remove the database lock. Go back to the first shell:

(db1) mysql> UNLOCK TABLES;

Copy the database backup to db2, db3 and db4:

db1$ scp /tmp/database-backup.sql <user>@192.168.0.12:/tmp
db1$ scp /tmp/database-backup.sql <user>@192.168.0.13:/tmp
db1$ scp /tmp/database-backup.sql <user>@192.168.0.14:/tmp

Then import this into db2, db3 and db4:

db2$ mysql -u root -p < /tmp/database-backup.sql
db3$ mysql -u root -p < /tmp/database-backup.sql
db4$ mysql -u root -p < /tmp/database-backup.sql

Then flush the privileges on db2, db3 and db4. We have altered the user table and MySQL has to reread it:

(db2) mysql> FLUSH PRIVILEGES;
(db3) mysql> FLUSH PRIVILEGES;
(db4) mysql> FLUSH PRIVILEGES;

On Debian and Ubuntu, copy the passwords in /etc/mysql/debian.cnf from db1 to db2, db3 and db4. This password is used for starting and stopping MySQL.
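One simple way to do this (a sketch that assumes root SSH access between the hosts; you can also copy just the password entries by hand) is to copy the file itself:

db1$ scp /etc/mysql/debian.cnf root@192.168.0.12:/etc/mysql/debian.cnf
db1$ scp /etc/mysql/debian.cnf root@192.168.0.13:/etc/mysql/debian.cnf
db1$ scp /etc/mysql/debian.cnf root@192.168.0.14:/etc/mysql/debian.cnf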
All four databases now contain the same data. We can set up replication to keep it that way.
Note: The import only adds the records from the dump file; you should drop all existing databases before importing the dump file.

5. Setup Replication

Configure Replication on DB2, DB3 and DB4 with the following commands:


(db2) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
             master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;
(db3) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
             master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;
(db4) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
             master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;

Insert the values returned by SHOW MASTER STATUS on db1 at the <file> and <position> tags.
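For example, with the SHOW MASTER STATUS output from db1 shown earlier (file mysql-bin.000002, position 374), the command for db2 would read as follows; your values will differ:

(db2) mysql> CHANGE MASTER TO master_host='192.168.0.11', master_port=3306, master_user='replication',
             master_password='replication_password', master_log_file='mysql-bin.000002', master_log_pos=374;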
Start the slave process on all three hosts:


(db2) mysql> START SLAVE;
(db3) mysql> START SLAVE;
(db4) mysql> START SLAVE;

Now check whether replication is running correctly on all hosts:



(db2) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.11
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60
...

(db3) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.11
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60
...

(db4) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.11
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60
...


Now we have to make db1 replicate from db2. First we have to determine the values for master_log_file and master_log_pos:

(db2) mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |       98 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

Now we configure replication on DB1 with the following command:


(db1) mysql> CHANGE MASTER TO master_host='192.168.0.12', master_port=3306, master_user='replication',
             master_password='replication_password', master_log_file='<file>', master_log_pos=<position>;

Now insert the values returned by SHOW MASTER STATUS on db2 at the <file> and <position> tags.
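With the example output from db2 above (file mysql-bin.000001, position 98), this reads:

(db1) mysql> CHANGE MASTER TO master_host='192.168.0.12', master_port=3306, master_user='replication',
             master_password='replication_password', master_log_file='mysql-bin.000001', master_log_pos=98;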

Start the slave process:

(db1) mysql> START SLAVE;

Now check whether replication is running correctly on db1:

(db1) mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
             Slave_IO_State: Waiting for master to send event
                Master_Host: 192.168.0.12
                Master_User: replication
                Master_Port: 3306
              Connect_Retry: 60
...

Replication between the nodes should now be complete. Try it by inserting some data into both db1 and db2 and check that the data appears on all other nodes.
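A minimal test could look like the following (the mmm_test database and table are purely illustrative; thanks to the auto_increment settings the IDs generated on db1 and db2 will not collide):

(db1) mysql> CREATE DATABASE mmm_test;
(db1) mysql> CREATE TABLE mmm_test.t (id INT AUTO_INCREMENT PRIMARY KEY, note VARCHAR(20));
(db1) mysql> INSERT INTO mmm_test.t (note) VALUES ('written on db1');
(db2) mysql> INSERT INTO mmm_test.t (note) VALUES ('written on db2');
(db3) mysql> SELECT * FROM mmm_test.t;    -- both rows should also appear on db3 and db4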

6. Install MMM

Create User
Optional: Create a user that will be the owner of the MMM scripts and configuration files. This provides an easier method to securely manage the monitor scripts.

useradd --comment "MMM Script owner" --shell /sbin/nologin mmmd

Monitoring Host
First install the dependencies:

aptitude install liblog-log4perl-perl libmailtools-perl liblog-dispatch-perl libclass-singleton-perl libproc-daemon-perl libalgorithm-diff-perl libdbi-perl libdbd-mysql-perl

Then fetch the latest mysql-mmm-common*.deb and mysql-mmm-monitor*.deb and install them:

dpkg -i mysql-mmm-common_*.deb mysql-mmm-monitor*.deb

Database Hosts
On Ubuntu, first install the dependencies:

aptitude install liblog-log4perl-perl libmailtools-perl liblog-dispatch-perl iproute libproc-daemon-perl libalgorithm-diff-perl libdbi-perl libdbd-mysql-perl

Then fetch the latest mysql-mmm-common*.deb and mysql-mmm-agent*.deb and install them:

dpkg -i mysql-mmm-common_*.deb mysql-mmm-agent_*.deb

On Red Hat:

yum install -y mysql-mmm-agent

This will take care of the dependencies, which may include:

Installed:
  mysql-mmm-agent.noarch 0:2.2.1-1.el5

Dependency Installed:
  libart_lgpl.x86_64 0:2.3.17-4
  mysql-mmm.noarch 0:2.2.1-1.el5
  perl-Algorithm-Diff.noarch 0:1.1902-2.el5
  perl-DBD-MySQL.x86_64 0:4.008-1.rf
  perl-DateManip.noarch 0:5.44-1.2.1
  perl-IPC-Shareable.noarch 0:0.60-3.el5
  perl-Log-Dispatch.noarch 0:2.20-1.el5
  perl-Log-Dispatch-FileRotate.noarch 0:1.16-1.el5
  perl-Log-Log4perl.noarch 0:1.13-2.el5
  perl-MIME-Lite.noarch 0:3.01-5.el5
  perl-Mail-Sender.noarch 0:0.8.13-2.el5.1
  perl-Mail-Sendmail.noarch 0:0.79-9.el5.1
  perl-MailTools.noarch 0:1.77-1.el5
  perl-Net-ARP.x86_64 0:1.0.6-2.1.el5
  perl-Params-Validate.x86_64 0:0.88-3.el5
  perl-Proc-Daemon.noarch 0:0.03-1.el5
  perl-TimeDate.noarch 1:1.16-5.el5
  perl-XML-DOM.noarch 0:1.44-2.el5
  perl-XML-Parser.x86_64 0:2.34-6.1.2.2.1
  perl-XML-RegExp.noarch 0:0.03-2.el5
  rrdtool.x86_64 0:1.2.27-3.el5
  rrdtool-perl.x86_64 0:1.2.27-3.el5


Configure MMM

All generic configuration options are grouped in a separate file called /etc/mysql-mmm/mmm_common.conf. This file will be the same on all hosts in the system:



active_master_role      writer

<host default>
    cluster_interface       eth0
    pid_path                /var/run/mmmd_agent.pid
    bin_path                /usr/lib/mysql-mmm/
    replication_user        replication
    replication_password    replication_password
    agent_user              mmm_agent
    agent_password          agent_password
</host>

<host db1>
    ip      192.168.0.11
    mode    master
    peer    db2
</host>

<host db2>
    ip      192.168.0.12
    mode    master
    peer    db1
</host>

<host db3>
    ip      192.168.0.13
    mode    slave
</host>

<host db4>
    ip      192.168.0.14
    mode    slave
</host>

<role writer>
    hosts   db1, db2
    ips     192.168.0.100
    mode    exclusive
</role>

<role reader>
    hosts   db1, db2, db3, db4
    ips     192.168.0.101, 192.168.0.102, 192.168.0.103, 192.168.0.104
    mode    balanced
</role>


Don't forget to copy this file to all other hosts (including the monitoring host).

On the database hosts we need to edit /etc/mysql-mmm/mmm_agent.conf. Change 'db1' accordingly on the other hosts:

include mmm_common.conf
this db1
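For example, on db2 the file would simply read:

include mmm_common.conf
this db2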

On the monitor host we need to edit /etc/mysql-mmm/mmm_mon.conf:



include mmm_common.conf

<monitor>
    ip              127.0.0.1
    pid_path        /var/run/mmmd_mon.pid
    bin_path        /usr/lib/mysql-mmm/
    status_path     /var/lib/misc/mmmd_mon.status
    ping_ips        192.168.0.1, 192.168.0.11, 192.168.0.12, 192.168.0.13, 192.168.0.14
</monitor>

<host default>
    monitor_user        mmm_monitor
    monitor_password    monitor_password
</host>

debug 0

ping_ips are some IPs that are pinged to determine whether the network connection of the monitor is OK. I used my switch (192.168.0.1) and the four database servers.


7. Start MMM

Start the Agents
(on the database hosts)

Debian/Ubuntu
Edit /etc/default/mysql-mmm-agent to enable the agent:

ENABLED=1

Red Hat
RHEL/Fedora does not enable packages to start at boot time per default policy, so you might have to turn this on manually so that the agent starts automatically when the server is rebooted:

chkconfig mysql-mmm-agent on

Then start it:

/etc/init.d/mysql-mmm-agent start

Start the Monitor
(on the monitoring host)
Edit /etc/default/mysql-mmm-monitor to enable the monitor:

ENABLED=1

Then start it:

/etc/init.d/mysql-mmm-monitor start

Wait a few seconds for mmmd_mon to start up. Then you can use mmm_control to check the status of the cluster:

mon$ mmm_control show
  db1(192.168.0.11) master/AWAITING_RECOVERY. Roles:
  db2(192.168.0.12) master/AWAITING_RECOVERY. Roles:
  db3(192.168.0.13) slave/AWAITING_RECOVERY. Roles:
  db4(192.168.0.14) slave/AWAITING_RECOVERY. Roles:

Because this is the first startup, the monitor does not yet know our hosts, so it sets all hosts to state AWAITING_RECOVERY and logs a warning message:

mon$ tail /var/log/mysql-mmm/mmm_mon.warn
...
2009/10/28 23:15:28  WARN Detected new host 'db1': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db1' to switch it online.
2009/10/28 23:15:28  WARN Detected new host 'db2': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db2' to switch it online.
2009/10/28 23:15:28  WARN Detected new host 'db3': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db3' to switch it online.
2009/10/28 23:15:28  WARN Detected new host 'db4': Setting its initial state to 'AWAITING_RECOVERY'. Use 'mmm_control set_online db4' to switch it online.

Now we set our hosts online (db1 first, because the slaves replicate from this host):

mon$ mmm_control set_online db1
OK: State of 'db1' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db2
OK: State of 'db2' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db3
OK: State of 'db3' changed to ONLINE. Now you can wait some time and check its new roles!
mon$ mmm_control set_online db4
OK: State of 'db4' changed to ONLINE. Now you can wait some time and check its new roles!
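After a short wait you can run mmm_control show again; the writer role should now sit on one of the masters and the reader roles should be spread across all four hosts. mmm_control checks all is also useful for verifying that all monitor checks pass (the exact output depends on your setup, so it is not reproduced here):

mon$ mmm_control show
mon$ mmm_control checks all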

Reference: http://mysql-mmm.org/mmm2:guide
