Correct configuration steps for a MySQL database cluster (2010-06-09, arrowcat)
What I want to share with you today is the correct way to configure a MySQL database cluster, based on information I came across on a related site a couple of days ago.
This article focuses on the actual steps for correctly configuring a MySQL database cluster, along with the concepts behind them. If you are interested in hands-on practice, the sections below provide the relevant details.
I. Introduction
This document shows you how to install and configure a MySQL database cluster on 2 servers, such that MySQL continues to run when either server has problems or goes down.
Attention!
Although this is a MySQL cluster based on 2 servers, an additional third server is required as the management node. This server can in principle be shut down after the cluster has started, but doing so is not recommended. In theory you could build a MySQL cluster with only 2 servers, but with that architecture, once one server goes down the cluster can no longer work properly, which defeats the purpose of clustering. For this reason, a third server is required to run as the management node.
In addition, many of you may not have 3 physical servers available; consider experimenting in VMware or another virtual machine.
The following assumes these 3 servers:
- Server1: mysql1.vmtest.net 192.168.0.1
- Server2: mysql2.vmtest.net 192.168.0.2
- Server3: mysql3.vmtest.net 192.168.0.3
Server1 and Server2 are the servers that actually make up the MySQL database cluster. The requirements for Server3, the management node, are low: it needs only minimal system adjustments and no MySQL installation, so it can be a lower-spec machine and can run other services at the same time.
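Since the machines are referred to by hostname as well as IP, it may help to make sure each server can resolve the others. A minimal /etc/hosts sketch (assuming the addresses above; adjust for your own network):
- 192.168.0.1 mysql1.vmtest.net mysql1
- 192.168.0.2 mysql2.vmtest.net mysql2
- 192.168.0.3 mysql3.vmtest.net mysql3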
II. Installing MySQL on Server1 and Server2
Note: You must use the Max version of MySQL; the Standard version does not support cluster deployment!
The following steps need to be done on both Server1 and Server2:
- # mv mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz /usr/local/
- # cd /usr/local/
- # groupadd mysql
- # useradd -g mysql mysql
- # tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
- # rm -f mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
- # mv mysql-max-4.1.9-pc-linux-gnu-i686 mysql
- # cd mysql
- # scripts/mysql_install_db --user=mysql
- # chown -R root .
- # chown -R mysql data
- # chgrp -R mysql .
- # cp support-files/mysql.server /etc/rc.d/init.d/mysqld
- # chmod +x /etc/rc.d/init.d/mysqld
- # chkconfig --add mysqld
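As an optional sanity check before continuing (a hedged example; the paths follow the install steps above), you can confirm that this is the Max build and that the NDB binaries are present:
- # /usr/local/mysql/bin/mysqld --version
- # ls /usr/local/mysql/bin | grep ndb
The version string should typically mention "max", and the listing should include ndbd.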
Do not start MySQL at this time!
III. Installing and configuring the management node server (Server3)
As the management node server, Server3 requires only two files: ndb_mgm and ndb_mgmd.
Download mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz from http://www.mysql.com
- # mkdir /usr/src/mysql-mgm
- # cd /usr/src/mysql-mgm
- # tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
- # rm mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
- # cd mysql-max-4.1.9-pc-linux-gnu-i686
- # mv bin/ndb_mgm .
- # mv bin/ndb_mgmd .
- # chmod +x ndb_mg*
- # mv ndb_mg* /usr/bin/
- # cd
- # rm -rf /usr/src/mysql-mgm
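A quick optional check (a hedged example) that both management binaries are now on the PATH:
- # which ndb_mgm ndb_mgmd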
Now create the configuration file for the management node server:
- # mkdir /var/lib/mysql-cluster
- # cd /var/lib/mysql-cluster
- # vi config.ini
In config.ini, add the following:
- [NDBD DEFAULT]
- NoOfReplicas=2
- [MYSQLD DEFAULT]
- [NDB_MGMD DEFAULT]
- [TCP DEFAULT]
- # Management Server
- [NDB_MGMD]
- HostName=192.168.0.3 # IP address of the management node server, Server3
- # Storage Engines
- [NDBD]
- HostName=192.168.0.1 # IP address of MySQL cluster node Server1
- DataDir=/var/lib/mysql-cluster
- [NDBD]
- HostName=192.168.0.2 # IP address of MySQL cluster node Server2
- DataDir=/var/lib/mysql-cluster
In the two [MYSQLD] sections below, you could fill in the hostnames of Server1 and Server2.
However, to make it faster to swap a server in or out of the cluster, it is recommended to leave them blank; otherwise the configuration would have to be changed whenever a server is replaced.
- [MYSQLD]
- [MYSQLD]
After saving and exiting, start the management node on Server3:
- # ndb_mgmd
Note that this starts only the management node service, not the management client, so you will not see any output after startup.
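If you want to confirm that the daemon really is running (an optional, hedged check; the management server listens on port 1186, as the client output in section V also shows), you might do:
- # ps aux | grep ndb_mgmd
- # netstat -lnp | grep 1186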
IV. Configuring the cluster servers and starting MySQL
The following changes are required on both Server1 and Server2:
- # vi /etc/my.cnf
- [mysqld]
- ndbcluster
- ndb-connectstring=192.168.0.3 # IP address of Server3, the management node
- [mysql_cluster]
- ndb-connectstring=192.168.0.3 # IP address of Server3, the management node
After saving and exiting, create the data directory and start MySQL:
- # mkdir /var/lib/mysql-cluster
- # cd /var/lib/mysql-cluster
- # /usr/local/mysql/bin/ndbd --initial
- # /etc/rc.d/init.d/mysqld start
/usr/local/mysql/bin/ndbd can be added to /etc/rc.local so that it starts automatically at boot.
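For example, appending it to /etc/rc.local might look like this (a minimal sketch; --initial is omitted on purpose for routine startups, as explained in the note below):
- # echo '/usr/local/mysql/bin/ndbd' >> /etc/rc.local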
Note: You only need the --initial parameter the first time you start ndbd, or after making changes to config.ini on Server3!
V. Checking the working status
Go back to the management node server (Server3) and start the management client:
- # /usr/bin/ndb_mgm
Type the show command to view the current working status (below is an example of the status output):
- [root@mysql3 root]# /usr/bin/ndb_mgm
- -- NDB Cluster -- Management Client --
- ndb_mgm> show
- Connected to Management Server at: localhost:1186
- Cluster Configuration
- [ndbd(NDB)] 2 node(s)
- id=2 @192.168.0.1 (Version: 4.1.9, Nodegroup: 0, Master)
- id=3 @192.168.0.2 (Version: 4.1.9, Nodegroup: 0)
- [ndb_mgmd(MGM)] 1 node(s)
- id=1 @192.168.0.3 (Version: 4.1.9)
- [mysqld(API)] 2 node(s)
- id=4 (Version: 4.1.9)
- id=5 (Version: 4.1.9)
- ndb_mgm>
If there is no problem, start testing MySQL now:
Note that this document does not set a root password for MySQL; it is recommended that you set the MySQL root password on Server1 and Server2 yourself.
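For example (a hedged sketch; replace yourpassword with a password of your own), the root password can be set with mysqladmin on each server:
- # /usr/local/mysql/bin/mysqladmin -u root password 'yourpassword'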
On Server1:
- # /usr/local/mysql/bin/mysql -u root -p
- > use test;
- > CREATE TABLE ctest (i INT) ENGINE=NDBCLUSTER;
- > INSERT INTO ctest () VALUES (1);
- > SELECT * FROM ctest;
You should see 1 row returned (with the value 1).
If the above works, switch to Server2 and repeat the same test to observe the effect. If that succeeds, run an INSERT on Server2 and then switch back to Server1 to check whether everything still works, as in the sketch below.
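A possible round trip on Server2 (a sketch; the database and table come from the test above):
- # /usr/local/mysql/bin/mysql -u root -p
- > use test;
- > SELECT * FROM ctest; # should return the row inserted on Server1
- > INSERT INTO ctest () VALUES (2);
Back on Server1, SELECT * FROM ctest; should now return both rows.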
If there are no problems, then congratulations, you have succeeded!
VI. Destructive Testing
Unplug the network cable of Server1 or Server2 and check whether the other MySQL cluster server keeps working normally (you can test it with a SELECT query). When the test is finished, plug the network cable back in.
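One possible quick probe from the surviving node (a hedged example using the mysql client's -e option):
- # /usr/local/mysql/bin/mysql -u root -p -e "SELECT * FROM test.ctest;"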
If you don't have access to physical servers, meaning you can't actually unplug a network cable, you can test it like this:
On Server1 or Server2:
- # ps aux | grep ndbd
You will see all the ndbd process information:
- root 5578 0.0 0.3 6220 1964 ? S 03:14 0:00 ndbd
- root 5579 0.0 20.4 492072 102828 ? R 03:14 0:04 ndbd
- root 23532 0.0 0.1 3680 684 pts/1 S 07:59 0:00 grep ndbd
Then kill the ndbd processes to simulate a failure of that cluster server:
- # kill -9 5578 5579
Then run the SELECT query test on the other cluster server. Executing the show command in the management client on the management node will also show the failed state of that server.
Once the test is complete, you only need to restart the ndbd process on the failed server:
- # ndbd
Attention! As mentioned before, there is no need to add the --initial parameter this time!
At this point, the configuration of the MySQL database cluster is complete!