Installation Requirements
Installation Environment: CentOS-6.3
Installation method: source code compilation
Software name: mysql-cluster-gpl-7.2.6-linux2.6-x86_64.tar.gz
Download address: http://mysql.mirror.kangaroot.net/Downloads/
Software installation location: /usr/local/mysql
Data storage location: /var/mysql/data
Log storage location: /var/mysql/logs
Cluster design
First, design the cluster layout. A total of 5 servers are needed, allocated as follows:
Management node: 192.168.15.231
SQL node 1: 192.168.15.232
SQL node 2: 192.168.15.233
Data node 1: 192.168.15.234
Data node 2: 192.168.15.235
The servers are connected as shown in the following illustration (not reproduced here).
Note: the problem with this design is that the management node is a single point of failure; if 231 goes down, the whole cluster is paralyzed. We will set that problem aside for now and first get this simple MySQL cluster built.
Check for installed MySQL
Check whether MySQL is already installed on the system:
[root@localhost /]# rpm -qa | grep mysql
[root@localhost /]# service mysqld status
If another version of MySQL is already installed, uninstall it first:
[root@localhost /]# /etc/init.d/mysqld stop    # stop the current MySQL service
[root@localhost /]# ps -ef | grep mysql        # verify that MySQL has stopped
# If it has not stopped, run kill -9 <process ID>
# Then remove the packages:
rpm -e --allmatches --nodeps mysql mysql-server
rm -rf /var/lib/mysql    # delete the MySQL data directory
Management node installation
Install the management node (192.168.15.231):
[root@localhost /]# groupadd mysql
[root@localhost /]# useradd -g mysql mysql
[root@localhost /]# cd /usr/local
[root@localhost local]# tar -zxvf mysql-cluster-gpl-7.2.6-linux2.6-x86_64.tar.gz
[root@localhost local]# mv mysql-cluster-gpl-7.2.6-linux2.6-x86_64 mysql
[root@localhost local]# chown -R mysql:mysql mysql
[root@localhost local]# cd mysql
[root@localhost mysql]# scripts/mysql_install_db --user=mysql
Management node configuration
[root@localhost ~]# mkdir /var/lib/mysql-cluster
[root@localhost ~]# cd /var/lib/mysql-cluster
[root@localhost mysql-cluster]# vi /var/lib/mysql-cluster/config.ini
Add the following to config.ini:
[NDBD DEFAULT]
NoOfReplicas=1
[TCP DEFAULT]
portnumber=3306
[NDB_MGMD]
# management node server
HostName=192.168.15.231
DataDir=/var/mysql/data
[NDBD]
# first storage node server (NDB node)
HostName=192.168.15.234
DataDir=/var/mysql/data
[NDBD]
# second NDB node
HostName=192.168.15.235
DataDir=/var/mysql/data
[MYSQLD]
# first SQL node server
HostName=192.168.15.232
[MYSQLD]
# second SQL node
HostName=192.168.15.233
Management node startup
[root@localhost ~]# /usr/local/mysql/bin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini
[root@localhost ~]# mkdir /var/mysql/logs
[root@localhost ~]# netstat -lntpu    # if tcp 0 0 0.0.0.0:1186 is listed, the management node started normally
Open port 1186 in the management node server's firewall.
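On CentOS 6 the firewall is normally iptables; a minimal sketch of opening the port, assuming the stock iptables service is in use:
[root@localhost ~]# iptables -I INPUT -p tcp --dport 1186 -j ACCEPT
[root@localhost ~]# service iptables save    # persist the rule across reboots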
Management node Inspection
Perform the following actions:
[root@localhost /]# ndb_mgm    # management client
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2 (not connected, accepting connect from 192.168.15.234)
id=3 (not connected, accepting connect from 192.168.15.235)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.15.231  (mysql-5.5.22 ndb-7.2.6)
[mysqld(API)]   2 node(s)
id=4 (not connected, accepting connect from 192.168.15.232)
id=5 (not connected, accepting connect from 192.168.15.233)
Management node shutdown
[root@localhost /]# /usr/local/mysql/bin/ndb_mgm -e shutdown
# On success, the terminal prints the following:
Connected to Management Server at: 192.168.15.231:1186
3 NDB Cluster node(s) have shutdown.
Disconnecting to allow Management Server to shutdown.
Data Nodes
Data node installation
Data node 1: 192.168.15.234
Data node 2: 192.168.15.235
[root@localhost /]# groupadd mysql
[root@localhost /]# useradd -g mysql mysql
[root@localhost /]# cd /usr/local
[root@localhost local]# tar -zxvf mysql-cluster-gpl-7.2.6-linux2.6-x86_64.tar.gz
[root@localhost local]# mv mysql-cluster-gpl-7.2.6-linux2.6-x86_64 mysql
[root@localhost local]# chown -R mysql:mysql mysql
[root@localhost local]# cd mysql
[root@localhost mysql]# scripts/mysql_install_db --user=mysql
[root@localhost mysql]# cp support-files/my-medium.cnf /etc/my.cnf
[root@localhost mysql]# cp support-files/mysql.server /etc/init.d/mysqld
Data node configuration
Configure the data node as follows:
[root@localhost mysql]# mkdir /var/mysql/data
[root@localhost mysql]# mkdir /var/mysql/logs
[root@localhost mysql]# vi /etc/my.cnf
Append the following to the file:
[mysqld]
ndbcluster
ndb-connectstring=192.168.15.231
[mysql_cluster]
ndb-connectstring=192.168.15.231
[ndb_mgm]
connect-string=192.168.15.231
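To verify from the data node that the connect string is reachable before starting ndbd, one quick check is the management client with an explicit connect string (a sketch; -c passes the management server address):
[root@localhost mysql]# /usr/local/mysql/bin/ndb_mgm -c 192.168.15.231 -e show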
Data node Startup
Before starting, ports 1186 and 3306 must be open in the management node server's firewall.
Note: the --initial parameter is added only the first time ndbd is started, or after a backup/restore or configuration change!
The first start is as follows:
[root@localhost mysql]# /usr/local/mysql/bin/ndbd --initial
2013-01-30 13:43:53 [ndbd] INFO -- Angel connected to '192.168.15.231:1186'
2013-01-30 13:43:53 [ndbd] INFO -- Angel allocated nodeid: 2
Normal start mode:
[root@localhost mysql]# /usr/local/mysql/bin/ndbd
Data node shutdown
[root@localhost /]# /etc/rc.d/init.d/mysqld stop
or
[root@localhost mysql]# /etc/init.d/mysqld stop
Shutting down MySQL. SUCCESS!
or
[root@localhost mysql]# /usr/local/mysql/bin/mysqladmin -uroot shutdown
SQL node installation
The SQL node installation is the same as the storage node (NDB node) installation; perform all of the following on both SQL nodes:
SQL node 1: 192.168.15.232
SQL node 2: 192.168.15.233
[root@localhost /]# groupadd mysql
[root@localhost /]# useradd -g mysql mysql
[root@localhost /]# cd /usr/local
[root@localhost local]# tar -zxvf mysql-cluster-gpl-7.2.6-linux2.6-x86_64.tar.gz
[root@localhost local]# mv mysql-cluster-gpl-7.2.6-linux2.6-x86_64 mysql
[root@localhost local]# chown -R mysql:mysql mysql
[root@localhost local]# cd mysql
[root@localhost mysql]# scripts/mysql_install_db --user=mysql
[root@localhost mysql]# cp support-files/my-medium.cnf /etc/my.cnf
[root@localhost mysql]# cp support-files/mysql.server /etc/init.d/mysqld
SQL node configuration
Perform the following actions:
[root@localhost mysql]# mkdir /var/mysql/data    # create the data directory
[root@localhost mysql]# mkdir /var/mysql/logs    # create the log directory
[root@localhost mysql]# vi /etc/my.cnf           # edit the config file and append the following:
[mysqld]
ndbcluster
ndb-connectstring=192.168.15.231
[mysql_cluster]
ndb-connectstring=192.168.15.231
[ndb_mgm]
connect-string=192.168.15.231
SQL node startup
Perform the following actions:
[root@localhost mysql]# service mysqld start
Starting MySQL. SUCCESS!
SQL node shutdown
The most direct way:
[root@localhost mysql]# /usr/local/mysql/bin/mysqladmin -uroot shutdown
[root@localhost /]# /etc/rc.d/init.d/mysqld stop
or
[root@localhost mysql]# /etc/init.d/mysqld stop
Shutting down MySQL. SUCCESS!
Functional testing
View the service status on the management node (192.168.15.231):
[root@localhost ~]# /usr/local/mysql/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.15.234  (mysql-5.5.22 ndb-7.2.6, Nodegroup: 0, Master)
id=3    @192.168.15.235  (mysql-5.5.22 ndb-7.2.6, Nodegroup: 1)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.15.231  (mysql-5.5.22 ndb-7.2.6)
[mysqld(API)]   2 node(s)
id=4    @192.168.15.232  (mysql-5.5.22 ndb-7.2.6)
id=5    @192.168.15.233  (mysql-5.5.22 ndb-7.2.6)
You can see that the data nodes, management node, and SQL nodes are all running normally.
Attention:
1. Be sure to specify the NDB cluster storage engine with ENGINE=NDB or ENGINE=NDBCLUSTER when creating tables, or convert an existing table with ALTER TABLE (see the example after this list).
2. An NDB table must have a primary key, so define one when creating the table; otherwise the NDB storage engine automatically generates a hidden primary key.
3. The user privilege tables on the SQL nodes are still stored with the MyISAM storage engine, so a MySQL user created on one SQL node can only access that node; to use the same user against another SQL node, create the user on that SQL node as well. (MySQL Cluster 7.2 began to offer shared, distributed user privileges.)
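A short sketch of points 1 and 2, using hypothetical table names t1 and t2:
mysql> ALTER TABLE t1 ENGINE=NDBCLUSTER;    -- convert an existing table to the NDB engine
mysql> CREATE TABLE t2 (id INT NOT NULL PRIMARY KEY, v VARCHAR(32)) ENGINE=NDBCLUSTER;    -- explicit primary key instead of the hidden one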
Data Synchronization Test
Create a database on one SQL node, then check on the other SQL node whether the data has been synchronized.
Step 1:
Add data on SQL node 1 (192.168.15.232):
[root@localhost mysql]# /etc/rc.d/init.d/mysqld status    # check whether MySQL is running
[root@localhost mysql]# /etc/rc.d/init.d/mysqld start     # start MySQL
[root@localhost mysql]# /usr/local/mysql/bin/mysql -u root -p
Enter password:
mysql> SHOW DATABASES;
mysql> CREATE DATABASE TESTDB2;
mysql> USE TESTDB2;
mysql> CREATE TABLE td_test2 (i INT) ENGINE=NDB;
// the table engine must be NDB (ndbcluster), matching the engine named in the configuration file
mysql> INSERT INTO td_test2 () VALUES (1);
mysql> INSERT INTO td_test2 () VALUES (152);
mysql> SELECT * FROM td_test2;
Step 2:
Go to SQL node 2 (192.168.15.233) and view the data:
mysql> USE TESTDB2;
Database changed
mysql> SELECT * FROM td_test2;
+------+
| i    |
+------+
|  152 |
|    1 |
+------+
2 rows in set (0.01 sec)
To check whether the table's engine is NDB:
mysql> SHOW CREATE TABLE td_test2;
Step 3:
Reverse test: add data on SQL node 2 (192.168.15.233):
mysql> CREATE DATABASE BB;
mysql> USE BB;
mysql> CREATE TABLE td_test3 (i INT) ENGINE=NDB;
mysql> INSERT INTO td_test3 () VALUES (98);
mysql> SELECT * FROM td_test3;
View the data on SQL node 1 (192.168.15.232):
mysql> USE BB;
Database changed
mysql> SELECT * FROM td_test3;
+------+
| i    |
+------+
|   98 |
+------+
1 row in set (0.00 sec)
Shutdown cluster
Close the management node first, then close the SQL nodes and the data nodes.
Sequence of cluster startup operations
To start the cluster again, execute in the following order:
management node -> data nodes -> SQL nodes. Note: do not add the --initial parameter when starting the data nodes.
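Putting the order together, a minimal restart sketch built from the commands used earlier in this article (run each command on the server named in the comment):
# on the management node (192.168.15.231)
/usr/local/mysql/bin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini
# on each data node (192.168.15.234 and 192.168.15.235); no --initial on a normal restart
/usr/local/mysql/bin/ndbd
# on each SQL node (192.168.15.232 and 192.168.15.233)
service mysqld start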
Errors in installation and testing
Errors at startup
Error message:
Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
Workaround 1 (port occupied):
[root@localhost mysql]# netstat -anp | grep 3306
[root@localhost mysql]# kill -9 <process ID>
Workaround 2 (permissions issue):
[root@localhost mysql]# chown -R mysql:mysql /var/mysql    # fix ownership of the custom data/log directories
Three-Server Edition
I. Introduction
========
This document describes how to install a MySQL cluster made of 2 servers, such that MySQL keeps running if either server fails or goes down.
Attention!
Although this is a MySQL cluster based on 2 servers, an additional third server is required as the management node. This server can be shut down after the cluster has started, but shutting it down is not recommended. Although it is theoretically possible to build a MySQL cluster with only 2 servers, such a setup loses the point of clustering: once one server goes down, the cluster can no longer work properly. For this reason a third server is required to run as the management node.
Also, many readers may not have 3 physical servers available; you can experiment in VMware or another virtual machine instead.
The following is assumed about these 3 servers:
Server1: mysql1.vmtest.net 192.168.0.1
Server2: mysql2.vmtest.net 192.168.0.2
Server3: mysql3.vmtest.net 192.168.0.3
Server1 and Server2 act as the actual servers of the MySQL cluster. Server3, as the management node, needs only minor system adjustments and does not require MySQL to be installed; Server3 can be a lower-spec machine and can run other services at the same time.
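If DNS does not resolve the vmtest.net names (a likely assumption in a lab setup), map them in /etc/hosts on all three servers:
192.168.0.1 mysql1.vmtest.net mysql1
192.168.0.2 mysql2.vmtest.net mysql2
192.168.0.3 mysql3.vmtest.net mysql3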
II. Install MySQL on Server1 and Server2
=================================
Download mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz from http://www.mysql.com
Note: it must be the max version of MySQL; the standard version does not support cluster deployment!
The following steps must be performed on both Server1 and Server2:
# mv mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz /usr/local/
# cd /usr/local/
# groupadd mysql
# useradd -g mysql mysql
# tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# rm -f mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# mv mysql-max-4.1.9-pc-linux-gnu-i686 mysql
# cd mysql
# scripts/mysql_install_db --user=mysql
# chown -R root .
# chown -R mysql data
# chgrp -R mysql .
# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
# chmod +x /etc/rc.d/init.d/mysqld
# chkconfig --add mysqld
Do not start MySQL yet!
III. Install and configure the management node server (Server3)
=====================================
As the management node server, Server3 needs only the ndb_mgm and ndb_mgmd binaries:
Download mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz from http://www.mysql.com
# mkdir /usr/src/mysql-mgm
# cd /usr/src/mysql-mgm
# tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# rm mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# cd mysql-max-4.1.9-pc-linux-gnu-i686
# mv bin/ndb_mgm .
# mv bin/ndb_mgmd .
# chmod +x ndb_mg*
# mv ndb_mg* /usr/bin/
# cd
# rm -rf /usr/src/mysql-mgm
Now create the configuration file for the management node server:
# mkdir /var/lib/mysql-cluster
# cd /var/lib/mysql-cluster
# vi config.ini
Add the following content to config.ini:
[NDBD DEFAULT]
NoOfReplicas=2
[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]
# Management server
[NDB_MGMD]
HostName=192.168.0.3    # IP address of the management node server, Server3
# Storage engines
[NDBD]
HostName=192.168.0.1    # IP address of MySQL cluster Server1
DataDir=/var/lib/mysql-cluster
[NDBD]
HostName=192.168.0.2    # IP address of MySQL cluster Server2
DataDir=/var/lib/mysql-cluster
# The following two [MYSQLD] sections could name the Server1 and Server2 hostnames,
# but to make it faster to swap a server in the cluster, it is recommended to leave
# them blank; otherwise this configuration must be changed whenever a server is replaced.
[MYSQLD]
[MYSQLD]
After saving and exiting, start the management node server Server3:
# ndb_mgmd
Note that this only starts the management node service, not the management client, so you will see no output after it starts.
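A quick sanity check that ndb_mgmd is actually running, since it prints nothing (a sketch; 1186 is the default management port):
# ps aux | grep ndb_mgmd
# netstat -lntp | grep 1186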
IV. Configure the cluster servers and start MySQL
=============================
The following changes are required on both Server1 and Server2:
# vi /etc/my.cnf
[mysqld]
ndbcluster
ndb-connectstring=192.168.0.3    # Server3's IP address
[mysql_cluster]
ndb-connectstring=192.168.0.3    # Server3's IP address
After saving and exiting, create the data directory and start MySQL:
# mkdir /var/lib/mysql-cluster
# cd /var/lib/mysql-cluster
# /usr/local/mysql/bin/ndbd --initial
# /etc/rc.d/init.d/mysqld start
/usr/local/mysql/bin/ndbd can be added to /etc/rc.local so it starts at boot.
Note: the --initial parameter is needed only the first time you start ndbd, or after changing config.ini on Server3!
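A minimal sketch of the rc.local addition mentioned above:
# echo '/usr/local/mysql/bin/ndbd' >> /etc/rc.local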
V. Check the working status
================
Go back to the management node server Server3 and start the management client:
# /usr/bin/ndb_mgm
Type the show command to view the current status (an example status output follows):
[root@mysql3 root]# /usr/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.1  (Version: 4.1.9, Nodegroup: 0, Master)
id=3    @192.168.0.2  (Version: 4.1.9, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.3  (Version: 4.1.9)
[mysqld(API)]   2 node(s)
id=4   (Version: 4.1.9)
id=5   (Version: 4.1.9)
ndb_mgm>
If there are no problems, now test MySQL:
Note that this document does not set a root password for MySQL; it is recommended that you set the MySQL root password on Server1 and Server2 yourself.
On Server1:
# /usr/local/mysql/bin/mysql -u root -p
> USE test;
> CREATE TABLE ctest (i INT) ENGINE=NDBCLUSTER;
> INSERT INTO ctest () VALUES (1);
> SELECT * FROM ctest;
You should see 1 row returned (with the value 1).
If the above works, switch to Server2 and repeat the test. If that succeeds, insert from Server2 and go back to Server1 to check that it works in both directions.
If there are no problems, congratulations, success!
VI. Destructive Testing
==============
Unplug the network cable of Server1 or Server2 and check whether the other cluster server keeps working normally (you can test with a SELECT query). After the test, plug the cable back in.
If you have no access to the physical servers, meaning you cannot unplug a cable, you can test like this instead:
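For the SELECT test, a minimal check against the surviving server (assuming the ctest table created in the functional test above):
# /usr/local/mysql/bin/mysql -u root -p -e 'SELECT * FROM test.ctest;'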
On Server1 or Server2:
# ps aux | grep ndbd
You will see the ndbd process information:
root      5578  0.0  0.3   6220   1964 ?      S    03:14   0:00 ndbd
root      5579  0.0 20.4 492072 102828 ?      R    03:14   0:04 ndbd
root     23532  0.0  0.1   3680    684 pts/1  S    07:59   0:00 grep ndbd
Then kill an ndbd process, in order to "destroy" that MySQL cluster server:
# kill -9 5578 5579
Then run the SELECT query test on the other cluster server; executing the show command in the management client on the management node server will also show the state of the failed server.
Once the test is complete, you only need to restart the ndbd process on the failed server:
# ndbd
Attention! As noted earlier, do not add the --initial parameter this time!
At this point, the MySQL cluster configuration is complete!