MySQL Cluster: solving two shortcomings of a standalone MySQL server, the single point of failure and the lack of automatic data backup
The cluster structure is described below (the original post includes a structure diagram image, not reproduced here).
Six RHEL 6.4 virtual machines are used: sqlA and sqlB as SQL nodes, ndbA and ndbB as data nodes, and mgmd as the management node. These five nodes make up the MySQL Cluster; the sixth VM, 192.168.4.120, serves as a MySQL test client.
Role description:
Client: host that accesses the cluster
Management node (mgmd): manages all hosts in the cluster; loads the master configuration file, which assigns a role to every host in the cluster
SQL node (sql): accepts user connections and executes SQL statements; stores table structures (at least 2 nodes)
Data node (ndbd): stores the records in tables (at least 2 nodes)
MySQL Cluster provides three processes:
ndb_mgmd, the management process, runs on the management host and reads config.ini
mysqld, the database service, runs on the SQL nodes and reads /etc/my.cnf
ndbd, the data storage service, runs on the data nodes and reads /etc/my.cnf
1. Common configuration (keep RHEL's mysql-libs package)
a. Disable the firewall and SELinux, stop any running MySQL service, and remove the old configuration file /etc/my.cnf
b. Remove conflicting packages (on all nodes)
# rpm -qa | grep -i mysql        // list the original MySQL packages
# yum -y remove .....
c. Install the cluster software (on all nodes)
# tar -xf MySQL-Cluster-gpl-7.3.3-1.el6.x86_64.rpm-bundle.tar
# rpm -Uvh MySQL-Cluster-*.rpm   // default data path: /var/lib/mysql
# rpm -qa | grep -i mysql-cluster
d. Change the initial root password
# cat /root/.mysql_secret
# The random password set for the root user at Tue Dec 24 15:24:50 (local time): Z3QW2OWJ
# mysql -u root -pZ3QW2OWJ
mysql> SET PASSWORD FOR 'root'@'localhost'=PASSWORD('123');
2. Configure the management host (mgmd)
When the management process starts, it reads the configuration file /etc/my-cluster.ini to learn the role of every host in the cluster (this file must be written by hand).
# mkdir -p /var/log/mysql-cluster      // create the working directory
# vim /etc/my-cluster.ini
[ndbd default]                         // common settings for all data nodes
NoOfReplicas=2                         // number of data replicas (must evenly divide the number of data nodes)
DataMemory=80M                         // memory for table data (usually about 1/3 of physical memory)
IndexMemory=18M                        // memory for indexes
[ndb_mgmd]                             // the management host
NodeId=1                               // node number; must be unique in the cluster
HostName=192.168.4.100                 // management host IP
DataDir=/var/log/mysql-cluster         // path on the management host for cluster information; create it first
[ndbd]                                 // data node 1
NodeId=30
HostName=192.168.4.30                  // data node IP
DataDir=/var/lib/mysql-cluster/data    // path on the data node that stores table records
[ndbd]                                 // data node 2
NodeId=40
HostName=192.168.4.40
DataDir=/var/lib/mysql-cluster/data
[mysqld]                               // SQL node 1
NodeId=10
HostName=192.168.4.10                  // SQL node IP
[mysqld]                               // SQL node 2
NodeId=20
HostName=192.168.4.20
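Before starting the cluster, the layout of the file above can be sanity-checked with a short script. The sketch below is an illustration added by this write-up, not part of the original procedure: it counts the [ndbd], [mysqld] and [ndb_mgmd] sections in a my-cluster.ini-style text and checks that NoOfReplicas evenly divides the number of data nodes, as the configuration requires.

```python
import re

def check_cluster_ini(text):
    """Count node sections in a config.ini-style file and verify that
    NoOfReplicas evenly divides the number of [ndbd] data nodes."""
    sections = re.findall(r'^\s*\[([^\]]+)\]', text, re.MULTILINE)
    counts = {'ndbd': 0, 'mysqld': 0, 'ndb_mgmd': 0}
    for name in sections:
        key = name.strip().lower()
        if key in counts:          # "[ndbd default]" is deliberately not counted
            counts[key] += 1
    m = re.search(r'^\s*NoOfReplicas\s*=\s*(\d+)', text,
                  re.MULTILINE | re.IGNORECASE)
    replicas = int(m.group(1)) if m else 1
    ok = (counts['ndb_mgmd'] >= 1 and counts['ndbd'] >= 2 and
          counts['mysqld'] >= 2 and counts['ndbd'] % replicas == 0)
    return counts, replicas, ok

sample = """
[ndbd default]
NoOfReplicas=2
[ndb_mgmd]
NodeId=1
[ndbd]
NodeId=30
[ndbd]
NodeId=40
[mysqld]
NodeId=10
[mysqld]
NodeId=20
"""
counts, replicas, ok = check_cluster_ini(sample)
print(counts, replicas, ok)
```

With the six-host layout used in this article, the check reports one management node, two data nodes, two SQL nodes, and a replica count of 2.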
3. Configure the data nodes (ndbd; perform on .30 and .40)
1) Create the directory that will store table records
# mkdir -p /var/lib/mysql-cluster/data
2) Edit the configuration file /etc/my.cnf (same name as mysqld's main configuration file)
If an /etc/my.cnf already exists, rename it first:
# mv /etc/my.cnf /etc/my.cnf.bak
# vi /etc/my.cnf
[mysqld]                               // data node settings
datadir=/var/lib/mysql-cluster/data    // local directory for table records; must match the setting on the management host
ndb-connectstring=192.168.4.100        // management host IP
ndbcluster                             // enable the NDB storage engine
[mysql_cluster]                        // point to the management host
ndb-connectstring=192.168.4.100        // management host IP
4. Configure the SQL nodes (perform on .10 and .20)
1) Edit the configuration file /etc/my.cnf
# mv /etc/my.cnf /etc/my.cnf.bak
# vi /etc/my.cnf
[mysqld]                               // SQL node settings
ndbcluster                             // enable the NDB storage engine
default-storage-engine=ndbcluster      // make NDB the default engine for new tables
[mysql_cluster]                        // point to the management host
ndb-connectstring=192.168.4.100        // management host IP
5. Start the cluster (note the order)
* Correct start order: management node, then data nodes, then SQL nodes.
* Shutdown order: SQL nodes, then data nodes, then management node.
* Once the data nodes and SQL nodes are running normally, the management node can in theory be shut down (assuming no monitoring or other management tasks are needed).
1) Start the management process
# ndb_mgmd -f /etc/my-cluster.ini      // start the management process (add --nodaemon to keep it in the foreground while debugging)
# pgrep ndb_mgmd                       // check that the process is up; pkill -9 ndb_mgmd stops it
# ndb_mgm                              // enter the management console
ndb_mgm> show                          // view cluster information
# netstat -atunp | grep ndb
tcp  0  0 0.0.0.0:1186  0.0.0.0:*  LISTEN  4965/ndb_mgmd
Make it start at boot:
# vim /etc/rc.local
ndb_mgmd -f /etc/my-cluster.ini
A few points to note about starting the management node:
ndb_mgmd runs as a daemon by default (--daemon); while debugging, add the --nodaemon option to disable background mode;
after the first successful start, ndb_mgmd caches the cluster configuration and ignores the file given with -f on later starts, unless the --initial option is added (for example, after adding a new node to the cluster).
2) Start the data node processes
The service program on a MySQL Cluster data node is ndbd (single-threaded) or ndbmtd (multi-threaded). Use the --initial option only on the first start or when reinitializing a node, not on later starts.
# ndbd --initial                       // start the data node process (first start)
# pgrep ndbd                           // check the process; pkill -9 ndbd stops it
# vim /etc/rc.local                    // make it start at boot
ndbd
3) Start the database service on the SQL nodes
# pgrep mysqld
# pkill -9 mysqld
# service mysql start
# chkconfig mysql on
Check the SQL node's storage engines to confirm that ndbcluster is supported and is the default engine:
# mysql -u root -p123                  // log in with the local account
mysql> SHOW ENGINES\G
Engine: ndbcluster
Support: DEFAULT
View the cluster status from the management node:
# ndb_mgm                              // enter the cluster management console
ndb_mgm> show                          // if you see output like the following, the cluster was created successfully
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=30   @192.168.4.30  (mysql-5.6.14 ndb-7.3.3, starting, Nodegroup: 0)
id=40   @192.168.4.40  (mysql-5.6.14 ndb-7.3.3, Nodegroup: 0, *)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.4.100 (mysql-5.6.14 ndb-7.3.3)
[mysqld(API)]   2 node(s)
id=10   @192.168.4.10  (mysql-5.6.14 ndb-7.3.3)
id=20   @192.168.4.20  (mysql-5.6.14 ndb-7.3.3)
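The same status information printed by `ndb_mgm -e show` can be checked from a script, which is handy for simple monitoring. The helper below is a hypothetical sketch added by this write-up (not a tool from the original post): it scans the show output for node lines and returns the ids of any nodes reported as not connected.

```python
import re

def find_disconnected(show_output):
    """Scan `ndb_mgm -e show` output text and return the ids of nodes
    that are reported as not connected."""
    down = []
    for line in show_output.splitlines():
        # disconnected nodes look like:
        # id=30 (not connected, accepting connect from 192.168.4.30)
        m = re.match(r'\s*id=(\d+)\s+\(not connected', line, re.IGNORECASE)
        if m:
            down.append(int(m.group(1)))
    return down

sample = """\
[ndbd(NDB)]     2 node(s)
id=30 (not connected, accepting connect from 192.168.4.30)
id=40   @192.168.4.40  (mysql-5.6.14 ndb-7.3.3, Nodegroup: 0, *)
"""
print(find_disconnected(sample))   # [30]
```

In practice the input text would come from something like `subprocess.check_output(["ndb_mgm", "-e", "show"])` on the management host.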
6. Authorize a client user on the SQL nodes (run on both SQL nodes)
mysql> GRANT ALL ON *.* TO 'root'@'192.168.4.%' IDENTIFIED BY '123';
7. High-availability tests for MySQL Cluster (verifying the cluster)
** If the default engine was not set to ndbcluster, specify the storage engine explicitly at the end of each CREATE TABLE statement. **
** Failover between the two SQL nodes is not automatic; a tool such as MySQL-MMM can handle that. **
1) Data synchronization (automatic backup) test
From the client, log in to SQL node .10 (data on .10 and .20 stays synchronized):
# mysql -h192.168.4.10 -uroot -p123
mysql> CREATE DATABASE ndb;
mysql> USE ndb;
mysql> CREATE TABLE ndb01 (id INT);
mysql> INSERT INTO ndb01 VALUES (100);
mysql> SELECT * FROM ndb01;
+------+
| id   |
+------+
|  100 |
+------+
Then access .20 from the client and confirm the result (you can see the database, table, and records created on .10):
# mysql -h192.168.4.20 -uroot -p123
2) High-availability test (single point of failure: stopping one data node)
** The MySQL database remains available as long as at least one data node is up. **
Stop the ndbd process on data node .30:
# pkill -9 ndbd
# netstat -antup | grep ndbd           // confirm it is gone
ndb_mgm> show                          // viewing the cluster status from the management node now shows ndbA as disconnected
id=30 (not connected, accepting connect from 192.168.4.30)
Accessing sqlA or sqlB from the client shows that the database is still readable and writable.
Verify the failure-recovery process:
Restart the ndbd service on .30; after a moment, force-stop the ndbd service on .40. Then query the ndb01 table again through either SQL node: the results match what was inserted earlier.
This test shows that after the failed data node (.30) recovers, data is immediately synchronized from the healthy data node (.40), preserving data consistency.
3) High-availability test (stopping one SQL node)
This one is easy to understand: when the cluster has multiple SQL nodes, the table data stored in the cluster remains accessible as long as one SQL node is available.
For example, after stopping either sqlA or sqlB, the ndb library can still be accessed through the other one.
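Since failover between SQL nodes is not automatic (as noted above), client applications typically try each SQL node in turn. The sketch below is an illustration added by this write-up, not code from the original post; the connect function is injected so the retry logic can be demonstrated without a live cluster (in real use it would be a driver call such as pymysql.connect).

```python
def connect_any(hosts, connect, **kw):
    """Try each SQL node in order and return (host, connection) for the
    first one that accepts a connection; raise if all nodes are down."""
    last_error = None
    for host in hosts:
        try:
            return host, connect(host=host, **kw)
        except Exception as e:  # a real client would catch the driver's error class
            last_error = e
    raise ConnectionError("no SQL node reachable") from last_error

# Demonstration with a fake connect function standing in for a MySQL driver;
# here sqlA (192.168.4.10) is pretended to be down.
def fake_connect(host, **kw):
    if host == "192.168.4.10":
        raise OSError("connection refused")
    return "connection-to-" + host

host, conn = connect_any(["192.168.4.10", "192.168.4.20"], fake_connect)
print(host)   # 192.168.4.20
```

The same pattern covers the test above: if sqlA is stopped, the client falls through to sqlB and keeps working.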
Complete!
This article is from the "Dave - Technology Blog"; please keep this source: http://davewang.blog.51cto.com/6974997/1858594