Building a MySQL Cluster Architecture


Lab requirements: Deploy a MySQL cluster to eliminate the database as a single point of failure.


Plan: Prepare five servers: mgmd (192.168.100.1) as the management node, sqla (192.168.100.2) and sqlb (192.168.100.3) as SQL nodes, and ndba (192.168.100.4) and ndbb (192.168.100.5) as data nodes. These five nodes together form the MySQL cluster.


Implementation process:

One Common configuration (all nodes)

1. Install MySQL cluster software on all nodes

1.1 Uninstall conflicting packages on all nodes

The official MySQL Cluster packages already include the database server/client programs, so they can be used directly in place of the regular MySQL server/client packages. If the regular mysql, mysql-server, or mysql-client packages are installed, uninstall them first (skip this step if they are not installed):

# service mysql stop

# chkconfig mysql off

# rpm -e --nodeps mysql-server mysql-client mysql-shared mysql-embedded mysql-test mysql-devel


1.2 Cleaning up residual configuration files

# mv /etc/my.cnf /etc/my.cnf.bak

# useradd mysql


1.3 Install the MySQL Cluster packages on all nodes

# tar xvf mysql-cluster-gpl-7.3.3-1.el6.x86_64.rpm-bundle.tar

mysql-cluster-shared-gpl-7.3.3-1.el6.x86_64.rpm

mysql-cluster-shared-compat-gpl-7.3.3-1.el6.x86_64.rpm

mysql-cluster-server-gpl-7.3.3-1.el6.x86_64.rpm

mysql-cluster-client-gpl-7.3.3-1.el6.x86_64.rpm

mysql-cluster-test-gpl-7.3.3-1.el6.x86_64.rpm

mysql-cluster-devel-gpl-7.3.3-1.el6.x86_64.rpm

mysql-cluster-embedded-gpl-7.3.3-1.el6.x86_64.rpm

# rpm -Uvh mysql-cluster-*.rpm


2. On the SQL node servers (sqla, sqlb), set the root password of the MySQL database
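One way to do this is with mysqladmin. The password '123456' below matches the GRANT statement used later in this article; this sketch assumes the initial root password is empty after installation (depending on the RPM version, a random initial password may instead be stored in /root/.mysql_secret):

```shell
# On each SQL node (sqla, sqlb): set the MySQL root password.
# Assumes the initial root password is empty on a fresh install.
mysqladmin -u root password '123456'

# Verify the new password with a simple query:
mysql -u root -p123456 -e "SELECT VERSION();"
```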


3. On the SQL nodes, add an authorized database user for client access

mysql> GRANT ALL ON *.* TO 'root'@'192.168.100.%' IDENTIFIED BY '123456';


Two Configure the management node mgmd (192.168.100.1)

1. Create a working folder

# mkdir -p /var/log/mysql-cluster


2. Create a configuration file

[ndbd default]: default settings for all data nodes.

[ndbd]: settings for one data node.

[ndb_mgmd default]: default settings for all management nodes.

[ndb_mgmd]: settings for one management node.

[mysqld default]: default settings for all SQL nodes.

[mysqld]: settings for one SQL node.

# vim /etc/config.ini

[ndbd default]

NoOfReplicas=2                    # keep 2 replicas of the data

DataMemory=80M                    # data cache size

IndexMemory=18M                   # index cache size

[ndb_mgmd]

NodeId=1                          # ID of the management node

HostName=192.168.100.1            # address of the management node

DataDir=/var/log/mysql-cluster    # working directory of the management node

[mysqld]

NodeId=2                          # ID of the 1st SQL node

HostName=192.168.100.2            # address of the 1st SQL node

[mysqld]

NodeId=3                          # ID of the 2nd SQL node

HostName=192.168.100.3            # address of the 2nd SQL node

[ndbd]

NodeId=4                          # ID of the 1st data node

HostName=192.168.100.4            # address of the 1st data node

DataDir=/mysql-cluster/data       # working directory of the 1st data node; must be created on that node

[ndbd]

NodeId=5                          # ID of the 2nd data node

HostName=192.168.100.5            # address of the 2nd data node

DataDir=/mysql-cluster/data       # working directory of the 2nd data node; must be created on that node


Three Configure the data nodes ndba (192.168.100.4) and ndbb (192.168.100.5)

1. Create a working folder

# mkdir -p /mysql-cluster/data


2. Create a configuration file

# vim /etc/my.cnf

[mysqld]

datadir=/mysql-cluster/data        # data storage directory

ndb-connectstring=192.168.100.1    # IP address of the management server to connect to

ndbcluster                         # use the NDB storage engine

[mysql_cluster]                    # cluster connection section

ndb-connectstring=192.168.100.1


Four Configure the SQL nodes sqla (192.168.100.2) and sqlb (192.168.100.3)

In a MySQL Cluster environment, tables that do not use the ndbcluster engine (but another engine such as InnoDB or MyISAM) are not synchronized to the other nodes when updated.

# vim /etc/my.cnf

[mysqld]

ndbcluster                         # use the NDB storage engine

default-storage-engine=ndbcluster  # set the default storage engine

[mysql_cluster]                    # cluster connection section

ndb-connectstring=192.168.100.1


Five Start the MySQL cluster

The correct start order is: management node, then data nodes, then SQL nodes.

The shutdown order is the reverse: SQL nodes, then data nodes, then the management node.

In theory, once the data nodes and SQL nodes are running normally, the management node can even be shut down (at the cost of monitoring and other management tasks).
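A graceful shutdown following this order can be sketched as below. The SHUTDOWN command issued through ndb_mgm stops all data nodes and the management server in one step:

```shell
# On each SQL node (sqla, sqlb): stop mysqld first
service mysql stop

# Then, on the management node (mgmd): SHUTDOWN stops the
# data nodes and the management server together
ndb_mgm -e shutdown
```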


1. Start the management node mgmd

# ndb_mgmd -f /etc/config.ini    # start the management node; -f specifies the cluster configuration file

There are a few points to note about the startup of the management node:

ndb_mgmd runs in the background (--daemon) by default; add the --nodaemon option during debugging to disable background mode.

After a successful first start, ndb_mgmd automatically saves the cluster configuration; on later starts, the file specified with -f is ignored unless the --initial option is added (for example, when adding new nodes to the cluster).

If you want ndb_mgmd to run automatically at every boot, append the startup command to the /etc/rc.local configuration file, for example:

# vim /etc/rc.local

....

ndb_mgmd -f /etc/config.ini


When startup completes, check the listening status:

# netstat -tuanlp | grep ndb


Confirm the automatically saved cluster configuration data:

# ls -lh /usr/mysql-cluster/ndb_1_config.bin.1

-rw-r--r--. 1 root root 4.1K Dec 24 17:52 /usr/mysql-cluster/ndb_1_config.bin.1


View the related log files:

# ls /var/log/mysql-cluster/

ndb_1_cluster.log  ndb_1_out.log  ndb_1.pid


2. Start the data nodes ndba and ndbb

The service program for a MySQL Cluster data node is ndbd (single-threaded) or ndbmtd (multi-threaded). Use the --initial option on the first start or when reinitializing; omit it afterwards.

# ndbd --initial    # start the data node

On the ndba and ndbb servers, modify the /etc/rc.local configuration file so that the data node service starts automatically at every boot:

# vim /etc/rc.local

....

ndbd


3. Start the SQL nodes sqla and sqlb

For a MySQL Cluster SQL node, the service program is mysqld; restart it normally through the mysql init script:

# service mysql restart

# chkconfig mysql on


4. View the cluster status on the management node mgmd

Running ndb_mgm directly enters the cluster management environment:

# ndb_mgm

ndb_mgm>

In the ndb_mgm> environment, execute SHOW to see the status of the nodes and confirm that all five nodes in this example are connected:

ndb_mgm> SHOW


Six High availability tests for the MySQL cluster

1. Data Synchronization Test

From a client, access sqla and perform database and table write operations:

# mysql -u root -p -h 192.168.100.2

mysql> CREATE DATABASE mycluster;

mysql> CREATE TABLE mycluster.a (id int(3));

mysql> INSERT INTO mycluster.a VALUES (123),(456);    -- insert test records

Then access sqlb from the client to confirm the result (the database, table, and records created via sqla should all be visible):

# mysql -u root -p -h 192.168.100.3

mysql> SELECT * FROM mycluster.a;    -- the table records should be identical


2. High availability test (shut down one data node)

Shut down the ndbd process on data node ndba:

# killall -9 ndbd    # forcibly terminate the ndbd process

# netstat -tulanp | grep ndbd    # confirm it has stopped

Viewing the cluster status from the management node shows that ndba is disconnected:

ndb_mgm> SHOW

Accessing sqla or sqlb from a client, the database is still readable and writable. For example, add another record to the mycluster.a table:

mysql> INSERT INTO mycluster.a VALUES (789);    -- add a table record

This test shows that as long as one data node remains available, the MySQL database as a whole is still usable.

Next, verify the recovery process: restart the ndbd process on ndba, wait a few moments, then forcibly stop the ndbd process on ndbb. Access sqla or sqlb again and query the mycluster.a table; the result matches what was inserted earlier:

mysql> SELECT * FROM mycluster.a;    -- confirm the result

This shows that after the failed data node (ndba) recovers, it immediately synchronizes data from the healthy data node (ndbb), ensuring data consistency.


3. High availability test (shut down one SQL node)

When the MySQL cluster has multiple SQL nodes, the table data stored in the cluster remains accessible as long as at least one SQL node is available.

After shutting down either sqla or sqlb, the mycluster database can still be accessed through the other node.
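A minimal sketch of this test, reusing the addresses and the password from this article's GRANT statement:

```shell
# On sqla (192.168.100.2): stop the SQL node service
service mysql stop

# The data is still reachable through the other SQL node, sqlb:
mysql -u root -p123456 -h 192.168.100.3 -e "SELECT * FROM mycluster.a;"
```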


