1. Prepare three servers.
2. Install the Linux operating system on each of the three machines (Oracle Linux / RHEL 6.5 x86_64).
3. Assign each machine an IP address:
Management Node 192.168.1.110 (responsible for managing the entire cluster)
SQL Node 192.168.1.111 (responsible for manipulating the database)
SQL Node 192.168.1.112 (responsible for manipulating the database)
Data Node 192.168.1.111 (responsible for storing data)
Data Node 192.168.1.112 (responsible for storing data)
SQL nodes and data nodes can be on a single machine
4. Modify the /etc/hosts file on all three servers so that it contains the following:
127.0.0.1 localhost localhost.localdomain
192.168.1.110 node01
192.168.1.111 node02
192.168.1.112 node03
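After saving the file, it is worth quickly checking that name resolution works on every machine (a simple sanity check, not part of the original steps; node02 and node03 are the hostnames from the /etc/hosts entries above):
# ping -c 2 node02
# ping -c 2 node03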
5. Modify the /etc/sysconfig/network file on each of the three servers. After the change, the contents are as follows:
On node01 (192.168.1.110):
NETWORKING=yes
HOSTNAME=node01
GATEWAY=192.168.1.1 (the gateway address varies depending on your network)
On node02 (192.168.1.111):
NETWORKING=yes
HOSTNAME=node02
GATEWAY=192.168.1.1 (the gateway address varies depending on your network)
On node03 (192.168.1.112):
NETWORKING=yes
HOSTNAME=node03
GATEWAY=192.168.1.1 (the gateway address varies depending on your network)
6. Restart all three servers.
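Once each server is back up, the hostname change can be confirmed with a quick check (optional verification, not part of the original steps):
# hostname
It should print node01, node02, or node03 respectively.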
7. Download the NDB Cluster software: mysql-cluster-advanced-7.3.5-linux-glibc2.5-x86_64.tar.gz.
The cluster software is distributed in three different formats:
a. a binary tarball ending in .tar.gz
b. an RPM package
c. source files that are compiled and installed
Because the RPM package installs automatically, it is inconvenient for choosing the software installation location and the data storage location.
Therefore, the binary installation files ending in .tar.gz are usually chosen.
Note: mysql-cluster-advanced-7.3.5-linux-glibc2.5-x86_64.tar.gz contains both the MySQL database software and the NDB cluster software.
8. Copy the mysql-cluster-advanced-7.3.5-linux-glibc2.5-x86_64.tar.gz file to the /usr/local/ directory on each of the three servers.
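If the file was downloaded on only one machine, one way to distribute it is with scp (a sketch, assuming root SSH access between the machines; adjust the source path to wherever the tarball was downloaded):
# scp mysql-cluster-advanced-7.3.5-linux-glibc2.5-x86_64.tar.gz root@192.168.1.111:/usr/local/
# scp mysql-cluster-advanced-7.3.5-linux-glibc2.5-x86_64.tar.gz root@192.168.1.112:/usr/local/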
9. Create the mysql user:
# groupadd mysql
# useradd -r -g mysql mysql
(The mysql user is only used to install and configure the NDB cluster / MySQL software, not to log in to the system, so it is created with the -r parameter and no password is set for it.)
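To confirm the account was created as a system user, it can be checked with id (optional verification, not part of the original steps):
# id mysql
It should report a mysql uid and a mysql gid.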
10. Extract the mysql-cluster-advanced-7.3.5-linux-glibc2.5-x86_64.tar.gz file on each of the three servers:
# cd /usr/local
# tar zxvf mysql-cluster-advanced-7.3.5-linux-glibc2.5-x86_64.tar.gz
# ln -s mysql-cluster-advanced-7.3.5-linux-glibc2.5-x86_64 mysql
# chown -R mysql:mysql mysql-cluster-advanced-7.3.5-linux-glibc2.5-x86_64
# cd mysql
# chown -R mysql:mysql .
After these commands complete, the extracted software directory and the mysql symbolic link are owned by the mysql user.
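A quick way to confirm the ownership and the symbolic link (optional verification, not part of the original steps):
# ls -ld /usr/local/mysql /usr/local/mysql-cluster-advanced-7.3.5-linux-glibc2.5-x86_64
Both entries should be owned by mysql:mysql, with /usr/local/mysql pointing at the extracted directory.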
11. Install the MySQL database software on the two data nodes, 192.168.1.111 and 192.168.1.112.
Because this is a binary installation, the MySQL database software is installed under /usr/local/mysql,
but the database data can be stored elsewhere, for example under /u01/app/mysql/data/.
Note: the following commands are executed on both data node servers.
# mkdir -p /u01/app/mysql/data
# chown -R mysql:mysql /u01
# cd /usr/local/mysql
# scripts/mysql_install_db --basedir=/usr/local/mysql --datadir=/u01/app/mysql/data --user=mysql
After executing this command, the database data files (including the mysql, test, and performance_schema databases) are installed under the /u01/app/mysql/data directory.
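If you want to verify the result, listing the data directory should show the newly created system databases (optional check):
# ls /u01/app/mysql/data
The mysql, performance_schema and test directories should be present.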
# cp bin/ndbd /usr/local/bin/
# cp bin/ndbmtd /usr/local/bin/
Then copy the startup script for the database service to the /etc/rc.d/init.d/ directory:
# cp support-files/mysql.server /etc/rc.d/init.d/mysqld (renamed to mysqld)
After executing this command, you can use the service mysqld start|stop|restart command to manage the database service.
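If the database service should also start automatically at boot, the init script can be registered with chkconfig (an optional extra step, not required by this guide):
# chkconfig --add mysqld
# chkconfig mysqld on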
# cp my.cnf /etc/my.cnf
# vi /etc/my.cnf
[mysqld]
ndbcluster
basedir=/usr/local/mysql
datadir=/u01/app/mysql/data
port=3306
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
[mysql_cluster]
ndb-connectstring=192.168.1.110
Note again: the above commands are executed on both data node servers.
At this point, the installation and configuration of the data nodes and SQL nodes is finished. If the data nodes and SQL nodes were separated onto different machines, the above commands would have to be executed on each data node and each SQL node.
12. Install and configure the management node.
Since step 10 already uncompressed the mysql-cluster-advanced-7.3.5-linux-glibc2.5-x86_64.tar.gz installation package into the /usr/local directory and created the soft link, those tasks do not need to be repeated; continue with the following configuration.
# cd /usr/local/mysql
# mkdir mysql-cluster (create the mysql-cluster directory, needed later to start the cluster management service)
# cp bin/ndb_mgm* /usr/local/bin/
# cd /var/lib
# mkdir mysql-cluster
# cd mysql-cluster
# vi config.ini (this is the cluster configuration file on the management node, and it is very important)
[ndbd default]
NoOfReplicas=1    # Number of replicas
DataMemory=80M    # How much memory to allocate for data storage
IndexMemory=18M   # How much memory to allocate for index storage
# For DataMemory and IndexMemory, we have used the
# default values. Since the "world" database takes up
# only about 500KB, this should be more than enough for
# this example cluster setup.
[tcp default]
# This is the default; however, you can use any
# port that is free for all the hosts in the cluster.
# Note: it is recommended that you do not specify the port
# number at all and simply allow the default value to be used
# instead.
[ndb_mgmd]
HostName=192.168.1.110            # Hostname or IP address of the management node
DataDir=/var/lib/mysql-cluster    # Directory for the management node's log files
NodeId=1
[ndbd]
HostName=192.168.1.111            # Hostname or IP address
DataDir=/u01/app/mysql/data       # Directory for this data node's data files
NodeId=2
[ndbd]
HostName=192.168.1.112            # Hostname or IP address
DataDir=/u01/app/mysql/data       # Directory for this data node's data files
NodeId=3
[mysqld]
HostName=192.168.1.111            # Hostname or IP address
NodeId=4                          # (additional mysqld connections can be
                                  # specified for this node for various
                                  # purposes such as running ndb_restore)
[mysqld]
HostName=192.168.1.112            # Hostname or IP address
NodeId=5                          # (additional mysqld connections can be
                                  # specified for this node for various
                                  # purposes such as running ndb_restore)
Configuration description:
The [ndbd default] part is a common section that applies to every data node and only needs to be configured once.
NoOfReplicas=1: how many copies of the data are kept (data nodes mirror each other).
[tcp default]: which port each data node uses to communicate with the management node.
In older versions of the NDB cluster software this place was typically configured with portnumber=2202,
but the new version of the NDB software does not need it to be configured here, and MySQL officially strongly recommends that you do not configure it.
[ndb_mgmd]: the management node configuration section (usually there is only one).
Note that NodeId=1 indicates that the management node has node ID 1; if it is not specified, an error occurs when the cluster is started.
HostName=192.168.1.110 indicates the IP address of the management node.
DataDir=/var/lib/mysql-cluster indicates where the cluster management logs are stored.
[ndbd]: the data node configuration section; configure one [ndbd] section for each data node.
HostName=192.168.1.111 indicates the IP address of the data node.
DataDir=/u01/app/mysql/data indicates where the database files on the data node are stored.
NodeId=2 indicates the node ID of the data node within the whole cluster (very important).
[mysqld]: the SQL node configuration section; configure one [mysqld] section for each SQL node.
13. With all of the above configured, the cluster can now be started on the management node.
Note: you do not have to run the /usr/local/mysql/scripts/mysql_install_db --basedir= --datadir= --user= command on the management node
to install a database.
# ndb_mgmd -f /var/lib/mysql-cluster/config.ini (this is how it is done the first time; if a new data node is added later,
the --initial parameter must be included when executing this command, otherwise the newly added node
cannot be recognized)
(# ndb_mgmd -f /var/lib/mysql-cluster/config.ini --initial)
# ndb_mgm (view the cluster status after startup to see whether the cluster started successfully; if you see the following, the cluster has been successfully configured and started)
[root@mysql01 mysql-cluster]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2 (not connected, accepting connect from 192.168.1.111)
id=3 (not connected, accepting connect from 192.168.1.112)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.1.110  (mysql-5.6.17 ndb-7.3.5)
[mysqld(API)]   2 node(s)
id=4 (not connected, accepting connect from 192.168.1.111)
id=5 (not connected, accepting connect from 192.168.1.112)
You can now see that the cluster contains two data nodes, one management node, and two SQL nodes, but neither the data nodes nor the SQL nodes have been started yet,
so their connection status is shown as not connected.
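As a side note, the same status can also be obtained non-interactively with ndb_mgm's -e option, which is convenient for scripts (an optional alternative to the interactive client shown above):
# ndb_mgm -e show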
14. Log in to the two data nodes and execute the following command to start each data node:
# ndbd --initial (the first time you start, you need to add --initial to initialize the data node; you do not need this parameter on subsequent starts)
[root@mysql02 support-files]# ndbd --initial
2014-06-12 05:42:04 [ndbd] INFO -- Angel connected to '192.168.1.110:1186'
2014-06-12 05:42:04 [ndbd] INFO -- Angel allocated nodeid: 2
[root@mysql03 bin]# ndbd --initial
2014-06-12 05:41:38 [ndbd] INFO -- Angel connected to '192.168.1.110:1186'
2014-06-12 05:41:38 [ndbd] INFO -- Angel allocated nodeid: 3
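As a side note, step 11 also copied the multi-threaded data node daemon ndbmtd to /usr/local/bin. On multi-core data node servers it can be started the same way in place of ndbd (an alternative; the outputs above show the single-threaded ndbd):
# ndbmtd --initial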
15. Then log in to the management node and view the current status of the whole cluster:
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.1.111  (mysql-5.6.17 ndb-7.3.5, Nodegroup: 0, *)
id=3    @192.168.1.112  (mysql-5.6.17 ndb-7.3.5, Nodegroup: 1)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.1.110  (mysql-5.6.17 ndb-7.3.5)
[mysqld(API)]   2 node(s)
id=4 (not connected, accepting connect from 192.168.1.111)
id=5 (not connected, accepting connect from 192.168.1.112)
You can now see that the two data nodes are connected, which means the two data nodes have started successfully.
16. Log in to the two SQL nodes and start each SQL node:
[root@mysql02 ~]# cd /usr/local/mysql/bin
[root@mysql02 bin]# ./mysqld_safe --user=mysql
140612 05:51:00 mysqld_safe Logging to '/u01/app/mysql/data/mysql02.err'.
140612 05:51:00 mysqld_safe Starting mysqld daemon with databases from /u01/app/mysql/data
[root@mysql03 ~]# cd /usr/local/mysql/bin
[root@mysql03 bin]# ./mysqld_safe --user=mysql
140612 05:52:07 mysqld_safe Logging to '/u01/app/mysql/data/mysql03.err'.
140612 05:52:07 mysqld_safe Starting mysqld daemon with databases from /u01/app/mysql/data
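Note that mysqld_safe started this way stays attached to the terminal. If you prefer, either run it with a trailing & to put it in the background, or use the init script installed in step 11 to start the SQL node (both achieve the same result):
# ./mysqld_safe --user=mysql &
# service mysqld start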
17. Go back to the management node and view the status of the whole cluster:
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.1.111  (mysql-5.6.17 ndb-7.3.5, Nodegroup: 0, *)
id=3    @192.168.1.112  (mysql-5.6.17 ndb-7.3.5, Nodegroup: 1)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.1.110  (mysql-5.6.17 ndb-7.3.5)
[mysqld(API)]   2 node(s)
id=4    @192.168.1.111  (mysql-5.6.17 ndb-7.3.5)
id=5    @192.168.1.112  (mysql-5.6.17 ndb-7.3.5)
The entire cluster is now built and running successfully.
18. Safely shut down the entire cluster.
First log in to the two SQL nodes and execute the service mysqld stop command to shut down each SQL node:
[root@mysql02 ~]# service mysqld stop
Shutting down MySQL... SUCCESS!
[root@mysql03 ~]# service mysqld stop
Shutting down MySQL... SUCCESS!
Then log in to the management node and execute the shutdown command to shut down the entire cluster.
[root@mysql01 mysql-cluster]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2 (not connected, accepting connect from 192.168.1.111)
id=3 (not connected, accepting connect from 192.168.1.112)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.1.110  (mysql-5.6.17 ndb-7.3.5)
[mysqld(API)]   2 node(s)
id=4 (not connected, accepting connect from 192.168.1.111)
id=5 (not connected, accepting connect from 192.168.1.112)
ndb_mgm> shutdown
1 NDB Cluster node(s) have shutdown.
Disconnecting to allow Management Server to shutdown.
ndb_mgm> exit
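To bring the cluster back up later, the components are started in the same order as during the installation: the management node first, then the data nodes, then the SQL nodes (a summary of the commands already shown above; --initial is not needed for a normal restart):
# ndb_mgmd -f /var/lib/mysql-cluster/config.ini (on the management node)
# ndbd (on each data node)
# service mysqld start (on each SQL node)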
19. Test.
Log in from one SQL node and create a database and a table for a simple test:
mysql> create database zxztest;
mysql> use zxztest;
Database changed
mysql> create table test1 (id int, name varchar(20)) engine=ndb;
mysql> insert into test1 values (1, 'linuxidc');
mysql> select * from test1;
+------+----------+
| id   | name     |
+------+----------+
|    1 | linuxidc |
+------+----------+
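To confirm that the data is really shared through the NDB engine, the same table can be queried from the other SQL node, 192.168.1.112 (a quick cross-check; it assumes the fresh installation still allows a passwordless root login):
# /usr/local/mysql/bin/mysql -u root
mysql> select * from zxztest.test1;
The row inserted on the first SQL node should be returned.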