Overview
MySQL Cluster is a highly available, scalable, high-performance, and highly redundant version of MySQL for distributed computing environments. It is designed to meet the toughest application requirements in many industries, where database operations often must run with 99.999% availability. MySQL Cluster lets you deploy an "in-memory" database cluster on a shared-nothing system: the nodes can run on inexpensive hardware with no special hardware or software requirements, and because each component has its own memory and disk, there is no single point of failure.
In practice, MySQL Cluster integrates an in-memory clustered storage engine called NDB with standard MySQL servers. It consists of a group of computers, each running one or more processes, which may include a MySQL server, a data node, a management server, and a proprietary data-access program.
MySQL Cluster can configure the NDB storage engine with a range of failover and load-balancing options, but this is easiest to do at the cluster level. The following is the MySQL Cluster structure diagram.
Structurally, a MySQL Cluster is composed of three types of nodes (computers or processes):
Management node: provides configuration, management, and arbitration services to the other nodes in the cluster. In theory, a single server is enough to provide this service.
Data node: the core of MySQL Cluster. Data nodes store the data and logs and provide the various data-management services. Two or more data nodes give the cluster high availability; note, however, that as the number of data nodes grows, the cluster's processing speed can slow down.
SQL node (API): provides access to the cluster's data for external applications. Adding API nodes increases the concurrent access speed and overall throughput of the cluster. SQL nodes can be deployed on a web application server, on dedicated servers, or on the same servers as the data nodes.
NDB engine
MySQL Cluster uses a dedicated, memory-based storage engine: the NDB engine. Its advantage is speed, since there is no disk I/O bottleneck; but because it is memory-based, the size of the database is limited by the total system memory, so servers running NDB need plenty of RAM, e.g. 4 GB, 8 GB, or even 16 GB. The NDB engine is distributed and can be configured across multiple servers to achieve data reliability and scalability; in theory, configuring just two NDB storage nodes is enough to make the whole database cluster redundant and eliminate the single point of failure.
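As a rough capacity sanity check: usable in-memory capacity in NDB is approximately DataMemory × (number of data nodes) ÷ NoOfReplicas. The sketch below plugs in the values used in the config.ini later in this article (80M DataMemory, two data nodes, two replicas):

```shell
# Rough NDB capacity estimate: DataMemory * data_nodes / NoOfReplicas.
# Values match the config.ini used later in this article.
data_memory_mb=80     # DataMemory per data node (MB)
data_nodes=2          # number of [ndbd] nodes
replicas=2            # NoOfReplicas
capacity_mb=$(( data_memory_mb * data_nodes / replicas ))
echo "approximate usable capacity: ${capacity_mb} MB"
```

With two replicas on two data nodes, every row is stored twice, so the tiny 80M test configuration here yields roughly 80 MB of usable capacity; real deployments size DataMemory accordingly.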
2.1 Defects
- Because it is memory-based, the size of the database is limited by the total memory of the cluster.
- Because it is memory-based, data may be lost after a power failure; this needs to be validated by testing.
- The nodes communicate, synchronize data, and run queries over the network, so performance depends on network speed and operations are comparatively slow.
2.2 Advantages
- The nodes can be distributed across different geographic locations, so this is also one way to implement a distributed database.
- Extensibility is very good: adding nodes expands the database cluster.
- Redundancy is very good: multiple nodes hold complete copies of the database, so the outage of any one node does not interrupt service.
- The cost of achieving high availability is low. Unlike traditional high-availability schemes, which require shared storage devices and proprietary software, NDB only needs enough memory.
This article builds a simplified MySQL Cluster system; all commands in the configuration steps are run as the root account. The cluster contains one management node, two data nodes, and two SQL nodes, each installed on its own virtual machine (five VMs in total). The VM names and IPs are as follows:
I. Common configuration
Perform the configuration steps in this section on each of the virtual machines.
1. Install virtual machines
Install the x86_64 version of CentOS 6.4 as the virtual machine operating system, use NAT networking, and also install VMware Tools; the detailed installation steps are not covered here.
2. Copy MySQL Cluster
Download MySQL Cluster:
Download the resulting archive to the /root/downloads directory of the virtual machine, and then run the following commands in the shell:
cd /root/downloads
tar -xvzf mysql-cluster-gpl-7.3.4-linux-glibc2.5-x86_64.tar.gz
mv mysql-cluster-gpl-7.3.4-linux-glibc2.5-x86_64 /usr/local/mysql
3. Disable security policies
Stop the iptables firewall (or open ports 1186 and 3306 on the firewall) by running the following commands in the shell:
service iptables stop
chkconfig iptables off
Then disable SELinux. Open its config file in an editor:
gedit /etc/selinux/config
Change the SELINUX entry in the config file to disabled; the modified config file looks like this:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
Finally reboot the system
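As an alternative to hand-editing, the same change can be made non-interactively with sed. The sketch below operates on a temporary copy of the file so it can be run safely anywhere; on a real host you would point it at /etc/selinux/config and still reboot afterwards:

```shell
# Demonstrate the edit on a temporary copy of the SELinux config.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
# Flip the SELINUX= line to disabled, leaving everything else untouched
# (the pattern requires '=' right after SELINUX, so SELINUXTYPE is not matched).
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
selinux_line=$(grep '^SELINUX=' "$cfg")
echo "$selinux_line"
rm -f "$cfg"
```

This is handy when preparing all five VMs from a single provisioning script instead of editing each one in gedit.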
II. Configure the management node (192.168.124.141)
1. Create the config.ini configuration file
Run the following commands in the shell:
mkdir /var/lib/mysql-cluster
cd /var/lib/mysql-cluster
gedit config.ini
Put the following contents in config.ini:
[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M

[ndb_mgmd]
NodeId=1
HostName=192.168.124.141
DataDir=/var/lib/mysql-cluster

[ndbd]
NodeId=2
HostName=192.168.124.142
DataDir=/usr/local/mysql/data

[ndbd]
NodeId=3
HostName=192.168.124.143
DataDir=/usr/local/mysql/data

[mysqld]
NodeId=4
HostName=192.168.124.144

[mysqld]
NodeId=5
HostName=192.168.124.145
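When scripting the setup across several machines, it can be convenient to generate config.ini rather than edit it by hand. A minimal sketch using a heredoc (same IPs and node IDs as above; written to a temporary path here so it can be run anywhere, whereas on the management node you would write /var/lib/mysql-cluster/config.ini):

```shell
# Generate the cluster config.ini from a heredoc.
out=$(mktemp)
cat > "$out" <<'EOF'
[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M

[ndb_mgmd]
NodeId=1
HostName=192.168.124.141
DataDir=/var/lib/mysql-cluster

[ndbd]
NodeId=2
HostName=192.168.124.142
DataDir=/usr/local/mysql/data

[ndbd]
NodeId=3
HostName=192.168.124.143
DataDir=/usr/local/mysql/data

[mysqld]
NodeId=4
HostName=192.168.124.144

[mysqld]
NodeId=5
HostName=192.168.124.145
EOF
# Quick sanity check: the file should define exactly two data-node sections.
ndbd_sections=$(grep -c '^\[ndbd\]$' "$out")
echo "wrote $out with $ndbd_sections ndbd sections"
rm -f "$out"
```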
2. Install the management node
Installing the management node does not require the mysqld binary, only the MySQL Cluster management server (ndb_mgmd) and the management client (ndb_mgm). Run the following commands in the shell:
cp /usr/local/mysql/bin/ndb_mgm* /usr/local/bin
cd /usr/local/bin
chmod +x ndb_mgm*
III. Configure the data nodes (192.168.124.142, 192.168.124.143)
1. Add the mysql group and user
Run the following commands in the shell:
groupadd mysql
useradd -g mysql mysql
2. Configure the my.cnf configuration file
Open the file in an editor:
gedit /etc/my.cnf
The contents of the configuration file my.cnf are as follows:
[mysqld]
basedir=/usr/local/mysql
datadir=/usr/local/mysql/data
socket=/usr/local/mysql/sock/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[mysql_cluster]
ndb-connectstring=192.168.124.141
3. Create a system database
Run the following commands in the shell:
cd /usr/local/mysql
mkdir sock
scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data
4. Set up the Data directory
Run the following command in the shell:
chown -R root .
chown -R mysql.mysql /usr/local/mysql/data
chown -R mysql.mysql /usr/local/mysql/sock
5. Configure MySQL Service
Run the following command in the shell:
cp support-files/mysql.server /etc/rc.d/init.d/
chmod +x /etc/rc.d/init.d/mysql.server
chkconfig --add mysql.server
IV. Configure the SQL nodes (192.168.124.144, 192.168.124.145)
1. Add the mysql group and user
Run the following commands in the shell:
groupadd mysql
useradd -g mysql mysql
2. Configure the my.cnf configuration file
Open the file in an editor:
gedit /etc/my.cnf
The contents of the configuration file my.cnf are as follows:
[client]
socket=/usr/local/mysql/sock/mysql.sock
[mysqld]
ndbcluster
datadir=/usr/local/mysql/data
socket=/usr/local/mysql/sock/mysql.sock
ndb-connectstring=192.168.124.141
[mysql_cluster]
ndb-connectstring=192.168.124.141
3. Create a system database
Run the following commands in the shell:
cd /usr/local/mysql
mkdir sock
scripts/mysql_install_db --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data
4. Set up the Data directory
Run the following command in the shell:
chown -R root .
chown -R mysql.mysql /usr/local/mysql/data
chown -R mysql.mysql /usr/local/mysql/sock
5. Configure MySQL Service
Run the following command in the shell:
cp support-files/mysql.server /etc/rc.d/init.d/
chmod +x /etc/rc.d/init.d/mysql.server
V. Starting the cluster environment
Note the start order: first the management node, then the data nodes, and finally the SQL nodes.
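The start order above can also be scripted. The sketch below is a dry run that only prints the commands it would issue; the ssh targets and paths are assumptions based on the layout used in this article:

```shell
# Print the cluster start commands in the required order:
# management node -> data nodes -> SQL nodes. (Dry run: nothing is executed.)
mgmt=192.168.124.141
data_nodes="192.168.124.142 192.168.124.143"
sql_nodes="192.168.124.144 192.168.124.145"

plan="ssh root@$mgmt ndb_mgmd -f /var/lib/mysql-cluster/config.ini"
for h in $data_nodes; do
  plan="$plan
ssh root@$h /usr/local/mysql/bin/ndbd"
done
for h in $sql_nodes; do
  plan="$plan
ssh root@$h /etc/rc.d/init.d/mysql.server start"
done
echo "$plan"
```

Dropping the echo-into-a-variable scaffolding and running the ssh commands directly would start the whole cluster in one pass, assuming root ssh access to each VM.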
1. Start Management node
Run the following command in the shell:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
You can also use the ndb_mgm management client to monitor the cluster:
ndb_mgm
2. Start Data node
For the first start, you need to add the --initial parameter to initialize the NDB node:
/usr/local/mysql/bin/ndbd --initial
This parameter must not be used on subsequent starts, because it makes ndbd erase all previously created data and log files and recover from scratch. If this is not the first start, execute:
/usr/local/mysql/bin/ndbd
3. Start SQL node
If the MySQL service is not already running, run the following command in the shell:
/etc/rc.d/init.d/mysql.server start
4. Verify the startup
On the management node, check that all nodes have started successfully (run show in the ndb_mgm client).
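One scriptable way to verify the startup is to count connected nodes in the output of show. The sample output below is illustrative of the format only (the version strings are assumptions, not captured from a real run); on a live cluster you would pipe `ndb_mgm -e show` into the same grep:

```shell
# Count connected nodes in ndb_mgm "show"-style output.
# A node that has connected shows an @ip; "not connected" lines do not.
sample='[ndbd(NDB)]     2 node(s)
id=2    @192.168.124.142  (mysql-5.6.15 ndb-7.3.4, Nodegroup: 0, *)
id=3    @192.168.124.143  (mysql-5.6.15 ndb-7.3.4, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.124.141  (mysql-5.6.15 ndb-7.3.4)

[mysqld(API)]   2 node(s)
id=4    @192.168.124.144  (mysql-5.6.15 ndb-7.3.4)
id=5    @192.168.124.145  (mysql-5.6.15 ndb-7.3.4)'
connected=$(printf '%s\n' "$sample" | grep -c '@192')
echo "connected nodes: $connected"
```

For the five-node cluster in this article, a fully started cluster should report five connected nodes.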
VI. Cluster testing
1. Test one
Create a database on one of the SQL nodes, then go to the other SQL node and check whether the data has been synchronized.
Execute on SQL Node 1 (192.168.124.144):
shell> /usr/local/mysql/bin/mysql -u root -p
mysql> SHOW DATABASES;
mysql> CREATE DATABASE AA;
mysql> USE AA;
mysql> CREATE TABLE ctest2 (i INT) ENGINE=NDB;  -- the table engine must be NDB, otherwise synchronization fails
mysql> INSERT INTO ctest2 () VALUES (1);
mysql> SELECT * FROM ctest2;
Then check on SQL node 2 whether the data has been synchronized.
The test shows that data created on one SQL node is synchronized to the other.
To check which engine a table uses:
mysql> SHOW CREATE TABLE table_name;
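The same test can also be run non-interactively with the mysql client's -e option. The sketch below is a dry run that only prints the two commands (the host IPs are the ones from this article; passwordless root is assumed purely to keep the one-liners short):

```shell
# Dry-run sketch of the replication test: create data via SQL node 1,
# read it back via SQL node 2. Printed, not executed.
sql1=192.168.124.144
sql2=192.168.124.145
create="mysql -h $sql1 -u root -e 'CREATE DATABASE AA; CREATE TABLE AA.ctest2 (i INT) ENGINE=NDB; INSERT INTO AA.ctest2 VALUES (1);'"
check="mysql -h $sql2 -u root -e 'SELECT * FROM AA.ctest2;'"
echo "$create"
echo "$check"
```

Running the printed commands against a live cluster should show the row inserted through node 1 when querying node 2.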
2. Test two
Shut down one data node, write data through a SQL node, then bring the stopped data node back up and check whether the data is synchronized. First stop data node 1, then add data from SQL node 2.
On SQL node 2 (192.168.124.145), perform the following:
mysql> CREATE DATABASE BB;
mysql> USE BB;
mysql> CREATE TABLE ctest3 (i INT) ENGINE=NDB;
mysql> USE AA;
mysql> INSERT INTO ctest2 () VALUES (3333);
Bring data node 1 back up by starting its services:
# /usr/local/mysql/bin/ndbd --initial
# service mysqld start
Then log in to view the data
# /usr/local/mysql/bin/mysql -u root -p
You can see that the data has been synchronized, which shows that synchronization works in both directions.
VII. Shutting down the cluster
1. To shut down the management node and the data nodes, you only need to execute the following on the management node (192.168.124.141):
ndb_mgm -e shutdown
Connected to Management Server at: localhost:1186
2 NDB Cluster node(s) have shutdown.
Disconnecting to allow Management Server to shutdown.
2. Then shut down the SQL nodes (192.168.124.144, 192.168.124.145); run the following on both:
shell> /etc/init.d/mysql.server stop
Shutting down MySQL... SUCCESS!
Note: to start the cluster again, follow the startup steps in Section V, but do not add the --initial parameter when starting the data nodes.
That completes the MySQL Cluster setup process. I hope it helps you build a MySQL cluster of your own.