MySQL Cluster
1. References
http://xuwensong.elastos.org/2014/01/13/ubuntu-%E4%B8%8Bmysql-cluster%E5%AE%89%E8%A3%85%E5%92%8C%E9%85%8D%E7%BD%AE/
2. Introduction
MySQL Cluster is a technology that lets you deploy an "in-memory" database cluster on a shared-nothing system. Because nothing is shared, the cluster can run on inexpensive commodity hardware with no special hardware or software requirements. In addition, each component has its own memory and disk, so there is no single point of failure.
MySQL Cluster combines the standard MySQL server with an "in-memory" clustered storage engine called NDB. The term NDB refers to the storage engine itself, while "MySQL Cluster" refers to the combination of MySQL and the NDB storage engine.
A MySQL cluster consists of a group of computers, each running one or more processes: MySQL servers, NDB cluster data nodes, management servers (MGM), and possibly dedicated data-access programs. The components of the cluster work together as follows.
Together, these processes make up a MySQL cluster. When data is saved in the NDB cluster engine, the tables are stored on the data nodes and can be accessed directly from every other MySQL server in the cluster. So if one application updates an employee's salary, all other MySQL servers that query this data see the change immediately.
In a MySQL cluster, the data stored on the data nodes is replicated, so the cluster can tolerate the failure of individual data nodes: apart from a small number of transactions that are aborted because their state is lost, there is no other impact. Since transactional applications are expected to handle transaction failures anyway, this is not a problem.
3. Basic concepts of MySQL Clusters
NDB is an "in-memory" storage engine offering high availability and good data consistency.
The NDB storage engine can be configured with a range of failover and load-balancing options, but it is easiest to start with the storage engine at the cluster level. MySQL Cluster's NDB storage engine contains a complete set of data, dependent only on other data within the cluster itself.
Next, we will introduce how to set up a MySQL cluster consisting of the NDB storage engine and some MySQL servers.
Currently, the cluster portion of MySQL Cluster can be configured independently of the MySQL servers. In a MySQL cluster, each part of the cluster is considered a node.
Note: in many contexts the term "node" means a computer, but when discussing MySQL Cluster it means a process. A single computer can run any number of nodes, so several nodes with different roles can be configured on the same machine; for the machine itself we use the term cluster host.
There are three types of cluster node, and a minimal MySQL Cluster configuration has at least three nodes, one of each type:
Management node (MGM): manages the other nodes in the MySQL cluster, for example by providing configuration data, starting and stopping nodes, and running backups. Because this node type manages the configuration of the other nodes, a management node should be started before any other node. The MGM server is started with the command ndb_mgmd.
Data nodes (NDBD): store the cluster's data. The number of data nodes is the number of replicas times the number of fragments (node groups); for example, with two replicas and two fragments per replica you need four data nodes. It is not necessary to have more than one replica. A data node is started with the command ndbd.
SQL nodes: access the cluster's data. In MySQL Cluster, a client node is a traditional MySQL server that uses the NDB Cluster storage engine. An SQL node is typically started with mysqld --ndbcluster, or with ndbcluster added to my.cnf and then started with mysqld.
The cluster configuration covers both the individual nodes and the communication links between them. MySQL Cluster is currently designed on the assumption that the storage nodes are homogeneous in processor capability, memory, and bandwidth. In addition, to provide a single point of configuration, all configuration data for the cluster is kept in one file.
The management server (MGM node) manages the cluster configuration file and the cluster log. Every node in the cluster retrieves its configuration data from the management server, and therefore needs a way to determine where the management server resides. When a noteworthy event occurs on a data node, the node reports it to the management server, which then writes it to the cluster log.
4. Implementation Environment
We will now build a MySQL Cluster system with five nodes on three machines (the SQL and data nodes share hosts), laid out as follows:
Node (purpose)           IP address (host name)   Node ID
Management node (MGM)    10.24.0.101 (db1)        nodeid = 1
Data node 1 (NDBD1)      10.24.6.4 (db4)          nodeid = 11
Data node 2 (NDBD2)      10.24.6.6 (db5)          nodeid = 12
SQL node 1 (SQL1)        10.24.6.4 (db2)          nodeid = 21
SQL node 2 (SQL2)        10.24.6.6 (db3)          nodeid = 22
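Optionally, if you want to refer to these hosts by the names in the table rather than by IP address, you can add entries to /etc/hosts on each machine. This is only a sketch based on the table above; the configuration later in this document uses raw IP addresses, so this step is not required, and the names are assumptions taken from the table:

# hypothetical /etc/hosts entries for the cluster hosts; adjust to your environment
10.24.0.101   db1
10.24.6.4     db2 db4
10.24.6.6     db3 db5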
5. Download the MySQL-Cluster installation package
The MySQL Cluster installation package can be downloaded from http://dev.mysql.com/downloads/cluster/
mysql-cluster-gpl-7.4.7-debian7-x86_64.deb
6. Install MySQL
6.1. Clear earlier MySQL traces
If mysql-server was installed previously, uninstall it before this experiment by running the following commands:
sudo apt-get autoremove --purge mysql-server
sudo apt-get remove mysql-server
sudo apt-get autoremove mysql-server
sudo apt-get remove mysql-common    (very important)
6.2. Install the deb File
sudo dpkg -i mysql-cluster-gpl-7.4.7-debian7-x86_64.deb
Installation directory: /opt/mysql/server-5.6
6.3. Storage node/SQL node Installation
The installation steps for SQL nodes and data nodes are basically the same. Perform the following steps as the system root user on each machine designated as a storage node or SQL node:
Create the mysql group and mysql user
Check /etc/passwd and /etc/group to see whether a mysql group and mysql user already exist on the system; some operating systems create them as part of the installation process. You can check with:
cat /etc/passwd
cat /etc/group
If they do not exist, create a new mysql group and add a mysql user to it:
groupadd mysql
useradd -g mysql mysql
6.4. Run the system database creation script
sudo /opt/mysql/server-5.6/scripts/mysql_install_db --user=mysql
If the script fails with a host name mismatch, the downloaded package may be the wrong build: check whether the OS is 32-bit or 64-bit and pick the matching version. If it cannot find default files, the earlier mysql-server was probably not fully removed; run the commands above to uninstall it completely.
On success, the script prints:
To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system
Please remember to set a password for the MySQL root USER!
To do so, start the server, then issue the following commands:
/opt/mysql/server-5.6/bin/mysqladmin -u root password 'new-password'
/opt/mysql/server-5.6/bin/mysqladmin -u root -h drbd01 password 'new-password'
Alternatively you can run:
/opt/mysql/server-5.6/bin/mysql_secure_installation
which will also give you the option of removing the test
databases and anonymous user created by default. This is
strongly recommended for production servers.
See the manual for more instructions.
You can start the MySQL daemon with:
cd /opt/mysql/server-5.6 ; /opt/mysql/server-5.6/bin/mysqld_safe &
You can test the MySQL daemon with mysql-test-run.pl
cd mysql-test ; perl mysql-test-run.pl
Support MySQL by buying support/licenses at http://shop.mysql.com
WARNING: Found existing config file /opt/mysql/server-5.6/my.cnf on the system.
Because this file might be in use, it was not replaced,
but was used in bootstrap (unless you used --defaults-file)
and when you later start the server.
The new default config file was created as /opt/mysql/server-5.6/my-new.cnf,
please compare it with your file and take the changes you need.
6.5. Set the necessary permissions for the MySQL server and data directory
chown -R root .
chown -R mysql data
6.6. Copy mysql.server
sudo cp /opt/mysql/server-5.6/support-files/mysql.server /etc/init.d/mysql
chmod +x /etc/init.d/mysql
6.7. Copy my.cnf
sudo cp /opt/mysql/server-5.6/my-new.cnf /etc/my.cnf
sudo vim /etc/my.cnf
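The document does not show what to add when editing /etc/my.cnf. The sketch below shows the options typically needed so that the SQL node enables the NDB engine and both the SQL node and the ndbd process on the same host can find the management node; the address is taken from the environment in section 4, so adjust it to your setup:

[mysqld]
# enable the NDB Cluster storage engine on this SQL node
ndbcluster
# where to reach the management node (address from section 4)
ndb-connectstring=10.24.0.101

[mysql_cluster]
# read by the ndbd process on data node hosts
ndb-connectstring=10.24.0.101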
6.8. Set the root password
sudo apt-get install mysql-client
mysqladmin -u root flush-privileges password "123456"
6.9. Management node Installation
For a management (MGM) node, you do not need to install the mysqld executable; only the binaries for the MGM server and client are needed, and they are included in the downloaded package. Assuming the download is stored in /var/tmp, perform the following steps as the system administrator to install ndb_mgmd and ndb_mgm on the cluster management node host.
ndb_mgmd: the NDB management server
ndb_mgm: the NDB management client
6.10. Install the deb File
sudo dpkg -i mysql-cluster-gpl-7.4.7-debian7-x86_64.deb
Installation directory: /opt/mysql/server-5.6
6.11. Create a management directory
sudo mkdir /usr/local/mysql/
6.12. Copy the NDB management programs
sudo cp /opt/mysql/server-5.6/bin/ndb_mgm* /usr/local/mysql/
6.13. Port
Note: the default port of the cluster management node is 1186, and the default port of the data nodes is 2202.
6.14. Configure the ndb management Node
cd /usr/local/mysql/
sudo vim config.ini:
# Options affecting ndbd processes on all data nodes:
[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M

# TCP/IP options:
[tcp default]
# PortNumber=2202

# Management process options:
[ndb_mgmd]
NodeId=1
HostName=10.24.0.101
DataDir=/usr/local/mysql

# Options for data node 1:
[ndbd]
NodeId=11
HostName=10.24.6.4
DataDir=/opt/mysql/server-5.6/data/

# Options for data node 2:
[ndbd]
NodeId=12
HostName=10.24.6.6
DataDir=/opt/mysql/server-5.6/data/

# SQL node options:
[mysqld]
NodeId=21
HostName=10.24.6.4

[mysqld]
NodeId=22
HostName=10.24.6.6
7. Start
Once configuration is complete, starting the cluster is straightforward. Each cluster node process must be started separately, on the host where it resides. Although the nodes can be started in any order, it is recommended to start the management node first, then the data nodes, and finally the SQL nodes.
7.1. Management node startup
Use the ndb_mgm command to open the management client; once connected, the show command displays the status of the nodes in the cluster.
Note: when starting the management server, you must use the -f or --config-file option to tell ndb_mgmd where to find the configuration file. The --initial option must be used on the first start, and again whenever the MGM node's configuration is changed.
sudo /usr/local/mysql/ndb_mgmd -f /usr/local/mysql/config.ini
View from the ndb_mgm client:
wiki@zoweewiki:/usr/local/mysql$ /usr/local/mysql/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=11   @10.24.6.4   (mysql-5.6.25, Nodegroup: 0, *)
id=12   @10.24.6.6   (mysql-5.6.25, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.24.0.101 (mysql-5.6.25 ndb-7.4.7)

[mysqld(API)]   2 node(s)
id=21   @10.24.6.4   (mysql-5.6.25 ndb-7.4.7)
id=22   @10.24.6.6   (mysql-5.6.25 ndb-7.4.7)
7.2. Data Node startup
On each data node host, run the following command to start the NDBD process for the first time:
sudo /opt/mysql/server-5.6/bin/ndbd --initial
Note that the --initial parameter should be used only the first time ndbd is started, or when restarting after a backup/restore or a configuration change. This is important because --initial makes the data node delete any files created by earlier ndbd instances for recovery, including the recovery log files.
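For later, routine restarts of a data node (for example after the crash test in section 8.2), omit --initial so the recovery files are kept:

sudo /opt/mysql/server-5.6/bin/ndbd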
7.3. MySQL node startup
sudo /etc/init.d/mysql restart
Startup log:
/opt/mysql/server-5.6/data/drbd02.err
151015 14:33:19 mysqld_safe Starting mysqld daemon with databases from /opt/mysql/server-5.6/data
14:33:22 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
14:33:22 0 [Note] /opt/mysql/server-5.6/bin/mysqld (mysqld 5.6.25-ndb-7.4.7-cluster-gpl-log) starting as process 15192 ...
14:33:23 15192 [Note] Plugin 'FEDERATED' is disabled.
14:33:23 15192 [Note] InnoDB: Using atomics to ref count buffer pool pages
14:33:23 15192 [Note] InnoDB: The InnoDB memory heap is disabled
14:33:23 15192 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
14:33:23 15192 [Note] InnoDB: Memory barrier is not used
14:33:23 15192 [Note] InnoDB: Compressed tables use zlib 1.2.3
14:33:23 15192 [Note] InnoDB: Using Linux native AIO
14:33:23 15192 [Note] InnoDB: Not using CPU crc32 instructions
14:33:23 15192 [Note] InnoDB: Initializing buffer pool, size = 128.0 M
14:33:23 15192 [Note] InnoDB: Completed initialization of buffer pool
14:33:24 15192 [Note] InnoDB: Highest supported file format is Barracuda.
14:33:24 15192 [Note] InnoDB: 128 rollback segment(s) are active.
14:33:24 15192 [Note] InnoDB: Waiting for purge to start
14:33:24 15192 [Note] InnoDB: 5.6.25 started; log sequence number 1626027
14:33:24 15192 [Note] NDB: Changed global value of binlog_format from STATEMENT to MIXED
14:33:24 15192 [Note] NDB: NodeID is 22, management server '10.24.0.101:1186'
14:33:25 15192 [Note] NDB[0]: NodeID: 22, all storage nodes connected
14:33:25 15192 [Warning] NDB: server id set to zero - changes logged to bin log with server id zero will be logged with another server id by slave mysqlds
14:33:25 15192 [Note] NDB Binlog: Starting...
14:33:25 15192 [Note] NDB Util: Starting...
14:33:25 15192 [Note] NDB Index Stat: Starting...
14:33:25 15192 [Note] NDB Index Stat: Wait for server start completed
14:33:25 15192 [Note] NDB Util: Wait for server start completed
14:33:25 15192 [Note] NDB Binlog: Started
14:33:25 15192 [Note] NDB Binlog: Setting up
14:33:25 15192 [Note] NDB Binlog: Created schema Ndb object, reference: 0x80040016, name: 'Ndb Binlog schema change monitoring'
14:33:25 15192 [Note] NDB Binlog: Created injector Ndb object, reference: 0x80050016, name: 'Ndb Binlog data change monitoring'
14:33:25 15192 [Note] NDB Binlog: Setup completed
14:33:25 15192 [Note] NDB Binlog: Wait for server start completed
14:33:25 15192 [Note] Server hostname (bind-address): '*'; port: 3306
14:33:25 15192 [Note] IPv6 is available.
14:33:25 15192 [Note]   - '::' resolves to '::';
14:33:25 15192 [Note] Server socket created on IP: '::'.
14:33:25 15192 [Note] Event Scheduler: Loaded 0 events
14:33:25 15192 [Note] /opt/mysql/server-5.6/bin/mysqld: ready for connections.
Version: '5.6.25-ndb-7.4.7-cluster-gpl-log'  socket: '/var/run/mysqld.sock'  port: 3306  MySQL Cluster Community Server (GPL)
14:33:25 15192 [Note] NDB Util: Wait for cluster to start
14:33:25 15192 [Note] NDB Util: Started
14:33:25 15192 [Note] NDB Binlog: Check for incidents
14:33:25 15192 [Note] NDB Binlog: Wait for cluster to start
14:33:25 15192 [Note] NDB Index Stat: Wait for cluster to start
14:33:25 15192 [Note] ndb_index_stat_proc: Created Ndb object, reference: 0x80070016, name: 'ndb Index Statistics monitoring'
14:33:25 15192 [Note] NDB Index Stat: Started
14:33:26 15192 [Note] NDB Binlog: discover table Event: REPL$mysql/ndb_schema
14:33:26 15192 [Note] NDB Binlog: logging ./mysql/ndb_schema (UPDATED, USE_WRITE)
14:33:26 15192 [Note] NDB Binlog: discover table Event: REPL$mysql/ndb_apply_status
14:33:26 15192 [Note] NDB Binlog: logging ./mysql/ndb_apply_status (UPDATED, USE_WRITE)
14:33:26 15192 [Note] NDB: Cleaning stray tables from database 'ndb_12_fs'
14:33:26 15192 [Note] NDB: Cleaning stray tables from database 'ndbinfo'
14:33:26 15192 [Note] NDB: Cleaning stray tables from database 'performance_schema'
14:33:26 15192 [Note] NDB: Cleaning stray tables from database 'test'
14:33:26 15192 [Note] NDB Binlog: Wait for first event
14:33:26 [NdbApi] INFO -- Flushing incomplete GCI:s < 4554/3
14:33:26 [NdbApi] INFO -- Flushing incomplete GCI:s < 4554/3
14:33:26 15192 [Note] NDB Binlog: starting log at epoch 4554/3
14:33:26 15192 [Note] NDB Binlog: Got first event
14:33:26 15192 [Note] NDB Binlog: ndb tables writable
14:33:26 15192 [Note] NDB Binlog: Startup and setup completed
14:33:26 15192 [Note] NDB Schema dist: Data node: 11 reports subscribe from node 21, subscriber bitmask 0200000
14:33:26 15192 [Note] NDB Schema dist: Data node: 12 reports subscribe from node 21, subscriber bitmask 0200000
14:34:03 15192 [Warning] IP address '10.24.6.170' could not be resolved: Name or service not known
14:34:56 15192 [Note] NDB Schema dist: Data node: 11 reports subscribe from node 21, subscriber bitmask 00
14:34:56 15192 [Note] NDB Schema dist: Data node: 12 reports subscribe from node 21, subscriber bitmask 00
14:35:03 15192 [Note] NDB Schema dist: Data node: 11 reports subscribe from node 21, subscriber bitmask 0200000
14:35:03 15192 [Note] NDB Schema dist: Data node: 12 reports subscribe from node 21, subscriber bitmask 0200000
14:41:04 15192 [Note] NDB Schema dist: Data node: 11 failed, subscriber bitmask 00
14:42:36 15192 [Note] NDB Schema dist: Data node: 12 reports subscribe from node 21, subscriber bitmask 00
8. Test
8.1. General test
To ensure that tables are replicated across the cluster, you must specify the ndbcluster engine (engine=ndb or engine=ndbcluster) when creating them.
Log on to mysql on 10.24.6.4, create a new database songzi, create a table test (id int, name char(10)) using the ndbcluster engine, and insert one row (0, 'songzi'). The operations and results on 10.24.6.4 are as follows:
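The original screenshots are not reproduced here; the statements described above would look roughly like this in the mysql client on 10.24.6.4:

mysql> CREATE DATABASE songzi;
mysql> USE songzi;
mysql> CREATE TABLE test (id INT, name CHAR(10)) ENGINE=NDBCLUSTER;  -- must use the NDB engine
mysql> INSERT INTO test VALUES (0, 'songzi');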
Log on to mysql on 10.24.6.6 and verify that the data has been synchronized: the new table and row are both present. The operations and results on 10.24.6.6 are as follows:
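On 10.24.6.6 the check is simply a query against the same table, which should return the row inserted on the other SQL node:

mysql> SELECT * FROM songzi.test;   -- should return the row (0, 'songzi') inserted on 10.24.6.4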
8.2. Simulate NDB node Crash
Terminate the ndbd process on 10.24.6.4. Use the following commands to find the process, kill it, and then confirm it has stopped:
ps -ef | grep ndbd
kill 24077
ps -ef | grep ndbd
On the ndb_mgm client on the management node (10.24.0.101), you can also see that the NDB node has stopped.
Log on to mysql on 10.24.6.4 and on 10.24.6.6; in both cases the data can still be queried.
This shows the test succeeded: when an NDB node crashes, the MySQL Cluster environment as a whole continues to serve normally.
8.3. Simulate SQL node Crash
To terminate the mysqld process on 10.24.6.4, run the following command:
killall mysqld
On the ndb_mgm client on the management node, you can also see that the SQL node on 10.24.6.4 has stopped.
Log on to mysql on 10.24.6.6 and check that the data still exists.
This shows the test succeeded: the MySQL Cluster environment continues to work when an SQL node crashes.
This completes the MySQL Cluster installation and configuration experiment. When you are done, you can shut down all the nodes in the cluster with shell> ndb_mgm -e shutdown or, from the management client, ndb_mgm> shutdown.
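As a convenience, the sketch below gives the full stop/start sequence, following the start order recommended in section 7 and the paths used earlier in this document:

# shut down the management node and all data nodes in one step
/usr/local/mysql/ndb_mgm -e shutdown
# the SQL nodes are not touched by the command above; stop each one separately
sudo /etc/init.d/mysql stop

# to bring the cluster back up: management node, then data nodes, then SQL nodes
sudo /usr/local/mysql/ndb_mgmd -f /usr/local/mysql/config.ini   # on 10.24.0.101
sudo /opt/mysql/server-5.6/bin/ndbd                             # on each data node (no --initial)
sudo /etc/init.d/mysql start                                    # on each SQL node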
9. Problems Encountered
10. Summary
- MySQL Cluster appears as a single, unified cluster whose data is shared by all SQL nodes.
- The same data can be accessed through multiple MySQL instances at the same time.
- Each value is stored in multiple replicas; data is not distributed by consistent hashing as in Redis.
- High concurrency, high availability, and high scalability.
- Shared-nothing architecture.
- Scales by adding data nodes: around 200 million NoSQL queries per second with 32 data nodes, and nearly 2.5 million SQL statements per second with 16 data nodes.
- For production, LVS + Keepalived + MySQL Cluster is a recommended combination. MySQL Cluster is a real-time, scalable, ACID-compliant, transactional in-memory database.