My personal notes on configuring a MySQL Cluster.
Installation preparation:
1. Required software.
Management node:
MySQL-Cluster-gpl-management-7.1.9a-1.rhel5.i386.rpm
MySQL-Cluster-gpl-tools-7.1.9a-1.rhel4.i386.rpm
SQL node:
MySQL-Cluster-gpl-server-7.1.9a-1.rhel5.i386.rpm
MySQL-Cluster-gpl-client-7.1.9a-1.rhel5.i386.rpm
Storage node:
MySQL-Cluster-gpl-storage-7.1.9a-1.rhel4.i386.rpm
2. Prerequisites
Disable the firewall on all cluster machines.
Edit the /etc/hosts file on every machine in the cluster so that it lists the IP address and hostname of every machine.
On every machine, create the mysql group: groupadd mysql
Then create the mysql user in that group: useradd -g mysql mysql
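As a sketch, the /etc/hosts entries might look like the following. The IP addresses are taken from the sample show output later in this article; the hostnames are made up for illustration:

```
192.168.41.240  mgmt1    # management node
192.168.41.52   ndb1     # storage node
192.168.41.42   ndb2     # storage node
192.168.41.51   sql1     # SQL node
192.168.41.55   sql2     # SQL node
```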
1. Installing and configuring the management node
1. Installation preparation
Create the storage node data directory: mkdir -p /var/lib/mysql/data
Create the management node directory: mkdir /var/lib/mysql-cluster
Grant permissions: chmod -R 1777 /var/lib/mysql
chmod -R 1777 /var/lib/mysql-cluster
2. Install management node software
rpm -ivh MySQL-Cluster-gpl-management-7.1.9a-1.rhel5.i386.rpm
rpm -ivh MySQL-Cluster-gpl-tools-7.1.9a-1.rhel4.i386.rpm
3. Configure the management node
Go to the /var/lib/mysql-cluster directory created above and create config.ini with the following content:
[ndbd default]
NoOfReplicas=2      # number of replicas; the two data nodes keep synchronized copies of the data
DataMemory=200M
IndexMemory=100M
[tcp default]
PortNumber=2202     # port used for connections from the storage and SQL nodes
[ndb_mgmd]          # management node
Id=11
HostName=<IP address of the management node>
DataDir=/var/lib/mysql-cluster
[ndbd]              # data node
Id=21
HostName=<IP address of this data node>
DataDir=/var/lib/mysql/data
[ndbd]              # data node
Id=22
HostName=<IP address of this data node>
DataDir=/var/lib/mysql/data
There can be more data nodes; just add another [ndbd] section here for each one.
[mysqld]            # SQL node
Id=31
HostName=<IP address of this SQL node>
[mysqld]            # SQL node
Id=32
HostName=<IP address of this SQL node>
There can be more SQL nodes; just add another [mysqld] section here for each one.
4. Start the management node
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
5. Management tool
Run ndb_mgm to enter the management console; type show to display the connection status of all cluster nodes.
2. Installing and configuring the ndbd (storage) nodes
1. Install software
rpm -ivh MySQL-Cluster-gpl-storage-7.1.9a-1.rhel4.i386.rpm
2. Configure the ndbd Node
Add the following to the /etc/my.cnf file:
vi /etc/my.cnf
[mysqld]
max_connections=100      # maximum number of connections
slow_query_log=/var/lib/mysql-cluster/slow_query.log
long_query_time=1
datadir=/var/lib/mysql-cluster
ndbcluster
ndb-connectstring="nodeid=<id>,<ip>"   # <id> is this storage node's Id from config.ini on the management node; <ip> is the management node's IP address
[mysql_cluster]
ndb-connectstring="nodeid=<id>,<ip>"   # same as above
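The connect-string format can be sketched in shell; the node id and management-node IP below are assumptions taken from the sample configuration in this article, not required values:

```shell
# Build the value for ndb-connectstring: "nodeid=<id>,<management-node ip>".
# NODE_ID is the Id assigned to this node in config.ini on the management node;
# MGMD_IP is the management node's address.
NODE_ID=21
MGMD_IP=192.168.41.240
CONNECTSTRING="nodeid=${NODE_ID},${MGMD_IP}"
echo "ndb-connectstring=\"${CONNECTSTRING}\""
```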
3. Start the ndbd node
The first time, start it with: ndbd --initial
Afterwards, start it with: ndbd
3. Installing and configuring the SQL nodes
1. Install software
rpm -ivh MySQL-Cluster-gpl-server-7.1.9a-1.rhel5.i386.rpm
rpm -ivh MySQL-Cluster-gpl-client-7.1.9a-1.rhel5.i386.rpm
2. Configure SQL nodes
Run the following command:
cp my-medium.cnf /etc/my.cnf   (use find / -name my-medium.cnf to locate the file first, then copy it)
1> my-small.cnf is designed for small databases; it should not be used for databases that hold serious projects.
2> my-medium.cnf is designed for medium-sized databases. On an enterprise RHEL system this assumes considerably more physical RAM than the operating system's minimum requirement (256 MB), so if that much RAM is available you can naturally still run other services on the same machine.
3> my-large.cnf is designed for a computer dedicated to SQL databases. Because it lets the database use a large amount of memory, the machine needs at least 1 GB of RAM so it can handle both the operating system and the database.
4> my-huge.cnf is designed for enterprise databases; such databases need a dedicated server with 1 GB or more of RAM.
vi /etc/my.cnf
Add:
[mysqld]
basedir=/usr
datadir=/usr/local/mysql/data/
user=mysql
socket=/tmp/mysql.sock
ndbcluster
ndb-connectstring="nodeid=<id>,<ip>"   # <id> is this SQL node's Id from config.ini on the management node; <ip> is the management node's IP address
[mysql_cluster]
ndb-connectstring="nodeid=<id>,<ip>"   # same as above
3. Grant permissions
Create the /usr/local/mysql/data directory.
chmod 775 /usr/local/mysql/data
Grant database access permissions:
mysql_install_db --datadir=/usr/local/mysql/data   (use find / -name mysql_install_db to locate the script first)
mysql_install_db reinitializes the mysql system databases.
4. Start mysqld
Start it with the mysqld_safe command.
4. Testing the cluster
1. On the management node, run ndb_mgm to enter the management console and type show. Output like the following indicates that all connections are normal:

[ndbd(NDB)]     2 node(s)
id=21   @192.168.41.52   (mysql-5.1.51 ndb-7.1.9, Nodegroup: 0, Master)
id=22   @192.168.41.42   (mysql-5.1.51 ndb-7.1.10, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=11   @192.168.41.240  (mysql-5.1.51 ndb-7.1.10)

[mysqld(API)]   2 node(s)
id=31   @192.168.41.51   (mysql-5.1.51 ndb-7.1.9)
id=32   @192.168.41.55   (mysql-5.1.51 ndb-7.1.9)
2. On the sql1 node, log in with mysql -u root.
Create a database sql1test, switch into it, create a table test, and insert a few rows of data.
3. On the other SQL node, log in with mysql -u root.
List the databases; the sql1test database should be there. Run a SELECT against the test table; the inserted rows should be returned.
4. Shut down one of the ndbd storage nodes.
Query the records in the test table from the sql1 node again.
If everything above behaves normally, the cluster is up and working.
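The test in steps 2 through 4 can be sketched as SQL; the column layout and sample rows here are invented for illustration, and the essential detail is the NDBCLUSTER engine:

```sql
-- On the first SQL node (mysql -u root):
CREATE DATABASE sql1test;
USE sql1test;
-- Only tables created with the NDBCLUSTER engine are stored in the cluster.
CREATE TABLE test (id INT PRIMARY KEY, name VARCHAR(20)) ENGINE=NDBCLUSTER;
INSERT INTO test VALUES (1, 'alpha'), (2, 'beta');

-- On the other SQL node (mysql -u root):
SHOW DATABASES;               -- sql1test should be listed
SELECT * FROM sql1test.test;  -- both rows should be returned
```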
5. Problems Encountered
1. The SQL node fails to start, complaining that the mysql.sock file under /var/lib/mysql does not exist.
Open /etc/my.cnf and change the socket value under [client] so that it matches the socket value under [mysqld].
2. The error log says the errmsg.sys file cannot be found.
Use find / -name errmsg.sys to locate the file, then fix the basedir value under [mysqld] so that the server can find errmsg.sys.
3. Entering the database console with the plain mysql command does not work in cluster mode.
Log in to the database console with mysql -u root instead.
4. A table created on one SQL node cannot be seen from the mysql console of another SQL node.
Create the table with the NDBCLUSTER engine: CREATE TABLE <table name> (...) ENGINE=NDBCLUSTER;