MySQL server cluster configuration method
Install mysql-cluster
System: CentOS 5.1, 32-bit (the 64-bit RPM packages are used in the same way)
Download: http://dev.mysql.com/get/downloads/mysql-cluster-7.0
A total of five packages:
mysql-cluster-gpl-client-7.1.3-1.rhel5.i386.rpm
mysql-cluster-gpl-management-7.1.3-1.rhel5.i386.rpm
mysql-cluster-gpl-server-7.1.3-1.rhel5.i386.rpm
mysql-cluster-gpl-storage-7.1.3-1.rhel5.i386.rpm
mysql-cluster-gpl-tools-7.1.3-1.rhel5.i386.rpm
Three CentOS servers:
Management node (ndb_mgmd): 192.168.1.14
SQL node 1 (mysqld): 192.168.1.15
SQL node 2 (mysqld): 192.168.1.11
Data node (ndbd): 192.168.1.15
Data node (ndbd): 192.168.1.11
// -------------------------------------------------------------------- Common steps begin (run these on all three hosts)
The first thing to do is turn off the firewall on all three hosts (otherwise the nodes cannot connect to each other).
Disable the firewall:
service iptables stop
Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]
Output like this means the firewall has been stopped.
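Note that service iptables stop only lasts until the next reboot. If you also want the firewall to stay off after a reboot (a step not in the original procedure, but standard on CentOS), disable the service at boot time:
shell> chkconfig iptables off // stop iptables from starting on boot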
Create the data directories:
Storage node: mkdir /var/lib/mysql/data
Management node: mkdir /var/lib/mysql-cluster
SQL node: not needed
Grant permissions on the two directories:
chmod -R 1777 /var/lib/mysql
chmod -R 1777 /var/lib/mysql-cluster
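You can confirm the permissions took effect with ls; mode 1777 shows up as drwxrwxrwt:
shell> ls -ld /var/lib/mysql /var/lib/mysql-cluster // both should show drwxrwxrwt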
// -------------------------------------------------------------------- Common steps end
Management node installation (two packages):
mysql-cluster-gpl-management-7.1.3-1.rhel5.i386.rpm
mysql-cluster-gpl-tools-7.1.3-1.rhel5.i386.rpm
rpm -ivh mysql-cluster-gpl-management-7.1.3-1.rhel5.i386.rpm
rpm -ivh mysql-cluster-gpl-tools-7.1.3-1.rhel5.i386.rpm
vi /var/lib/mysql-cluster/config.ini
[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
[tcp default]
SendBufferMemory=2M
ReceiveBufferMemory=2M
[ndb_mgmd default]
PortNumber=1186
DataDir=/var/lib/mysql-cluster
[ndb_mgmd]
Id=1
HostName=192.168.1.14
[ndbd]
Id=2
HostName=192.168.1.15
DataDir=/var/lib/mysql/data
[ndbd]
Id=3
HostName=192.168.1.11
DataDir=/var/lib/mysql/data
[mysqld]
Id=14
HostName=192.168.1.15
[mysqld]
Id=15
HostName=192.168.1.11
[mysqld]
Id=16
// Start the management node
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
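You can confirm the management server is up by querying it with the ndb_mgm client; at this stage the data and SQL nodes will still be listed as not connected:
shell> ndb_mgm -e show // -e runs a single ndb_mgm command and exits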
/******************** The above is the management node installation ********************/
Storage node installation:
mysql-cluster-gpl-storage-7.1.3-1.rhel5.i386.rpm
rpm -ivh mysql-cluster-gpl-storage-7.1.3-1.rhel5.i386.rpm
vi /etc/my.cnf // make sure it contains the following
[mysqld]
max_connections=100
slow_query_log=1
slow_query_log_file=/var/lib/mysql-cluster/slow_query.log
long_query_time=1
datadir=/var/lib/mysql-cluster
ndbcluster
ndb-connectstring=192.168.1.14
[mysql_cluster]
ndb-connectstring=192.168.1.14
Initialize and start the data node.
Note: do not run ndbd --initial on all data nodes at the same time, otherwise all the data is deleted; that is, run this command on only one data node at a time.
ndbd --initial
Output like the following indicates success:
[ndbd] INFO -- Configuration fetched from '10.50.8.8:1186', generation: 1
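Once both data nodes have been started, you can check their state from the management host:
shell> ndb_mgm -e "all status" // each ndbd node should eventually report Started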
SQL node installation (two packages):
mysql-cluster-gpl-client-7.1.3-1.rhel5.i386.rpm
mysql-cluster-gpl-server-7.1.3-1.rhel5.i386.rpm
rpm -ivh mysql-cluster-gpl-server-7.1.3-1.rhel5.i386.rpm
rpm -ivh mysql-cluster-gpl-client-7.1.3-1.rhel5.i386.rpm --nodeps --force // these two options must be given, otherwise the package will not install
vi /etc/my.cnf // make sure it contains the following
[mysqld]
ndbcluster
ndb-connectstring=192.168.1.14:1186
[mysql_cluster]
ndb-connectstring=192.168.1.14:1186
Start the SQL node with mysqld_safe &; there should be no errors.
After execution:
100308 13:46:32 mysqld_safe Logging to '/var/lib/mysql/localhost.localdomain.err'.
100308 13:46:32 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
The SQL node has started successfully.
On the management node host (192.168.1.14):
ndb_mgm
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.1.15  (mysql-5.1.44 ndb-7.1.3, Nodegroup: 0, Master)
id=3    @192.168.1.11  (mysql-5.1.44 ndb-7.1.3, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.1.14  (mysql-5.1.44 ndb-7.1.3)
[mysqld(API)]   3 node(s)
id=14   @192.168.1.15  (mysql-5.1.44 ndb-7.1.3)
id=15   @192.168.1.11  (mysql-5.1.44 ndb-7.1.3)
id=16 (not connected, accepting connect from any host)
If the information above is displayed, the cluster is running.
If a node shows 'not connected, accepting connect from any host', the corresponding SQL node has not been started.
If a data node shows something like (mysql-5.1.39 ndb-7.0.9, starting, Nodegroup: 0), that storage node has not finished starting; if your configuration is correct, check the firewall again.
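A quick end-to-end test (not part of the original text; the database and table names here are arbitrary) is to create an NDBCLUSTER table on one SQL node and read it from the other:
// on SQL node 1 (192.168.1.15):
mysql> CREATE DATABASE clustertest;
mysql> CREATE TABLE clustertest.t1 (id INT PRIMARY KEY) ENGINE=NDBCLUSTER;
mysql> INSERT INTO clustertest.t1 VALUES (1);
// on SQL node 2 (192.168.1.11); the database itself must be created on every SQL node, the NDB table is then discovered automatically:
mysql> CREATE DATABASE clustertest;
mysql> SELECT * FROM clustertest.t1; // should return the row inserted on node 1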
/******************** The above is the SQL node installation ********************/
Dynamically updating nodes
Stop the management node (here the management node id is 1):
ndb_mgm> 1 stop
Exit ndb_mgm.
shell> vi /var/lib/mysql-cluster/config.ini
For example, to add an ndbd node:
[ndbd]
Id=6
HostName=10.50.8.13
DataDir=/var/lib/mysql/data
Save and exit.
ndb_mgmd -f config.ini --reload
13:47:15 [MgmtSrvr] INFO -- NDB Cluster Management Server. mysql-5.1.39 ndb-7.0.9b
13:47:16 [MgmtSrvr] INFO -- Reading cluster configuration from 'config.ini'
The management server starts successfully.
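The new host (10.50.8.13 in this example) still needs the storage-node setup described earlier; a minimal sketch, assuming the same packages and paths as above:
shell> rpm -ivh mysql-cluster-gpl-storage-7.1.3-1.rhel5.i386.rpm
shell> mkdir /var/lib/mysql/data
shell> chmod -R 1777 /var/lib/mysql
shell> vi /etc/my.cnf // add the same [mysql_cluster] ndb-connectstring=192.168.1.14 as on the other data nodes
shell> ndbd --initial // first start of this new node only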
Restart each node:
The ndbd nodes: run ndb_mgm> 2 restart on the management node (by now you should know how to get into ndb_mgm).
The SQL nodes: run service mysql stop and then mysqld_safe & on each SQL node.
Then run show on the management node to check the results.
1. Management node startup: ndb_mgmd -f /var/lib/mysql-cluster/config.ini; after a configuration change: ndb_mgmd -f config.ini --reload
2. Data node startup: ndbd; restart from the management node: ndb_mgm> 2 restart
3. SQL node startup: mysqld_safe &; shutdown: service mysql stop
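Worth remembering when bringing the whole cluster up from cold (standard MySQL Cluster practice, not spelled out above), start the nodes in this order:
shell> ndb_mgmd -f /var/lib/mysql-cluster/config.ini // 1. the management node first
shell> ndbd // 2. then every data node
shell> mysqld_safe & // 3. the SQL nodes last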