How to configure a MySQL server cluster
MySQL Cluster Installation
System: CentOS 5.1, 32-bit (for a 64-bit system, use the corresponding 64-bit RPM packages)
Download from http://dev.mysql.com/get/downloads/mysql-cluster-7.0 the packages matching the names below.
A total of 5 packages:
mysql-cluster-gpl-client-7.1.3-1.rhel5.i386.rpm
mysql-cluster-gpl-management-7.1.3-1.rhel5.i386.rpm
mysql-cluster-gpl-server-7.1.3-1.rhel5.i386.rpm
mysql-cluster-gpl-storage-7.1.3-1.rhel5.i386.rpm
mysql-cluster-gpl-tools-7.1.3-1.rhel5.i386.rpm
3 CentOS servers:
Management node (ndb_mgmd): 192.168.1.14
SQL node 1 (mysqld): 192.168.1.15
SQL node 2 (mysqld): 192.168.1.11
Data node 1 (ndbd): 192.168.1.15
Data node 2 (ndbd): 192.168.1.11
------------------------------------------------------------------ start with this (do the following on all three machines)
First, turn off the firewall on each machine (otherwise the nodes cannot connect to each other).
To turn off the firewall:
service iptables stop
Flushing firewall rules: [OK]
Setting chains to policy ACCEPT: filter [OK]
Unloading iptables modules: [OK]
If you see output like this, the firewall was stopped successfully.
Create the directories:
Data node: mkdir -p /var/lib/mysql/data
Management node: mkdir /var/lib/mysql-cluster
SQL node: no directory needed.
Assign permissions to the two directories:
chmod -R 1777 /var/lib/mysql
chmod -R 1777 /var/lib/mysql-cluster
------------------------------------------------------------------ end with this.
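The preparation steps above can be sketched as one script. `PREFIX` is a hypothetical variable, not part of the original instructions, added so the directory layout can be tried without root; leave it empty on a real server.

```shell
#!/bin/sh
# Sketch of the per-machine preparation steps, assuming CentOS with iptables.
# PREFIX is hypothetical, added so the layout can be tested without root.
PREFIX="${PREFIX:-/tmp/cluster-demo}"

# On a real server, stop the firewall first or the nodes cannot connect:
# service iptables stop

# Data nodes need /var/lib/mysql/data; the management node needs
# /var/lib/mysql-cluster; SQL nodes need neither.
mkdir -p "$PREFIX/var/lib/mysql/data"
mkdir -p "$PREFIX/var/lib/mysql-cluster"

# Open up permissions as the article does (mode 1777: world-writable, sticky).
chmod -R 1777 "$PREFIX/var/lib/mysql"
chmod -R 1777 "$PREFIX/var/lib/mysql-cluster"
```

On a real deployment, run only the parts relevant to each node's role.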
Management Node Installation:
mysql-cluster-gpl-management-7.1.3-1.rhel5.i386.rpm
mysql-cluster-gpl-tools-7.1.3-1.rhel5.i386.rpm
rpm -ivh mysql-cluster-gpl-management-7.1.3-1.rhel5.i386.rpm
rpm -ivh mysql-cluster-gpl-tools-7.1.3-1.rhel5.i386.rpm
vi /var/lib/mysql-cluster/config.ini
[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
[tcp default]
SendBufferMemory=2M
ReceiveBufferMemory=2M
[ndb_mgmd default]
PortNumber=1186
DataDir=/var/lib/mysql-cluster
[ndb_mgmd]
Id=1
HostName=192.168.1.14
[ndbd]
Id=2
HostName=192.168.1.15
DataDir=/var/lib/mysql/data
[ndbd]
Id=3
HostName=192.168.1.11
DataDir=/var/lib/mysql/data
[mysqld]
Id=14
HostName=192.168.1.15
[mysqld]
Id=15
HostName=192.168.1.11
[mysqld]
Id=16
Start the management node:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
/********************* The above is the management node installation **************************/
Storage (Data) Node Installation
mysql-cluster-gpl-storage-7.1.3-1.rhel5.i386.rpm
rpm -ivh mysql-cluster-gpl-storage-7.1.3-1.rhel5.i386.rpm
vi /etc/my.cnf    (confirm/add/modify the following section)
[mysqld]
max_connections=100
slow_query_log=1
slow_query_log_file=/var/lib/mysql-cluster/slow_query.log
long_query_time=1
datadir=/var/lib/mysql-cluster
ndbcluster
ndb-connectstring=192.168.1.14
[mysql_cluster]
ndb-connectstring=192.168.1.14
Initialize and start the data node:
Note: ndbd --initial wipes the node's data files, so do not run it on all data nodes of a running cluster at the same time or all cluster data will be deleted; run it on one data node at a time.
ndbd --initial
If it starts correctly you will see something like:
[ndbd] INFO -- Configuration fetched from '10.50.8.8:1186', generation: 1
SQL Node Installation:
mysql-cluster-gpl-client-7.1.3-1.rhel5.i386.rpm
mysql-cluster-gpl-server-7.1.3-1.rhel5.i386.rpm
rpm -ivh mysql-cluster-gpl-server-7.1.3-1.rhel5.i386.rpm
rpm -ivh mysql-cluster-gpl-client-7.1.3-1.rhel5.i386.rpm --nodeps --force    (these options must be included or the client package will not install)
vi /etc/my.cnf    (confirm/add/modify the following section)
[mysqld]
ndbcluster
ndb-connectstring=192.168.1.14:1186
[mysql_cluster]
ndb-connectstring=192.168.1.14:1186
Start the SQL node:
mysqld_safe &
After it runs, output like the following means the SQL node started successfully:
100308 13:46:32 mysqld_safe Logging to '/var/lib/mysql/localhost.localdomain.err'.
100308 13:46:32 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
On the management node machine (192.168.1.14):
ndb_mgm
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.1.15  (mysql-5.1.44 ndb-7.1.3, Nodegroup: 0, Master)
id=3    @192.168.1.11  (mysql-5.1.44 ndb-7.1.3, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.1.14  (mysql-5.1.44 ndb-7.1.3)
[mysqld(API)]   3 node(s)
id=14   @192.168.1.15  (mysql-5.1.44 ndb-7.1.3)
id=15   @192.168.1.11  (mysql-5.1.44 ndb-7.1.3)
id=16 (not connected, accepting connect from any host)
If you see output like the above, the cluster is up.
A line reading "not connected, accepting connect from any host" means the corresponding SQL node has not connected yet.
If a data node shows "mysql-5.1.44 ndb-7.1.3, starting, Nodegroup: 0", the storage node has not finished starting; if your configuration is correct, check that the firewall really is turned off.
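As a quick sanity check, the `show` output can be grepped for unconnected slots. The sample text below is embedded so the sketch runs without a live cluster; on a real management node you would pipe the output of `ndb_mgm -e show` instead.

```shell
#!/bin/sh
# Count nodes that have not connected yet. The sample output is embedded
# for illustration; on a real cluster use: ndb_mgm -e show
show_output='id=2   @192.168.1.15  (mysql-5.1.44 ndb-7.1.3, Nodegroup: 0, Master)
id=3   @192.168.1.11  (mysql-5.1.44 ndb-7.1.3, Nodegroup: 0)
id=16 (not connected, accepting connect from any host)'

# grep -c counts matching lines; one free API slot is expected here (id=16).
not_connected=$(printf '%s\n' "$show_output" | grep -c 'not connected')
echo "$not_connected node(s) not connected"
```

With the sample above this prints `1 node(s) not connected`; more than the expected number of free slots usually means a SQL node failed to join.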
/********* The following covers adding nodes and restarting nodes *********/
Dynamically updating nodes
Stop the management node (here my management node's id is 1):
ndb_mgm> 1 stop
Exit ndb_mgm, then edit the configuration:
shell> vi /var/lib/mysql-cluster/config.ini
To add an ndbd node, append:
[ndbd]
Id=6
HostName=10.50.8.13
DataDir=/var/lib/mysql/data
Save and exit, then restart the management server with the new configuration:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --reload
2010-03-08 13:47:15 [MgmtSrvr] INFO -- NDB Cluster Management Server. mysql-5.1.39 ndb-7.0.9b
2010-03-08 13:47:16 [MgmtSrvr] INFO -- Reading cluster configuration from 'config.ini'
Output like this means the reload succeeded.
Restart each node:
For a data node, run the restart from the management console (enter ndb_mgm as shown above): ndb_mgm> 2 restart
For a SQL node, run on that machine: service mysql stop, then mysqld_safe &
When everything is done, run show on the management node to check the result.
1. Management node: start with ndb_mgmd -f /var/lib/mysql-cluster/config.ini; reload the configuration with ndb_mgmd -f config.ini --reload
2. Data node: start with ndbd; restart from the management console with ndb_mgm> 2 restart
3. SQL node: start with mysqld_safe &; stop with service mysql stop
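The summary above can be wrapped in a small helper for scripting; `start_command` is a hypothetical function name, not part of the MySQL tools.

```shell
#!/bin/sh
# Hypothetical helper mapping a node role to the start command listed above.
start_command() {
  case "$1" in
    mgmt) echo "ndb_mgmd -f /var/lib/mysql-cluster/config.ini" ;;
    data) echo "ndbd" ;;
    sql)  echo "mysqld_safe &" ;;
    *)    echo "unknown role: $1" >&2; return 1 ;;
  esac
}

start_command data    # prints: ndbd
```

A wrapper like this keeps the per-role commands in one place when the same script is deployed to all three machines.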