Note: if you are on an Ubuntu (Debian-based) system, two of the steps below need slight changes:
1)
The init.d path:
Original:
/etc/rc.d/init.d/mysqld
Becomes:
/etc/init.d/mysqld
2)
Registering the service:
Original:
chkconfig --add mysqld
Becomes:
sudo update-rc.d mysqld defaults
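The two substitutions above can be folded into one rough check. A minimal sketch (an assumption based on the directory layouts named above, not a robust init-system probe):

```shell
#!/bin/sh
# Pick the init.d path depending on whether the system uses the
# Red Hat-style /etc/rc.d layout or the Debian-style /etc/init.d layout.
if [ -d /etc/rc.d/init.d ]; then
    INIT_DIR=/etc/rc.d/init.d    # Red Hat style
else
    INIT_DIR=/etc/init.d         # Debian/Ubuntu style
fi
echo "init scripts go in: $INIT_DIR"
```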
--------------------------------------------------------------------------------
The following article is reposted from:
http://hi.baidu.com/%BA%DA%BF%CD%B7%C0%CF%DF/blog/item/5ce9bfde50129d58cdbf1a15.html
--------------------------------------------------------------------------------
MySQL Cluster Configuration
I. Introduction
==========
This document describes how to install and configure a MySQL Cluster based on two servers, such that MySQL can continue to run if either server fails or goes down.
Note!
Although this is a two-server MySQL Cluster, a third server is still required as the management node. This server can be shut down after the cluster has started, although doing so is not recommended. In theory a cluster could be built from only two servers, but with such a setup, as soon as one server goes down the cluster goes down with it, which defeats the purpose of a cluster. That is why a third server running as the management node is required.
In addition, many readers may not have three physical servers available; you can run this experiment in VMware or another virtual machine.
The three servers are as follows:
Server1: mysql1.vmtest.net 192.168.0.1
Server2: mysql2.vmtest.net 192.168.0.2
Server3: mysql3.vmtest.net 192.168.0.3
Server1 and server2 act as the actual servers of the MySQL Cluster. Server3, the management node, has very low requirements: it needs only minor system adjustments and does not need MySQL installed, so a low-end machine is sufficient, and it can run other services at the same time.
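If these hostnames are not resolvable via DNS, you can map them locally on all three machines. A minimal /etc/hosts fragment using the example names and addresses above (the short aliases are an added convenience, not part of the original setup):

```
192.168.0.1   mysql1.vmtest.net   server1
192.168.0.2   mysql2.vmtest.net   server2
192.168.0.3   mysql3.vmtest.net   server3
```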
II. Install MySQL on Server1 and Server2
======================================
Download mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz from http://www.mysql.com
Note: it must be the Max version of MySQL; the Standard version does not support cluster deployment!
Perform the following steps on both server1 and server2:
# mv mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz /usr/local/
# cd /usr/local/
# groupadd mysql
# useradd -g mysql mysql
# tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# rm -f mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# mv mysql-max-4.1.9-pc-linux-gnu-i686 mysql
# cd mysql
# scripts/mysql_install_db --user=mysql
# chown -R root .
# chown -R mysql data
# chgrp -R mysql .
# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
# chmod +x /etc/rc.d/init.d/mysqld
# chkconfig --add mysqld
Do not start MySQL at this time!
III. Install and Configure the Management Node Server (Server3)
============================================
As a management node server, server3 requires two files: ndb_mgm and ndb_mgmd:
Download mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz from http://www.mysql.com
# mkdir /usr/src/mysql-mgm
# cd /usr/src/mysql-mgm
# tar -zxvf mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# rm mysql-max-4.1.9-pc-linux-gnu-i686.tar.gz
# cd mysql-max-4.1.9-pc-linux-gnu-i686
# mv bin/ndb_mgm .
# mv bin/ndb_mgmd .
# chmod +x ndb_mgm*
# mv ndb_mgm* /usr/bin/
# cd
# rm -rf /usr/src/mysql-mgm
Now create the configuration file for this management node server:
# mkdir /var/lib/mysql-cluster
# cd /var/lib/mysql-cluster
# vi config.ini
Add the following content to config.ini:
[ndbd default]
NoOfReplicas=2
[mysqld default]
[ndb_mgmd default]
[tcp default]
# Management Server
[ndb_mgmd]
HostName=192.168.0.3            # IP address of the management node (server3)
# Storage Engines
[ndbd]
HostName=192.168.0.1            # IP address of MySQL Cluster server1
DataDir=/var/lib/mysql-cluster
[ndbd]
HostName=192.168.0.2            # IP address of MySQL Cluster server2
DataDir=/var/lib/mysql-cluster
# The hostnames of the two [mysqld] sections below could be set to
# server1 and server2, but to make it quicker to swap servers in the
# cluster, we recommend leaving them blank; otherwise this configuration
# must be changed whenever a server is replaced.
[mysqld]
[mysqld]
After saving and exiting, start the management node on server3:
# ndb_mgmd
Note that ndb_mgmd is only the management node service, not the management client, so you will see no output after it starts.
IV. Configure the Cluster Servers and Start MySQL
==================================
The following changes must be made in both server1 and server2:
# vi /etc/my.cnf
[mysqld]
ndbcluster
ndb-connectstring=192.168.0.3   # IP address of server3
[mysql_cluster]
ndb-connectstring=192.168.0.3   # IP address of server3
After saving and exiting, create the data directory and start MySQL:
# mkdir /var/lib/mysql-cluster
# cd /var/lib/mysql-cluster
# /usr/local/mysql/bin/ndbd --initial
# /etc/rc.d/init.d/mysqld start
You can add /usr/local/mysql/bin/ndbd to /etc/rc.local so that it starts at boot.
Note: the --initial parameter should be used only the first time you start ndbd, or after config.ini on server3 has been changed!
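Because --initial recreates the data node's on-disk files, a start script can guard against passing it accidentally. A minimal sketch, assuming the data node leaves a directory named like ndb_<node-id>_fs in its DataDir after its first start (a heuristic, not an official check); the script only prints the command it would run:

```shell
#!/bin/sh
NDBD=/usr/local/mysql/bin/ndbd
DATADIR=/var/lib/mysql-cluster

# Heuristic: if no ndb_*_fs directory exists yet, this node has never
# started, so --initial is appropriate; otherwise omit it.
if ls "$DATADIR"/ndb_*_fs >/dev/null 2>&1; then
    CMD="$NDBD"               # subsequent start: no --initial
else
    CMD="$NDBD --initial"     # first start (or after config.ini changed)
fi
echo "Would run: $CMD"
```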
V. Check the Working Status
======================
Return to the management node (server3) and start the management terminal:
# /usr/bin/ndb_mgm
Enter the show command to view the current working status: (The following is an example of status output)
[root@mysql3 root]# /usr/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.1  (Version: 4.1.9, Nodegroup: 0, Master)
id=3    @192.168.0.2  (Version: 4.1.9, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.3  (Version: 4.1.9)
[mysqld(API)]   2 node(s)
id=4    (Version: 4.1.9)
id=5    (Version: 4.1.9)
ndb_mgm>
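The show listing can also be checked non-interactively, e.g. from a monitoring script. A minimal sketch that counts connected data nodes by parsing the listing; it uses the sample output above as canned input so it is self-contained (in real use you would capture the actual output of the management client's show command instead):

```shell
#!/bin/sh
# Canned sample output from this article; replace with real ndb_mgm output.
STATUS='[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.1  (Version: 4.1.9, Nodegroup: 0, Master)
id=3    @192.168.0.2  (Version: 4.1.9, Nodegroup: 0)'

# A node line that shows an @address is connected; a down node is listed
# without one. Count the connected data nodes.
CONNECTED=$(printf '%s\n' "$STATUS" | grep -c '^id=[0-9]*[[:space:]]*@')
echo "connected ndbd nodes: $CONNECTED"
```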
If there is no problem, test MySQL now:
Note: this document does not set a root password for MySQL; we recommend setting a MySQL root password on both server1 and server2.
On server1:
# /usr/local/mysql/bin/mysql -u root -p
> use test;
> CREATE TABLE ctest (i INT) ENGINE=NDBCLUSTER;
> INSERT INTO ctest () VALUES (1);
> SELECT * FROM ctest;
You should see one row returned (with the value 1).
If this works, switch to server2 and repeat the test. If that also succeeds, run an INSERT on server2 and go back to server1 to check that the row is visible there as well.
If no problem exists, congratulations!
VI. Destructive Testing
====================
Unplug the network cable of server1 or server2 and check whether the other cluster server keeps working properly (you can test it with a SELECT query). After the test, plug the cable back in.
If you have no physical access to the servers, i.e. you cannot unplug a cable, you can test as follows:
On server1 or server2:
# ps aux | grep ndbd
All ndbd processes are listed:
root  5578  0.0  0.3   6220   1964 ?     S    ndbd
root  5579  0.0 20.4 492072 102828 ?     R    ndbd
root 23532  0.0  0.1   3680    684 pts/1 S    grep ndbd
Then kill the ndbd processes, in order to "destroy" this MySQL Cluster server:
# kill -9 5578 5579
Then run a SELECT query test on the other cluster server, and also run the show command in the management terminal on the management node to see the status of the damaged server.
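The PIDs passed to kill above can also be extracted automatically instead of being copied by hand. A sketch that pulls them out of a ps listing; it uses the sample listing from above (with its hypothetical PIDs) as canned input:

```shell
#!/bin/sh
# Canned ps output from the example above; in real use:
#   PS_OUT=$(ps aux | grep ndbd)
PS_OUT='root  5578  0.0  0.3   6220   1964 ?     S  ndbd
root  5579  0.0 20.4 492072 102828 ?     R  ndbd
root 23532  0.0  0.1   3680    684 pts/1 S  grep ndbd'

# Drop the grep process itself, then take the PID column (field 2).
PIDS=$(printf '%s\n' "$PS_OUT" | grep -v 'grep' | awk '{print $2}')
echo "$PIDS"
# A real destructive test would then run: kill -9 $PIDS
```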
After the test is complete, you only need to restart the ndbd process of the damaged server:
# ndbd
Note! As mentioned above, the --initial parameter is not needed this time!
At this point, the MySQL Cluster configuration is complete!