MySQL Cluster build-up

Source: Internet
Author: User
Tags: iptables

Reference: http://blog.csdn.net/zklth/article/details/7522677

First, environment preparation:

Note: the firewall must be stopped on all nodes

/etc/init.d/iptables status   # check the firewall status
/etc/init.d/iptables stop     # stop the firewall

1. Software Download:
ftp://mirror.switch.ch/mirror/mysql/Downloads/MySQL-Cluster-7.1/

Select mysql-cluster-gpl-7.1.19.tar.gz and download it.

2. Hardware: five physical nodes are required (virtual machines were not tested)

3. Software Installation: Do the following on all nodes

######################################################################################################

groupadd mysql                # create the mysql group

useradd -g mysql mysql        # create the mysql user

# create the directories

mkdir -p /opt/mysql-cluster

mkdir -p /opt/mysql-cluster/etc

mkdir -p /opt/mysql-cluster/tmp

mkdir -p /opt/mysql-cluster/data

# unpack the source package

tar zxvf mysql-cluster-gpl-7.1.19.tar.gz

# configure the build

cd mysql-cluster-gpl-7.1.19

./configure --prefix=/opt/mysql-cluster --with-charset=gbk --with-collation=gbk_chinese_ci --with-client-ldflags=-all-static --with-mysqld-ldflags=-all-static --enable-assembler --with-extra-charsets=complex --enable-thread-safe-client --with-big-tables --with-readline --with-ssl --with-embedded-server --enable-local-infile --with-unix-socket-path=/opt/mysql-cluster/tmp/mysql.sock --sysconfdir=/opt/mysql-cluster/etc --without-debug --with-mysqld-user=mysql --with-plugins=max

# build and install

make && make install

######################################################################################################

Second, Management node configuration

1. Create the cluster configuration file config.ini on the management node

[email protected] mysql-cluster]# vi etc/config.ini

[ndbd default]
NoOfReplicas=2
[mysqld default]
[ndb_mgmd default]
DataDir=/opt/mysql-cluster/data   # data directory
[ndb_mgmd]
HostName=10.30.9.204
[ndbd]
HostName=10.30.9.206
DataDir=/opt/mysql-cluster/data
[ndbd]
HostName=10.30.9.207
DataDir=/opt/mysql-cluster/data
[mysqld]
HostName=10.30.9.208
[mysqld]
HostName=10.30.9.211
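As a quick sanity check on the layout above: NDB groups the data nodes into node groups of NoOfReplicas members each, so the node-group count follows directly from the number of [ndbd] sections. A minimal sketch (the numbers mirror this article's two-data-node setup):

```shell
# With 2 data nodes and NoOfReplicas=2 there is exactly one node group,
# i.e. the two data nodes hold replicas of the same data.
data_nodes=2
no_of_replicas=2
node_groups=$((data_nodes / no_of_replicas))
echo "node groups: $node_groups"   # prints: node groups: 1
```

With four data nodes and the same NoOfReplicas=2, you would instead get two node groups, each holding half of the data.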

2. Start the NDB_MGMD service on the management node

[email protected] mysql-cluster]# libexec/ndb_mgmd -f etc/config.ini
MySQL Cluster Management Server mysql-5.1.56 ndb-7.1.19

Now that the ndb_mgmd service has started, you can view its process with the ps command:

[email protected] mysql-cluster]# ps -ef | grep ndb
root 23505     1  0 May11 ?      00:00:00 libexec/ndb_mgmd -f etc/config.ini
root 24692 24238  0 01:29 pts/1  00:00:00 grep ndb

Use the show command in the ndb_mgm client to view the cluster status:

[email protected] mysql-cluster]# bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2 (not connected, accepting connect from 10.30.9.206)
id=3 (not connected, accepting connect from 10.30.9.207)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.30.9.204  (mysql-5.1.56 ndb-7.1.19)

[mysqld(API)]   2 node(s)
id=4 (not connected, accepting connect from 10.30.9.208)
id=5 (not connected, accepting connect from 10.30.9.211)


The output shows that the two data nodes and the two SQL nodes have not been started yet.

ndb_mgm> exit   # quit the management client
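When scripting, the same status can also be read non-interactively with `bin/ndb_mgm -e show`. Below is a small sketch that counts the nodes still waiting to connect; the captured sample text stands in for the live command's output:

```shell
# Sample lines as produced by `ndb_mgm -e show` (stand-in for the live output).
sample='id=2 (not connected, accepting connect from 10.30.9.206)
id=3 (not connected, accepting connect from 10.30.9.207)
id=1    @10.30.9.204  (mysql-5.1.56 ndb-7.1.19)'

# Count how many node slots are still unconnected.
waiting=$(printf '%s\n' "$sample" | grep -c 'not connected')
echo "nodes still waiting: $waiting"   # prints: nodes still waiting: 2
```

In real use, replace the sample variable with `sample=$(bin/ndb_mgm -e show)` on the management node.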


Third, data node configuration

Perform steps 1 to 4 below on both data nodes.

1. Create the data dictionary

Specify the data directory and user:

[email protected] mysql-cluster]# bin/mysql_install_db --user=mysql --datadir=/opt/mysql-cluster/data --basedir=/opt/mysql-cluster
WARNING: The host 'sg206' could not be looked up with resolveip.
This probably means that your libc libraries are not compatible
with this binary MySQL version. The MySQL daemon, mysqld, should work
normally with the exception that host name resolving will not work.
This means that you should use IP addresses instead of hostnames
when specifying MySQL privileges!
Installing MySQL system tables...
OK
Filling help tables...
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER!
To do so, start the server, then issue the following commands:

/opt/mysql-cluster/bin/mysqladmin -u root password 'new-password'
/opt/mysql-cluster/bin/mysqladmin -u root -h sg206 password 'new-password'

Alternatively you can run:
/opt/mysql-cluster/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default. This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:
cd /opt/mysql-cluster ; /opt/mysql-cluster/bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd /opt/mysql-cluster/mysql-test ; perl mysql-test-run.pl

Please report any problems with the /opt/mysql-cluster/scripts/mysqlbug script!

2. Create the my.cnf configuration file

[email protected] mysql-cluster]# vim etc/my.cnf


[mysqld]
basedir=/opt/mysql-cluster
datadir=/opt/mysql-cluster/data
user=mysql
port=3306
socket=/opt/mysql-cluster/tmp/mysql.sock
ndbcluster
# management node IP
ndb-connectstring=10.30.9.204
[mysql_cluster]
# management node IP
ndb-connectstring=10.30.9.204
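Since every data and SQL node needs this same my.cnf, it can be generated rather than typed by hand on each node. A sketch using the article's paths and management IP (writing to /tmp/demo-my.cnf here so nothing real is touched):

```shell
# Generate the shared my.cnf with a heredoc; only the management IP varies
# between clusters, so it is factored into a variable.
MGM_IP=10.30.9.204
CNF=/tmp/demo-my.cnf     # stand-in for /opt/mysql-cluster/etc/my.cnf

cat > "$CNF" <<EOF
[mysqld]
basedir=/opt/mysql-cluster
datadir=/opt/mysql-cluster/data
user=mysql
port=3306
socket=/opt/mysql-cluster/tmp/mysql.sock
ndbcluster
ndb-connectstring=$MGM_IP
[mysql_cluster]
ndb-connectstring=$MGM_IP
EOF

# Both the [mysqld] and [mysql_cluster] sections must point at the manager.
grep -c "ndb-connectstring=$MGM_IP" "$CNF"   # prints 2
```

The file could then be copied to each node with scp before starting the services.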

3. Start the ndbd service

The ndbd service must be started with the --initial option on the first start only; omit it on subsequent starts.

[email protected] mysql-cluster]# libexec/ndbd --initial

2012-05-12 03:53:15 [ndbd] INFO -- Angel connected to '10.30.9.204:1186'

2012-05-12 03:53:15 [ndbd] INFO -- Angel allocated nodeid: 2

4. Change the owner of everything in the mysql-cluster directory to the mysql user and mysql group

[email protected] mysql-cluster]# chown -R mysql:mysql *


At this point the ndbd service is running on both data nodes, and the cluster status can be viewed from either the management node or a data node:

################################################
[email protected] mysql-cluster]# bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @10.30.9.206  (mysql-5.1.56 ndb-7.1.19, Nodegroup: 0, Master)
id=3    @10.30.9.207  (mysql-5.1.56 ndb-7.1.19, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.30.9.204  (mysql-5.1.56 ndb-7.1.19)

[mysqld(API)]   2 node(s)
id=4 (not connected, accepting connect from 10.30.9.208)
id=5 (not connected, accepting connect from 10.30.9.211)

################################################


Fourth, SQL node configuration

On both SQL nodes, perform steps 1, 2, and 4 from the data node section above, then the following step:

1. Start the mysqld process

[email protected] mysql-cluster]# bin/mysqld_safe --defaults-file=/opt/mysql-cluster/etc/my.cnf --basedir=/opt/mysql-cluster --datadir=/opt/mysql-cluster/data --user=mysql &

If startup fails, check the error log /opt/mysql-cluster/data/sg208.err
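A small hypothetical helper for that check: it scans the error log for [ERROR] lines and prints the most recent one. The /tmp path and the sample log line below are illustrative stand-ins for the real sg208.err:

```shell
# Stand-in error log (real path: /opt/mysql-cluster/data/<hostname>.err).
errlog=/tmp/demo-sg208.err
printf '%s\n' "120512 04:01:02 [ERROR] Can't connect to management server" > "$errlog"

# Print the newest [ERROR] line, if any, from the given log file.
check_errlog() {
  if grep -q '\[ERROR\]' "$1"; then
    echo "mysqld failed, last error:"
    grep '\[ERROR\]' "$1" | tail -n 1
  fi
}

check_errlog "$errlog"
```

A common cause at this stage is the SQL node being unable to reach the management node on port 1186, which points back at the firewall note at the top of this article.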

The cluster status can then be viewed from either the management node or a data node:

################################################

[email protected] mysql-cluster]# bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @10.30.9.206  (mysql-5.1.56 ndb-7.1.19, Nodegroup: 0, Master)
id=3    @10.30.9.207  (mysql-5.1.56 ndb-7.1.19, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.30.9.204  (mysql-5.1.56 ndb-7.1.19)

[mysqld(API)]   2 node(s)
id=4    @10.30.9.208  (mysql-5.1.56 ndb-7.1.19)
id=5    @10.30.9.211  (mysql-5.1.56 ndb-7.1.19)

################################################

You can see that all the nodes are now connected and running normally.

Fifth, test data synchronization between the two SQL nodes.

Log in to the database on SQL node 10.30.9.208:

[email protected] mysql-cluster]# bin/mysqladmin -uroot password '111111'   # set the root password to 111111

[email protected] mysql-cluster]# bin/mysql -uroot -p   # connect to MySQL Cluster
Enter password:   # enter 111111

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.1.56-ndb-7.1.19 Source distribution

Copyright (c) Oracle and/or its affiliates. All rights reserved.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL v2 license

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE DATABASE orcl;   # create the orcl database

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| ndbinfo            |
| orcl               |
| test               |
+--------------------+
5 rows in set (0.00 sec)

The orcl database has now been created on 10.30.9.208.

Then log in to the database in the same way on 10.30.9.211:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| ndbinfo            |
| orcl               |
| test               |
+--------------------+
5 rows in set (0.00 sec)

You can see that the orcl database is also present on 10.30.9.211.


Create a table and insert data on 10.30.9.208

use orcl;

CREATE TABLE name (

id int(4) auto_increment NOT NULL primary key,

xm char(8),

xb char(2),

csny date
) engine=ndbcluster;   # engine=ndbcluster is essential; without it the table data will not be synchronized

INSERT INTO name VALUES ('', 'Jack', 'm', '1900-01-01');

INSERT INTO name VALUES ('', 'Rose', 'f', '1904-01-01');
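One way to avoid forgetting the crucial engine=ndbcluster clause is to keep the DDL in a script file and check it before feeding it to the client. A sketch (the /tmp path is a stand-in; in real use the file would be piped to bin/mysql -uroot -p):

```shell
# Write the DDL to a file; the quoted heredoc keeps the SQL verbatim.
SQL=/tmp/demo-orcl.sql
cat > "$SQL" <<'EOF'
use orcl;
CREATE TABLE name (
  id int(4) auto_increment NOT NULL primary key,
  xm char(8),
  xb char(2),
  csny date
) engine=ndbcluster;
EOF

# Guard: refuse to proceed if the NDB engine clause is missing.
grep -q 'engine=ndbcluster' "$SQL" && echo "engine clause present"
```

Without that clause the table is created with the SQL node's default engine and lives only on that one node, so the other SQL node never sees its rows.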

At this point, on 10.30.9.211:

mysql> use orcl
Database changed
mysql> select * from name;
+----+------+------+------------+
| id | xm   | xb   | csny       |
+----+------+------+------------+
|  1 | Jack | m    | 1900-01-01 |
|  2 | Rose | f    | 1904-01-01 |
+----+------+------+------------+
2 rows in set (0.00 sec)

You can see that the data is synchronized.


Shutdown of the cluster:

NDB shutdown and startup of each node
Nodes are started and stopped in a fixed order: start the management node first, then the data nodes, and finally the MySQL (SQL) nodes. When shutting down, stop the SQL nodes first, then shut down all management and data nodes from the management node.
Start:
/usr/bin/ndb_mgmd -f /usr/local/mysql/mysql-cluster/config.ini   (start the management node)
/usr/bin/ndbd --initial   (start a data node; --initial is only needed on the first start or after certain configuration changes)
/etc/rc.d/init.d/mysqld start   (start a MySQL node)
Stop:
bin/mysqladmin -u root -p shutdown
ndb_mgm -e shutdown
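The ordering rules above can be captured in a small dry-run script. run() only echoes here, and the hostnames are the article's two SQL nodes; replacing echo with ssh or a local exec would make it operational:

```shell
# Dry-run sketch of the shutdown order: SQL nodes first, then the
# management + data nodes together via ndb_mgm -e shutdown.
run() { echo "$*"; }   # swap echo for ssh/exec in real use

shutdown_cluster() {
  for sql in 10.30.9.208 10.30.9.211; do
    run "mysqladmin -h $sql -u root -p shutdown"
  done
  run "ndb_mgm -e shutdown"   # stops management and data nodes
}

shutdown_cluster
```

Keeping the order in one function prevents the classic mistake of killing the data nodes while SQL nodes still hold open NDB transactions.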

Startup of the cluster:

1. Start the management node:

bin/ndb_mgmd -f /opt/mysql-cluster/etc/config.ini --reload --configdir=/opt/mysql-cluster   # after modifying the configuration file, add --reload for the changes to take effect

2. Start the data nodes:

bin/ndbd --connect-string="nodeid=2;host=172.16.48.204:1186"   # each data node's nodeid can be seen with show on the management node

3. Start the SQL nodes:

bin/mysqld_safe --defaults-file=/opt/mysql-cluster/etc/my.cnf --basedir=/opt/mysql-cluster --datadir=/opt/mysql-cluster/data --user=mysql &
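Since each data node's start command differs only by its nodeid, the commands can be generated in a loop. A dry-run sketch (run() just echoes; the management address uses this article's 10.30.9.204 with the default port 1186):

```shell
# Build the per-node ndbd start commands from the nodeids shown by
# `ndb_mgm -e show` (2 and 3 in this article's cluster).
MGM=10.30.9.204:1186
run() { echo "$*"; }   # dry-run: swap echo for ssh/exec in real use

for nodeid in 2 3; do
  run "bin/ndbd --connect-string=\"nodeid=$nodeid;host=$MGM\""
done
```

Pinning the nodeid in the connect string ensures each host always claims the same slot from config.ini, which keeps the node-to-host mapping stable across restarts.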

Appendix:

For a detailed explanation of the config.ini options, see: http://www.linuxidc.com/Linux/2010-06/26640p2.htm


