Server planning: the entire system runs on the RHEL5U1 Server 64-bit edition, in Xen-based virtual machines: 2 cluster management nodes, 2 SQL nodes, 4 data nodes and 2 load balancer nodes. The data nodes form 2 node groups of 2 machines each:
- Virtual machine mysql_mgm-1, 192.168.20.5: Cluster Management node, id=1
- Virtual machine mysql_mgm-2, 192.168.20.6: Cluster Management node, id=2
- Virtual machine mysql_sql-1, 192.168.20.7: SQL node (MySQL server node), id=3
- Virtual machine mysql_sql-2, 192.168.20.8: SQL node (MySQL server node), id=4
- Virtual machine mysql_ndb-1, 192.168.20.9: NDB data node, id=5
- Virtual machine mysql_ndb-2, 192.168.20.10: NDB data node, id=6
- Virtual machine mysql_ndb-3, 192.168.20.11: NDB data node, id=7
- Virtual machine mysql_ndb-4, 192.168.20.12: NDB data node, id=8
- Virtual machine mysql_lb-1, 192.168.20.15: LVS load balancer node 1
- Virtual machine mysql_lb-2, 192.168.20.16: LVS load balancer node 2
- The virtual IP of the cluster is: 192.168.20.17
- Load balancing is done with the software that ships with RHEL (Piranha/LVS).
-----------------------------------------------------------------------------------------------------------
Installation process:
1. Install MySQL on all nodes: I did not download the MySQL source code and compile it myself; with this many machines, compiling everywhere is too cumbersome. Instead I went to mysql.com and downloaded the community edition built for my RHEL 5 Server 64-bit systems: http://dev.mysql.com/downloads/m ... Hel5-x86-64bit-rpms. Download all the RPM packages listed there and install them; of the shared libraries and the shared compatibility libraries, only one of the two can be installed. This gives you the MySQL server, the cluster-related tools and so on. Very convenient.
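For reference, installing on each node then looks roughly like the following. The package file names are illustrative, so match them to whatever versions you actually downloaded:
- # MySQL server, client and the NDB cluster tools
- rpm -ivh MySQL-server-community-*.rhel5.x86_64.rpm
- rpm -ivh MySQL-client-community-*.rhel5.x86_64.rpm
- rpm -ivh MySQL-ndb-management-community-*.rhel5.x86_64.rpm
- rpm -ivh MySQL-ndb-storage-community-*.rhel5.x86_64.rpm
- rpm -ivh MySQL-ndb-tools-community-*.rhel5.x86_64.rpm
- # install only one of MySQL-shared-* and MySQL-shared-compat-*, since they conflict
- rpm -ivh MySQL-shared-compat-*.rhel5.x86_64.rpm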
2. (1) Create the configuration file /etc/config.ini on all management nodes (note: this file is required on the management nodes and is not needed on the other nodes):
- [NDBD DEFAULT]
- NoOfReplicas=2
- DataMemory=600M
- IndexMemory=100M
- [NDB_MGMD]
- id=1
- hostname=192.168.20.5
- datadir=/var/lib/mysql-cluster
- [NDB_MGMD]
- id=2
- hostname=192.168.20.6
- datadir=/var/lib/mysql-cluster
- [MYSQLD]
- id=3
- hostname=192.168.20.7
- [MYSQLD]
- id=4
- hostname=192.168.20.8
- [NDBD]
- id=5
- hostname=192.168.20.9
- [NDBD]
- id=6
- hostname=192.168.20.10
- [NDBD]
- id=7
- hostname=192.168.20.11
- [NDBD]
- id=8
- hostname=192.168.20.12
-----------------------------------------------------------------------------------------------------------
(2) Start the management daemon on the NDB_MGMD nodes (it listens locally on port 1186). Start the id=1 node first, then the id=2 node; the whole cluster then treats the id=2 node as the primary management node (it seems that whichever is started last becomes the primary management node, but I still need to verify this). The command is: ndb_mgmd -f /etc/config.ini. Note: the MySQL service does not need to be started on these servers (so there is no need to configure /etc/my.cnf on them). ndb_mgmd must be started before the services on the NDB nodes and SQL nodes are started.
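Concretely, on each management node (192.168.20.5 first, then 192.168.20.6) that amounts to the following; the netstat line is just an optional check that the daemon is listening:
- ndb_mgmd -f /etc/config.ini
- # optional: confirm the daemon is listening on port 1186
- netstat -lnt | grep 1186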
3. (1) Add the following to the existing /etc/my.cnf file on all NDB nodes:
- [mysqld]
- ndbcluster
- # IP addresses of the cluster management nodes
- ndb-connectstring=192.168.20.5,192.168.20.6
- [mysql_cluster]
- # IP addresses of the cluster management nodes
- ndb-connectstring=192.168.20.5,192.168.20.6
-----------------------------------------------------------------------------------------------------------
Note: the /etc/config.ini file is not required on the NDB nodes.
(2) On all NDB nodes, run the following the first time:
- mkdir /var/lib/mysql-cluster
- cd /var/lib/mysql-cluster
- ndbd --initial
Note: the MySQL service is not started on the NDB nodes. Normally an NDB node is started with just the "ndbd" command; the --initial parameter is only needed when the node configuration changes or in a few other situations.
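Putting that together, a routine restart and a quick check from the management side look like this (the ndb_mgm check is optional):
- # routine restart of a data node (no --initial)
- ndbd
- # optional, from a management node: confirm the node shows up as connected
- ndb_mgm -e show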
4. (1) Create a new my.cnf file on the SQL nodes:
- [mysqld]
- port=3306
- ndbcluster
- ndb-connectstring=192.168.20.5,192.168.20.6
- [ndbd]
- connect-string=192.168.20.9
- [ndbd]
- connect-string=192.168.20.10
- [ndbd]
- connect-string=192.168.20.11
- [ndbd]
- connect-string=192.168.20.12
- [ndb_mgm]
- connect-string=192.168.20.5,192.168.20.6
- [ndb_mgmd]
- config-file=/etc/config.ini
- [mysql_cluster]
- ndb-connectstring=192.168.20.5,192.168.20.6
-----------------------------------------------------------------------------------------------------------
Note: on the SQL nodes you only need to start the MySQL service (/etc/init.d/mysql start); the /etc/config.ini file is not needed there.
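An optional quick test (not part of the original steps; the database and table names here are made up) is to create an NDB table on one SQL node and read it back from the other, which confirms the data really lives in the cluster:
- mysql> CREATE DATABASE clustertest;
- mysql> CREATE TABLE clustertest.t1 (id INT PRIMARY KEY, msg VARCHAR(32)) ENGINE=NDBCLUSTER;
- mysql> INSERT INTO clustertest.t1 VALUES (1, 'hello');
- mysql> -- then, on the other SQL node:
- mysql> SELECT * FROM clustertest.t1;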
5. (1) Run the cluster management client on both management nodes: ndb_mgm
At the prompt, enter the command "show". The output shows that all four NDB nodes are connected to the management nodes, and it is identical on both management nodes:
- Connected to Management Server at: 192.168.20.5:1186
- Cluster Configuration
- ---------------------
- [ndbd(NDB)]     4 node(s)
- id=5    @192.168.20.9  (Version: 5.1.22, Nodegroup: 0, Master)
- id=6    @192.168.20.10 (Version: 5.1.22, Nodegroup: 0)
- id=7    @192.168.20.11 (Version: 5.1.22, Nodegroup: 1)
- id=8    @192.168.20.12 (Version: 5.1.22, Nodegroup: 1)
- [ndb_mgmd(MGM)] 2 node(s)
- id=1    @192.168.20.5  (Version: 5.1.22)
- id=2    @192.168.20.6  (Version: 5.1.22)
- [mysqld(API)]   2 node(s)
- id=3    @192.168.20.7  (Version: 5.1.22)
- id=4    @192.168.20.8  (Version: 5.1.22)
As you can see, all nodes are connected to the management node. If "Not connected, accepting connect from any host" appears, a node is not yet connected to the management node.
From the output of the netstat command, you can also see that all nodes are connected to the management node:
- tcp   0   0 192.168.20.5:1186   192.168.20.7:48066    ESTABLISHED
- tcp   0   0 192.168.20.5:1186   192.168.20.7:48065    ESTABLISHED
- tcp   0   0 192.168.20.5:1186   192.168.20.12:48677   ESTABLISHED
- tcp   0   0 192.168.20.5:1186   192.168.20.9:37060    ESTABLISHED
- tcp   0   0 192.168.20.5:1186   192.168.20.9:37061    ESTABLISHED
- tcp   0   0 192.168.20.5:1186   192.168.20.9:37062    ESTABLISHED
- tcp   0   0 192.168.20.5:1186   192.168.20.9:50631    ESTABLISHED
- tcp   0   0 192.168.20.5:1186   192.168.20.11:33977   ESTABLISHED
- tcp   0   0 192.168.20.5:1186   192.168.20.10:55260   ESTABLISHED
(2) Look at the connections on any NDB node (20.9 as an example). This node has connections to the other three NDB nodes (10/11/12), the management nodes (5/6) and the SQL nodes (7/8); the connections to the management nodes go to port 1186, while all the others use random ports.
- tcp   0   0 192.168.20.9:59318   192.168.20.11:49124   ESTABLISHED
- tcp   0   0 192.168.20.9:37593   192.168.20.7:33593    ESTABLISHED
- tcp   0   0 192.168.20.9:55146   192.168.20.10:46643   ESTABLISHED
- tcp   0   0 192.168.20.9:48657   192.168.20.12:46097   ESTABLISHED
- tcp   0   0 192.168.20.9:55780   192.168.20.8:41428    ESTABLISHED
- tcp   0   0 192.168.20.9:58185   192.168.20.5:1186     ESTABLISHED
- tcp   0   0 192.168.20.9:54535   192.168.20.6:1186     ESTABLISHED
(3) Look at the connections on any SQL node (20.7 as an example). Both SQL nodes are connected to management node 20.6 (management node 20.5 was started first, then 20.6):
- tcp   0   0 192.168.20.7:49726   192.168.20.6:1186     ESTABLISHED
- tcp   0   0 192.168.20.7:38498   192.168.20.10:58390   ESTABLISHED
- tcp   0   0 192.168.20.7:54636   192.168.20.12:40206   ESTABLISHED
- tcp   0   0 192.168.20.7:33593   192.168.20.9:37593    ESTABLISHED
- tcp   0   0 192.168.20.7:57676   192.168.20.11:37717   ESTABLISHED
7. The MySQL high-availability cluster is now built; next, load balancing is set up with IPVS.
Create an empty database on all MYSQL_SQL nodes: CREATE DATABASE loadbalancing;
Set permissions so that both MYSQL_LB nodes have SELECT permission (used for the heartbeat test): GRANT SELECT ON loadbalancing.* TO loadbalancing@'192.168.20.15' IDENTIFIED BY 'abcdefg'; GRANT SELECT ON loadbalancing.* TO loadbalancing@'192.168.20.16' IDENTIFIED BY 'abcdefg';
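It is worth checking from both load balancer nodes that the new account can actually reach the SQL nodes; this is the same query the probe script below will use:
- # run on mysql_lb-1 and mysql_lb-2; each should print a column named ABCDEFG
- mysql -uloadbalancing -pabcdefg -h 192.168.20.7 -e 'select "ABCDEFG"'
- mysql -uloadbalancing -pabcdefg -h 192.168.20.8 -e 'select "ABCDEFG"'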
8. (1) Load the IPVS modules on both load balancer nodes (a quick check follows the list):
- modprobe ip_vs_dh
- modprobe ip_vs_ftp
- modprobe ip_vs
- modprobe ip_vs_lblc
- modprobe ip_vs_lblcr
- modprobe ip_vs_lc
- modprobe ip_vs_nq
- modprobe ip_vs_rr
- modprobe ip_vs_sed
- modprobe ip_vs_sh
- modprobe ip_vs_wlc
- modprobe ip_vs_wrr
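A quick way to confirm the modules actually loaded:
- lsmod | grep ip_vs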
(2) Configure LVS on the load balancer nodes (20.15 and 20.16 are the two load balancer nodes, the real servers are 20.7 and 20.8, the virtual IP is 20.17, and the port is 3306). You can start /etc/init.d/piranha-gui and set up the cluster at http://localhost:3636, which eventually generates the configuration file /etc/sysconfig/ha/lvs.cf, or you can write /etc/sysconfig/ha/lvs.cf directly:
- serial_no = 37
- primary = 192.168.20.15
- service = lvs
- backup_active = 1
- backup = 192.168.20.16
- heartbeat = 1
- heartbeat_port = 539
- keepalive = 6
- deadtime = 18
- network = direct
- debug_level = NONE
- monitor_links = 1
- virtual MYSQL {
-      active = 1
-      address = 192.168.20.17 eth0:1
-      vip_nmask = 255.255.255.0
-      port = 3306
-      expect = "OK"
-      use_regex = 0
-      send_program = "/usr/local/bin/mysql_running_test %h"
-      load_monitor = none
-      scheduler = wlc
-      protocol = tcp
-      timeout = 6
-      reentry = 15
-      quiesce_server = 0
-      server mysql_sql-1 {
-          address = 192.168.20.7
-          active = 1
-          weight = 1
-      }
-      server mysql_sql-2 {
-          address = 192.168.20.8
-          active = 1
-          weight = 1
-      }
- }
Make sure the lvs.cf file is present on both load balancer nodes and that its content is identical.
The probe script /usr/local/bin/mysql_running_test on both load balancer nodes:
- #!/bin/sh
- # $1 is the IP address of the real server being tested by nanny.
- # Check whether the MySQL service on that server answers a simple query.
- TEST=`echo "select \"ABCDEFG\"" | mysql -uloadbalancing -pabcdefg -h $1 | grep ABCDEFG`
- if [ "$TEST" != "" ]; then
-     echo "OK"
- else
-     echo "FAIL"
-     # /bin/echo | mail [email protected] -s "NOTICE: $1 failed to provide MySQL service"
- fi
Note: a MySQL client is required on both load balancer nodes. How it works: lvs.cf specifies this script as the send_program, and it is actually called by the nanny process on the load balancer node; the %h parameter in lvs.cf means the script is invoked with the hostname/IP address of the real server as its argument. The script connects to the MySQL server, runs a SELECT statement that echoes the string ABCDEFG, and decides whether the real server is working by checking that the echo comes back correctly.
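Before relying on nanny, the script can also be tested by hand from either load balancer node:
- /usr/local/bin/mysql_running_test 192.168.20.7
- /usr/local/bin/mysql_running_test 192.168.20.8
- # each should print OK while the corresponding SQL node is up, FAIL otherwise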
(3) Start the LVS service: /etc/init.d/pulse start
The contents of /var/log/messages on one of the nodes:
- Dec 14:57:15 mysql_lb-1 pulse[8606]: starting pulse as MASTER
- Dec 14:59:29 mysql_lb-1 pulse[8606]: terminating due to signal 15
- Dec 14:59:30 mysql_lb-1 pulse: SIOCGIFADDR failed: Cannot assign requested address
- Dec 14:59:30 mysql_lb-1 pulse[8659]: starting pulse as MASTER
The contents of /var/log/messages on the other node:
- Dec 14:59:06 mysql_lb-2 pulse[16729]: starting pulse as BACKUP
- Dec 14:59:08 mysql_lb-2 pulse[16729]: primary inactive (link failure?): activating lvs
- Dec 14:59:08 mysql_lb-2 lvs[16731]: starting virtual service MySql active: 3306
- Dec 14:59:08 mysql_lb-2 nanny[16734]: starting LVS client monitor for 192.168.20.17:3306
- Dec 14:59:08 mysql_lb-2 lvs[16731]: create_monitor for MySql/mysql_sql-1 running as pid 16734
- Dec 14:59:08 mysql_lb-2 nanny[16737]: starting LVS client monitor for 192.168.20.17:3306
- Dec 14:59:08 mysql_lb-2 lvs[16731]: create_monitor for MySql/mysql_sql-2 running as pid 16737
- Dec 14:59:08 mysql_lb-2 nanny[16737]: making 192.168.20.8:3306 available
- Dec 14:59:08 mysql_lb-2 nanny[16734]: making 192.168.20.7:3306 available
- Dec 14:59:13 mysql_lb-2 pulse[16740]: gratuitous lvs arps finished
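With pulse running on both load balancer nodes, a final end-to-end check (an optional suggestion, not part of the original steps) is to look at the LVS routing table on the active director and connect through the virtual IP from a client that the MySQL grants allow:
- # on the active load balancer: 192.168.20.7 and 192.168.20.8 should be listed under 192.168.20.17:3306
- ipvsadm -L -n
- # from a client permitted by the grants (for example the backup load balancer in this setup):
- mysql -uloadbalancing -pabcdefg -h 192.168.20.17 -e 'select "ABCDEFG"'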