Practice: Building a MySQL Cluster with a Load Balancer

Source: Internet
Author: User
Tags: mysql, client

Server planning: The entire system runs on RHEL 5.1 Server 64-bit, with every node a Xen-based virtual machine: cluster management node × 2, SQL node × 2, data node × 4, plus web service nodes; the data nodes are organized into 2 groups of 2 machines each:

    1. Virtual machine mysql_mgm-1, 192.168.20.5: cluster management node, id=1
    2. Virtual machine mysql_mgm-2, 192.168.20.6: cluster management node, id=2
    3. Virtual machine mysql_sql-1, 192.168.20.7: SQL node (MySQL server), id=3
    4. Virtual machine mysql_sql-2, 192.168.20.8: SQL node (MySQL server), id=4
    5. Virtual machine mysql_ndb-1, 192.168.20.9: NDB data node, id=5
    6. Virtual machine mysql_ndb-2, 192.168.20.10: NDB data node, id=6
    7. Virtual machine mysql_ndb-3, 192.168.20.11: NDB data node, id=7
    8. Virtual machine mysql_ndb-4, 192.168.20.12: NDB data node, id=8
    9. Virtual machine mysql_lb-1, 192.168.20.15: LVS load balancer node 1
    10. Virtual machine mysql_lb-2, 192.168.20.16: LVS load balancer node 2
    11. The virtual IP of the cluster is 192.168.20.17
    12. Load balancing is done with the software that ships with RHEL (Piranha/LVS)

-----------------------------------------------------------------------------------------------------------
Installation process:
1. Install MySQL on all nodes: I did not download the MySQL source code and compile it myself; with this many machines, compiling everywhere is too cumbersome. Instead I went to mysql.com and downloaded the community edition built for RHEL 5 Server 64-bit: http://dev.mysql.com/downloads/m ... Hel5-x86-64bit-rpms. Download all the RPM packages listed there and install them (of the shared libraries and shared compatibility libraries, only one of the two can be installed). This installs the MySQL service, the cluster-related tools, and so on; very convenient.
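As a rough sketch, the per-node installation looks like this (the directory and the RPM file-name patterns below are placeholders; substitute the files you actually downloaded):

    # Install the downloaded community RPMs on every node.
    # Note: MySQL-shared and MySQL-shared-compat conflict; install only one of the two.
    cd /path/to/downloaded/rpms
    rpm -Uvh MySQL-server-community-*.x86_64.rpm \
             MySQL-client-community-*.x86_64.rpm \
             MySQL-shared-community-*.x86_64.rpm
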
2. (1) Create the configuration file /etc/config.ini on all management nodes (note: this file is required on the management nodes and is not required on the other nodes):

    [NDBD DEFAULT]
    NoOfReplicas=2
    DataMemory=600M
    IndexMemory=100M
    [NDB_MGMD]
    id=1
    hostname=192.168.20.5
    datadir=/var/lib/mysql-cluster
    [NDB_MGMD]
    id=2
    hostname=192.168.20.6
    datadir=/var/lib/mysql-cluster
    [MYSQLD]
    id=3
    hostname=192.168.20.7
    [MYSQLD]
    id=4
    hostname=192.168.20.8
    [NDBD]
    id=5
    hostname=192.168.20.9
    [NDBD]
    id=6
    hostname=192.168.20.10
    [NDBD]
    id=7
    hostname=192.168.20.11
    [NDBD]
    id=8
    hostname=192.168.20.12

-----------------------------------------------------------------------------------------------------------

(2) Start the management daemon on the NDB_MGMD nodes (it listens locally on port 1186). Start the id=1 node first and then the id=2 node; in this setup the whole cluster took the id=2 node as the primary management node (it may be that whichever starts last becomes the primary; I still need to look into this):

    ndb_mgmd -f /etc/config.ini

Note: the MySQL service does not need to be started on these servers (so /etc/my.cnf does not need to be configured there). ndb_mgmd must be running before the services on the NDB nodes and SQL nodes are started.
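To confirm the management daemon is up, check that port 1186 is listening (a quick sketch):

    # on each management node
    netstat -lnt | grep 1186
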
3. (1) On all NDB nodes, add the following to the existing /etc/my.cnf file:

    [mysqld]
    ndbcluster
    # IP addresses of the cluster management nodes
    ndb-connectstring=192.168.20.5,192.168.20.6
    [mysql_cluster]
    # IP addresses of the cluster management nodes
    ndb-connectstring=192.168.20.5,192.168.20.6

-----------------------------------------------------------------------------------------------------------

Note: the /etc/config.ini file is not required on the NDB nodes.
(2) On each NDB node, for the first start, execute:

    mkdir /var/lib/mysql-cluster
    cd /var/lib/mysql-cluster
    ndbd --initial

Note: the MySQL service is not started on the NDB nodes. Normally an NDB node is started with the plain "ndbd" command; the --initial parameter is only needed for the first start, after a node change, or in similar situations.
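You can also confirm that ndbd is running on each data node (a quick sketch; ndbd normally shows up as two processes, an angel and a worker):

    # on each NDB node
    pgrep -l ndbd
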
4. (1) Create a new /etc/my.cnf file on the SQL nodes:

    [mysqld]
    port=3306
    ndbcluster
    ndb-connectstring=192.168.20.5,192.168.20.6
    [ndbd]
    connect-string=192.168.20.9
    [ndbd]
    connect-string=192.168.20.10
    [ndbd]
    connect-string=192.168.20.11
    [ndbd]
    connect-string=192.168.20.12
    [ndb_mgm]
    connect-string=192.168.20.5,192.168.20.6
    [ndb_mgmd]
    config-file=/etc/config.ini
    [mysql_cluster]
    ndb-connectstring=192.168.20.5,192.168.20.6

-----------------------------------------------------------------------------------------------------------

Note: on the SQL nodes only the MySQL service needs to be started: /etc/init.d/mysql start; no cluster daemons (ndbd or ndb_mgmd) need to run there.
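To verify that the SQL nodes really store data in the cluster, you can create a test table with the NDBCLUSTER engine on one SQL node and read it from the other (a sketch; the clustertest database and table t1 are made-up names):

    # on mysql_sql-1 (192.168.20.7)
    mysql -e "CREATE DATABASE IF NOT EXISTS clustertest;"
    mysql -e "CREATE TABLE clustertest.t1 (i INT) ENGINE=NDBCLUSTER;"
    mysql -e "INSERT INTO clustertest.t1 VALUES (1);"
    # on mysql_sql-2 (192.168.20.8); if the database is not visible there,
    # create it on that node as well -- the NDB table data lives on the data nodes
    mysql -e "SELECT * FROM clustertest.t1;"

If the SELECT on the second SQL node returns the row inserted on the first, the cluster is working end to end.
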
5. (1) Run the cluster management client on either management node: ndb_mgm
At the prompt, enter the command "show". The output shows that all four NDB nodes are connected to the management node, and it is identical on both management nodes:

    Connected to Management Server at: 192.168.20.5:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)]     4 node(s)
    id=5    @192.168.20.9   (Version: 5.1.22, Nodegroup: 0, Master)
    id=6    @192.168.20.10  (Version: 5.1.22, Nodegroup: 0)
    id=7    @192.168.20.11  (Version: 5.1.22, Nodegroup: 1)
    id=8    @192.168.20.12  (Version: 5.1.22, Nodegroup: 1)

    [ndb_mgmd(MGM)] 2 node(s)
    id=1    @192.168.20.5   (Version: 5.1.22)
    id=2    @192.168.20.6   (Version: 5.1.22)

    [mysqld(API)]   2 node(s)
    id=3    @192.168.20.7   (Version: 5.1.22)
    id=4    @192.168.20.8   (Version: 5.1.22)

As you can see, all nodes are connected to the management node. If a line shows "not connected, accepting connect from any host", that node has not yet connected to the management node.
The output of the netstat command also shows all nodes connected to the management node:

  tcp   0   0 192.168.20.5:1186    192.168.20.7:48066    ESTABLISHED
  tcp   0   0 192.168.20.5:1186    192.168.20.7:48065    ESTABLISHED
  tcp   0   0 192.168.20.5:1186    192.168.20.12:48677   ESTABLISHED
  tcp   0   0 192.168.20.5:1186    192.168.20.9:37060    ESTABLISHED
  tcp   0   0 192.168.20.5:1186    192.168.20.9:37061    ESTABLISHED
  tcp   0   0 192.168.20.5:1186    192.168.20.9:37062    ESTABLISHED
  tcp   0   0 192.168.20.5:1186    192.168.20.9:50631    ESTABLISHED
  tcp   0   0 192.168.20.5:1186    192.168.20.11:33977   ESTABLISHED
  tcp   0   0 192.168.20.5:1186    192.168.20.10:55260   ESTABLISHED

(2) Looking at the connections on any NDB node (take 20.9 as an example), you can see that this node has connections to the other three NDB nodes (10/11/12), the management nodes (5/6), and the SQL nodes (7/8). Apart from the connections to the management nodes, which go to port 1186, all the other connections use random ports.

  tcp   0   0 192.168.20.9:59318   192.168.20.11:49124   ESTABLISHED
  tcp   0   0 192.168.20.9:37593   192.168.20.7:33593    ESTABLISHED
  tcp   0   0 192.168.20.9:55146   192.168.20.10:46643   ESTABLISHED
  tcp   0   0 192.168.20.9:48657   192.168.20.12:46097   ESTABLISHED
  tcp   0   0 192.168.20.9:55780   192.168.20.8:41428    ESTABLISHED
  tcp   0   0 192.168.20.9:58185   192.168.20.5:1186     ESTABLISHED
  tcp   0   0 192.168.20.9:54535   192.168.20.6:1186     ESTABLISHED

(3) Looking at the connections on any SQL node (take 20.7 as an example), you can see that both SQL nodes are connected to management node 20.6 (management node 20.5 was started first, then 20.6):

  tcp   0   0 192.168.20.7:49726   192.168.20.6:1186     ESTABLISHED
  tcp   0   0 192.168.20.7:38498   192.168.20.10:58390   ESTABLISHED
  tcp   0   0 192.168.20.7:54636   192.168.20.12:40206   ESTABLISHED
  tcp   0   0 192.168.20.7:33593   192.168.20.9:37593    ESTABLISHED
  tcp   0   0 192.168.20.7:57676   192.168.20.11:37717   ESTABLISHED

7. The MySQL high-availability cluster is now built; next, the load balancer is built with IPVS.
Create an empty database on all mysql_sql nodes: CREATE DATABASE loadbalancing;
Grant the mysql_lb nodes SELECT permission on it (used for heartbeat testing):

    GRANT SELECT ON loadbalancing.* TO 'loadbalancing'@'192.168.20.15' IDENTIFIED BY 'abcdefg';
    GRANT SELECT ON loadbalancing.* TO 'loadbalancing'@'192.168.20.16' IDENTIFIED BY 'abcdefg';

(The user name and password must match the ones used by the probe script below.)
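From each load balancer node you can check that the grant works before wiring up LVS (a quick sketch):

    # run on mysql_lb-1 and mysql_lb-2; each command should print a row containing abcdefg
    echo "select 'abcdefg'" | mysql -uloadbalancing -pabcdefg -h 192.168.20.7
    echo "select 'abcdefg'" | mysql -uloadbalancing -pabcdefg -h 192.168.20.8
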
8. (1) Load the IPVS modules on both load balancer nodes (a quick check follows the list):

    modprobe ip_vs_dh
    modprobe ip_vs_ftp
    modprobe ip_vs
    modprobe ip_vs_lblc
    modprobe ip_vs_lblcr
    modprobe ip_vs_lc
    modprobe ip_vs_nq
    modprobe ip_vs_rr
    modprobe ip_vs_sed
    modprobe ip_vs_sh
    modprobe ip_vs_wlc
    modprobe ip_vs_wrr
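
After loading, confirm that the modules are present:

    # should list ip_vs and the scheduler modules loaded above
    lsmod | grep ip_vs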

(2) Configure LVS on the load balancer nodes (20.15 and 20.16 are the two load balancer nodes, the real servers are 20.7 and 20.8, the virtual IP is 20.17, and the port is 3306). You can start /etc/init.d/piranha-gui and set the cluster up at http://localhost:3636, which ultimately generates the configuration file /etc/sysconfig/ha/lvs.cf, or you can write /etc/sysconfig/ha/lvs.cf directly:

  serial_no = 37
  primary = 192.168.20.15
  service = lvs
  backup_active = 1
  backup = 192.168.20.16
  heartbeat = 1
  heartbeat_port = 539
  keepalive = 6
  deadtime = 18
  network = direct
  debug_level = NONE
  monitor_links = 1
  virtual MYSQL {
      active = 1
      address = 192.168.20.17 eth0:1
      vip_nmask = 255.255.255.0
      port = 3306
      expect = "OK"
      use_regex = 0
      send_program = "/usr/local/bin/mysql_running_test %h"
      load_monitor = none
      scheduler = wlc
      protocol = tcp
      timeout = 6
      reentry = 15
      quiesce_server = 0
      server mysql_sql-1 {
          address = 192.168.20.7
          active = 1
          weight = 1
      }
      server mysql_sql-2 {
          address = 192.168.20.8
          active = 1
          weight = 1
      }
  }

You must make sure that the lvs.cf file exists on both load balancer nodes and that its content is identical; one way to keep the two copies in sync is sketched below.
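
    # run on mysql_lb-1 after editing lvs.cf
    scp /etc/sysconfig/ha/lvs.cf 192.168.20.16:/etc/sysconfig/ha/lvs.cf
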
The probe script /usr/local/bin/mysql_running_test, also on both load balancer nodes:

    #!/bin/sh
    # We use $1 as the argument in the test, which will be the various IPs
    # of the real servers in the cluster.
    # Check for the MySQL service.
    TEST=`echo "select 'abcdefg' as abcdefg" | mysql -uloadbalancing -pabcdefg -h $1 | grep abcdefg`
    if [ "$TEST" != "" ]; then
        echo "OK"
    else
        echo "FAIL"
        # /bin/echo | mail [email protected] -s "NOTICE: $1 failed to provide service"
    fi

Note: a MySQL client is required on both load balancer nodes. How it works: lvs.cf names this script as the send_program, and the nanny process on the load balancer node calls it; the %h parameter in lvs.cf is replaced with the hostname/IP of a real server each time the script is invoked. The script connects to that MySQL server and executes a SELECT statement that echoes the string abcdefg; whether the echo comes back correctly determines whether the real server is considered healthy.
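
You can test the probe by hand, calling it the same way nanny does with %h expanded (a sketch):

    # run on a load balancer node; prints OK when the real server answers
    /usr/local/bin/mysql_running_test 192.168.20.7
    /usr/local/bin/mysql_running_test 192.168.20.8
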
(3) Start the LVS service: /etc/init.d/pulse start (a verification sketch follows the log excerpts).
The contents of /var/log/messages on one of the nodes:

    Dec 14:57:15 mysql_lb-1 pulse[8606]: starting pulse as MASTER
    Dec 14:59:29 mysql_lb-1 pulse[8606]: terminating due to signal 15
    Dec 14:59:30 mysql_lb-1 pulse: SIOCGIFADDR failed: Cannot assign requested address
    Dec 14:59:30 mysql_lb-1 pulse[8659]: starting pulse as MASTER

The contents of /var/log/messages on the other node:

    Dec 14:59:06 mysql_lb-2 pulse[16729]: starting pulse as BACKUP
    Dec 14:59:08 mysql_lb-2 pulse[16729]: primary inactive (link failure?): activating lvs
    Dec 14:59:08 mysql_lb-2 lvs[16731]: starting virtual service MYSQL active: 3306
    Dec 14:59:08 mysql_lb-2 nanny[16734]: starting LVS client monitor for 192.168.20.17:3306
    Dec 14:59:08 mysql_lb-2 lvs[16731]: create_monitor for MYSQL/mysql_sql-1 running as pid 16734
    Dec 14:59:08 mysql_lb-2 nanny[16737]: starting LVS client monitor for 192.168.20.17:3306
    Dec 14:59:08 mysql_lb-2 lvs[16731]: create_monitor for MYSQL/mysql_sql-2 running as pid 16737
    Dec 14:59:08 mysql_lb-2 nanny[16737]: making 192.168.20.8:3306 available
    Dec 14:59:08 mysql_lb-2 nanny[16734]: making 192.168.20.7:3306 available
    Dec 14:59:13 mysql_lb-2 pulse[16740]: gratuitous lvs arps finished
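
Once pulse is running on both nodes, the IPVS table on the active load balancer should show the virtual service with both real servers (a sketch):

    # on the active load balancer: 192.168.20.17:3306 should be listed
    # with 192.168.20.7 and 192.168.20.8 as real servers, scheduled by wlc
    ipvsadm -L -n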

