MySQL is one of the most widely used databases, while GlusterFS is an open-source distributed file system. A major problem with MySQL is the single point of failure: when a MySQL node fails, the business systems that depend on it go down with it. MySQL replication, MySQL Cluster, and MySQL + Galera all try to solve this problem to varying degrees. This article explores another approach: using the distributed file system GlusterFS to provide high availability for MySQL.
1. Install Ubuntu
First, set up a test environment: install two Ubuntu 13.10 virtual machines on a single physical machine (a laptop). Either VMware or VirtualBox can be used as the virtualization software.
Virtual machine specifications:
- 1 CPU
- 1 GB memory
- 2 disks: a 20 GB disk to install the system, and a 10 GB disk reserved for Gluster. The second disk can be left untouched during installation; after installation, use fdisk to partition and format it.
- Other settings: keep the two VMs identically configured, e.g. use the same time zone.
After Ubuntu is installed, you can create a snapshot to quickly recover from system crashes.
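With VirtualBox, for example, this can be done from the command line (a sketch, assuming the VM is named u1; VMware has an equivalent snapshot feature in its UI):
$ VBoxManage snapshot u1 take fresh-install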
2. Configure Ubuntu
Before installing Gluster and MySQL, configure the Ubuntu virtual machines.
2.1 Network
After installation, the virtual machine uses DHCP by default. To avoid IP address changes, use a static IP address.
Click the network connection icon in the upper-right corner -> Edit Connections.
Go to the IPv4 Settings tab, select the Manual method, and add the following address information (adjust to your own network):
- Address: 192.168.53.218
- Mask: 255.255.255.0
- Gateway: 192.168.53.2
- DNS: 192.168.53.2
If you do not know the current system's IP address, gateway, or DNS, run nm-tool to display the system's network information.
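Alternatively, the same static configuration can be applied without the GUI by editing /etc/network/interfaces (a minimal sketch, assuming the interface is named eth0 and is not managed by network-manager; adjust the addresses to your network):
$ sudo vi /etc/network/interfaces  # add:
auto eth0
iface eth0 inet static
address 192.168.53.218
netmask 255.255.255.0
gateway 192.168.53.2
dns-nameservers 192.168.53.2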
2.2 Disk
Partition the second disk, create a file system, and mount it:
$ sudo fdisk /dev/sdb  # in fdisk, type n to create a new partition (accept the defaults), then w to write the table and exit
$ sudo mkfs.ext4 /dev/sdb1
$ sudo mkdir -p /data/gv0/brick1
$ sudo vi /etc/fstab  # add the following line to /etc/fstab
/dev/sdb1 /data/gv0/brick1 ext4 defaults 1 2
$ sudo mount -a && mount
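To confirm that the new file system is mounted where expected:
$ df -h /data/gv0/brick1  # should show /dev/sdb1, roughly 10 GB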
2.3 Other configurations
2.3.1 sudoers
Modify sudoers to allow passwordless sudo. Add the following lines to /etc/sudoers (u1 is the user name; replace it with your own). The first line covers the single user u1, the second covers all members of the sudo group; either is sufficient.
u1 ALL=(ALL:ALL) NOPASSWD: ALL
%sudo ALL=(ALL:ALL) NOPASSWD: ALL
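A safer way to apply this change is via visudo, which validates the syntax before saving and so prevents a broken sudoers file from locking you out of sudo:
$ sudo visudo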
2.3.2 ssh
$ sudo apt-get install ssh
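After ssh is installed on both VMs, it is worth verifying that the machines can reach each other (assuming u1 is your user name and the peer's IP is as below):
$ ssh u1@192.168.53.221 hostname  # should print the peer's host name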
3. Install Gluster
First install Gluster's build dependencies. If a package is missing on your system, use apt-cache search to find its name.
$ sudo apt-get install -y flex bison openssl libssl-dev libreadline6 libreadline6-dev systemtap systemtap-sdt-dev
Compile and install the latest Gluster (3.4.2 at the time of writing):
$ wget http://download.gluster.org/pub/gluster/glusterfs/3.4/3.4.2/glusterfs-3.4.2.tar.gz
$ tar zxf glusterfs-3.4.2.tar.gz
$ cd glusterfs-3.4.2
$ ./configure --enable-debug
$ make && sudo make install
$ sudo ldconfig
$ gluster --version
Now repeat the installation on the other virtual machine. Again, it is worth creating a snapshot so you can quickly restore a working state if a problem occurs later.
After the installation, the host name and IP address of the two hosts are as follows:
- u1: 192.168.53.218
- u2: 192.168.53.221
4. Configure Gluster replication (AFR) and run a simple test
4.1 Prepare bricks
Brick is a Gluster term for the basic building block of a volume; a brick is simply a directory on one of the hosts.
On both virtual machines:
$ sudo mkdir -p /data/gv0/brick1/test
4.2 Build a trusted storage pool from the two virtual machines
Run the following command on 192.168.53.218:
$ sudo gluster peer probe 192.168.53.221  # replace with the other VM's IP
$ sudo gluster peer status
Number of Peers: 1
Hostname: 192.168.53.221
Port: 24007
Uuid: e1de158a-1a81-4d6c-af55-b548d4c4c174
State: Peer in Cluster (Connected)
4.3 Create a replicated volume
Run the following command on any virtual machine:
$ sudo gluster volume create testvol replica 2 192.168.53.218:/data/gv0/brick1/test 192.168.53.221:/data/gv0/brick1/test
# Disable NFS (this step is optional)
$ sudo gluster volume set testvol nfs.disable on
$ sudo gluster volume start testvol
$ sudo gluster volume status
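To inspect the volume's type, replica count, and brick list:
$ sudo gluster volume info testvol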
Mount the newly created Gluster volume on one of the virtual machines (for example, 192.168.53.218):
$ sudo mkdir -p /mnt/testvol
$ sudo mount -t glusterfs 192.168.53.218:/testvol /mnt/testvol
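To make the mount survive a reboot, a line like the following can be added to /etc/fstab (_netdev delays the mount until the network is up):
192.168.53.218:/testvol /mnt/testvol glusterfs defaults,_netdev 0 0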
4.4 Start and stop the Gluster service
- Start: sudo /etc/init.d/glusterd start
- Stop, step 1 (unmount): sudo umount /path/to/dir
- Stop, step 2 (stop the volume): sudo gluster volume stop <volname>
- Stop, step 3 (stop the daemon): sudo /etc/init.d/glusterd stop
4.5 Test the replication function of Gluster
On the virtual machine where the Gluster volume is mounted, create a file:
$ echo "a\nb\nc" > /mnt/testvol/test.txt
Observe the brick directories: test.txt should appear under /data/gv0/brick1/test on both machines. Now delete the file directly from one brick directory:
$ cd /data/gv0/brick1/test && rm test.txt
Gluster restores the deleted file automatically. (Not fully automatically: the file is healed only when it is next accessed; until then, Gluster leaves the deleted copy missing.)
$ ls /data/gv0/brick1/test/test.txt  # accessing the file makes Gluster restore it
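Rather than waiting for the file to be accessed, self-healing can also be triggered explicitly with the heal command (available since Gluster 3.3):
$ sudo gluster volume heal testvol        # trigger healing of pending entries
$ sudo gluster volume heal testvol info   # list entries that still need healing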
4.5.1 Files not restored automatically due to DNS issues
During testing we found that after a file was deleted on a brick, Gluster could not restore it automatically, and the logs filled with connection-timeout and connection-failure errors. Reading the code showed that Gluster's self-heal path resolves peers via DNS, even though we had joined the peers by IP address.
Ubuntu ships with dnsmasq, a small DNS server that can resolve names for LAN machines not present in global DNS. dnsmasq's typical scenario is a home LAN behind ADSL, but it suits any small network (fewer than 1000 clients). Most importantly for this test, it can serve entries from the /etc/hosts configuration file. Ubuntu's network-manager starts dnsmasq automatically, but by default that instance ignores /etc/hosts. Create the following file to make it take effect:
$ cat /etc/NetworkManager/dnsmasq.d/hosts.conf
addn-hosts=/etc/hosts
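Note that dnsmasq can only serve what /etc/hosts contains, so both machines need entries for the two hosts, along these lines:
192.168.53.218 u1
192.168.53.221 u2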
$ sudo service networking restart
$ sudo /etc/init.d/dns-clean restart
After the change, the following commands should all resolve the host name to the same IP address:
$ host u1
$ nslookup u1
$ getent ahosts u1
5. Install MySQL
On all VMs, create the Gluster brick directory that will hold the MySQL data:
$ sudo mkdir -p /data/gv0/brick1/mysqldata
Create a new volume:
$ sudo gluster volume create mysqldata replica 2 192.168.53.218:/data/gv0/brick1/mysqldata 192.168.53.221:/data/gv0/brick1/mysqldata
Start the newly created volume:
$ sudo gluster volume start mysqldata
Log on to one machine and mount the Gluster volume:
$ sudo mkdir -p /mnt/mysqldata
$ sudo mount -t glusterfs 192.168.53.218:/mysqldata /mnt/mysqldata
Install MySQL, pointing its data directory at the newly mounted volume. First download the MySQL binary distribution, then:
$ sudo groupadd mysql
$ sudo useradd -r -g mysql mysql
$ cd /usr/local
$ sudo tar zxvf /path/to/mysql-VERSION-OS.tar.gz
$ sudo ln -s full-path-to-mysql-VERSION-OS mysql
$ cd /usr/local/mysql
$ sudo chown -R mysql:mysql .
$ sudo apt-get install -y libaio-dev  # MySQL requires libaio.so
# The preceding steps must be performed on all VMs.
Increase the maximum number of connections; this will be needed in later tests:
mysql> SHOW VARIABLES LIKE "max_connections";
mysql> SET GLOBAL max_connections = 300;  # does not survive a restart
To make the change permanent, add it to the configuration file instead:
max_connections = 300
Initialize the MySQL data directory on the Gluster mount:
$ sudo /usr/local/mysql/scripts/mysql_install_db --user=mysql \
    --basedir=/usr/local/mysql --datadir=/mnt/mysqldata \
    --defaults-file=/usr/local/mysql/my.cnf
Start MySQL:
$ sudo /usr/local/mysql/support-files/mysql.server start \
    --datadir=/mnt/mysqldata --log-error=/usr/local/mysql/mysql.error
Configure MySQL:
$ /usr/local/mysql/bin/mysqladmin -u root password ''
$ echo 'bind-address = 0.0.0.0' >> /usr/local/mysql/my.cnf
$ /usr/local/mysql/bin/mysql -uroot  # then run the following statements:
CREATE USER 'yydzero'@'localhost' IDENTIFIED BY 'goodluck';
GRANT ALL PRIVILEGES ON *.* TO 'yydzero'@'localhost' WITH GRANT OPTION;
CREATE USER 'yydzero'@'%' IDENTIFIED BY 'goodluck';
GRANT ALL PRIVILEGES ON *.* TO 'yydzero'@'%' WITH GRANT OPTION;
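From the other VM, verify that the new account can log in over the network (using the client from this install):
$ /usr/local/mysql/bin/mysql -h 192.168.53.218 -uyydzero -pgoodluck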
6. Test MySQL + GlusterFS
mysql> CREATE DATABASE gluster;
mysql> USE gluster;
mysql> CREATE TABLE IF NOT EXISTS test1 (i int, v varchar(1024), bb blob, INDEX USING BTREE (i));
mysql> INSERT INTO test1 (i, v) VALUES (1, 'x9byod');
mysql> SELECT * FROM test1;
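Because mysqldata is a replica-2 volume, the files backing the new database should now appear in the brick directory on both machines; a quick check (directory layout per this setup):
$ ls /data/gv0/brick1/mysqldata/gluster/   # run on both u1 and u2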
With this, MySQL runs on the GlusterFS distributed file system. Later we will run high-concurrency and high-failure-rate tests against this setup to verify the stability of Gluster + MySQL.