Using Keepalived + LVS + iSCSI + GFS on RHEL6 to Build a High-Availability, Load-Balanced Web Cluster


The goal is a high-availability, load-balanced web server cluster suitable for a LAMP architecture.
The front end uses two servers as the LVS + Keepalived load schedulers; N servers in the middle act as Apache + PHP application servers; two more servers form a MySQL high-availability pair; and finally one machine serves as the file server.
A total of 7 virtual machines are used.
System environment: RHEL6.0, KVM virtual machines
LVS scheduling servers: 192.168.0.1, 192.168.0.2
Apache servers: 192.168.0.3, 192.168.0.4
MySQL servers: 192.168.0.7, 192.168.0.8
File server: 192.168.0.10
-----------------------------------------------------------------------------------------------------------
I. LVS Scheduling Servers
-----------------------------------------------------------------------------------------------------------
Keepalived + LVS
Server environment:
System: RHEL6.0 (kernel 2.6.32-71.el6.i686)
Virtual IP: 192.168.0.50
Load balancer: 192.168.0.1
Backup: 192.168.0.2
Real server 1: 192.168.0.3
Real server 2: 192.168.0.4

Install and configure the packages on both the master and the backup:
yum install -y ipvsadm kernel-devel
wget http://www.keepalived.org/software/keepalived-1.1.20.tar.gz
tar zxf keepalived-1.1.20.tar.gz
cd keepalived-1.1.20
./configure --prefix=/usr/local/keepalived --with-kernel-dir=/usr/src/kernels/2.6.32-71.el6.i686/

Keepalived configuration summary (from ./configure):
***********************************************************
Compiler: gcc
Compiler flags: -g -O2
Extra Lib: -lpopt -lssl -lcrypto
Use IPVS Framework: Yes        # LVS support must be enabled at compile time
IPVS sync daemon support: Yes
Use VRRP Framework: Yes
Use LinkWatch: No
Use Debug flags: No
***********************************************************
make
make install

mkdir /etc/keepalived
ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
ln -s /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
ln -s /usr/local/keepalived/bin/* /bin/
ln -s /usr/local/keepalived/sbin/* /sbin/

vi /etc/rc.local
modprobe ip_vs

vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
    notification_email {
        root@example.com        # address that receives alerts; multiple entries are allowed
    }
    notification_email_from root@localhost
    smtp_server 127.0.0.1       # use the local machine to relay mail
    smtp_connect_timeout 30
    router_id LVS_DEVEL         # identifier of this load balancer, used in alert mails
}
vrrp_instance VI_1 {
    state MASTER                # set to BACKUP on the backup machine; the actual role is
                                # decided by priority: if this node's priority drops below
                                # the backup's, MASTER status is lost
    interface eth0              # network interface monitored for HA
    virtual_router_id 50        # must be identical on master and backup
    priority 150                # host priority; set to 50 on the backup
    advert_int 1                # seconds between master/backup advertisements
    authentication {
        auth_type PASS          # authentication used during master/backup switchover
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.50            # HA virtual IP; multiple addresses are allowed
    }
}




virtual_server 192.168.0.50 80 {
    delay_loop 6                # check realserver state every 6 seconds
    lb_algo rr                  # LVS scheduling algorithm: round robin
    lb_kind DR                  # LVS forwarding method: direct routing
#   persistence_timeout 50      # keep connections from the same IP on the same realserver for 50 seconds
    protocol TCP                # use TCP to check realserver state
    real_server 192.168.0.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3       # connection timeout in seconds
            nb_get_retry 3          # number of retries
            delay_before_retry 3    # seconds between retries
        }
    }
    real_server 192.168.0.4 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
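Once both schedulers are up, a quick way to tell which node currently holds the VIP is to check `ip addr` output; a minimal sketch, assuming the VIP 192.168.0.50 and interface eth0 from the environment above:

```shell
#!/bin/sh
# Sketch: report whether this scheduler currently holds the VIP.
# Assumes VIP 192.168.0.50 on eth0; adjust for your environment.

# has_vip reads `ip addr` output on stdin and succeeds if the VIP is bound.
has_vip() {
    grep -q "inet 192.168.0.50"
}

if ip addr show dev eth0 2>/dev/null | has_vip; then
    echo "this node holds the VIP"
else
    echo "VIP is on the peer (or keepalived is not running)"
fi
```

Stopping keepalived on the master should move the VIP to the backup within a few advert_int intervals, which this check makes visible.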


Run the following on each realserver:
vi /etc/sysctl.conf
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
sysctl -p
ifconfig eth0:0 192.168.0.50 netmask 255.255.255.255 up
route add -host 192.168.0.50 dev eth0:0

vi /etc/rc.local
ifconfig eth0:0 192.168.0.50 netmask 255.255.255.255 up
route add -host 192.168.0.50 dev eth0:0
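The realserver steps above can be collected into one script; a sketch, assuming the VIP 192.168.0.50 (it only applies the settings when run as root on a machine with eth0):

```shell
#!/bin/sh
# Sketch: apply the LVS-DR realserver settings in one step.
# Assumes VIP 192.168.0.50; adjust for your environment.
VIP=192.168.0.50

# Emit the ARP-suppression settings for /etc/sysctl.conf, so the
# realserver does not answer ARP requests for the VIP itself.
arp_sysctl() {
    cat <<EOF
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
EOF
}

# Only touch the system when running as root and eth0 exists.
if [ "$(id -u)" -eq 0 ] && ifconfig eth0 >/dev/null 2>&1; then
    arp_sysctl >> /etc/sysctl.conf
    sysctl -p
    ifconfig eth0:0 "$VIP" netmask 255.255.255.255 up
    route add -host "$VIP" dev eth0:0
fi
```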


echo `hostname` > /var/www/html/index.html
service httpd start
Test:
Access http://192.168.0.50. If the page alternates between the two realservers, the load balancing works; stop keepalived on the master and the VIP should fail over to the backup.
You can also inspect connection details with ipvsadm -Lnc.
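The round-robin behaviour can also be checked from a client in a loop; a minimal sketch, assuming curl is installed and each realserver serves its own hostname as set up above:

```shell
#!/bin/sh
# Sketch: fetch the VIP repeatedly and count the responses per realserver.

# fetch retrieves one page; kept as a separate function so it is easy to stub.
fetch() {
    curl -s --connect-timeout 2 "$1"
    echo
}

# fetch_loop fetches URL $1 a total of $2 times, one response per line.
fetch_loop() {
    i=0
    while [ "$i" -lt "$2" ]; do
        fetch "$1"
        i=$((i + 1))
    done
}

# With lb_algo rr, six requests should split 3/3 between the realservers.
fetch_loop http://192.168.0.50/ 6 | sort | uniq -c
```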
-------------------------------------------------------------------------------------------------------------------
II. MySQL Servers
-------------------------------------------------------------------------------------------------------------------
MySQL dual-master high availability
-------------------------------------
System Environment:
RHEL6.0 i386
VIP: 192.168.0.51
Real server 1: 192.168.0.7
Real server 2: 192.168.0.8
-------------------------------------
1. Install MySQL on server1 and server2 and edit the configuration file:
yum install -y mysql-server
vi /etc/my.cnf:
[mysqld]
log-bin = mysql-bin
server-id = 1          # use server-id = 2 on server2

-------------------------------------
2. Set up master-slave replication between server1 and server2 in both directions (dual master).
server1:
mysql> grant replication slave on *.* to 'cluster'@'%' identified by 'cluster';
mysql> show master status;
-------------------------
mysql-bin.000001 236
-------------------------
------------------------------------
server2:
mysql> change master to
    -> master_host='192.168.0.7',
    -> master_user='cluster',
    -> master_password='cluster',
    -> master_log_file='mysql-bin.000001',
    -> master_log_pos=236;

mysql> start slave;
mysql> show slave status\G
Repeat the same steps in the opposite direction (grant on server2, change master on server1) so that the two servers replicate from each other.
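Replication health on each server can be checked by parsing `SHOW SLAVE STATUS`; a sketch, assuming the mysql client can connect locally without extra credentials (adjust the login options for your setup):

```shell
#!/bin/sh
# Sketch: succeed only if both replication threads report Yes.

# replication_ok reads `SHOW SLAVE STATUS\G` output on stdin and
# checks that both the IO thread and the SQL thread are running.
replication_ok() {
    status=$(cat)
    echo "$status" | grep -q 'Slave_IO_Running: Yes' &&
        echo "$status" | grep -q 'Slave_SQL_Running: Yes'
}

if mysql -e 'SHOW SLAVE STATUS\G' 2>/dev/null | replication_ok; then
    echo "replication healthy"
else
    echo "replication broken or not configured"
fi
```

Run it on both server1 and server2; in a dual-master setup each side is a slave of the other, so both must report healthy.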
--------------------------------------
3. Install the packages
yum install -y gcc popt-devel kernel-devel openssl-devel ipvsadm make
tar xf keepalived-*.tar.gz
cd keepalived-*

./configure --prefix=/usr/local/keepalived --with-kernel-dir=/usr/src/kernels/2.6.32-71.el6.i686
make && make install

modprobe ip_vs        # if this module is not loaded, keepalived cannot find the IPVS framework after it starts
mkdir /etc/keepalived/
ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
ln -s /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
ln -s /usr/local/keepalived/bin/* /bin/
ln -s /usr/local/keepalived/sbin/* /sbin/
---------------------------------------------------
4. Edit the server1/server2 configuration file
server1:
vi /etc/keepalived/keepalived.conf:


! Configuration File for keepalived

global_defs {
    notification_email {
        root@example.com
    }
    notification_email_from root@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id MYSQL-HA          # must be the same on server2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51        # must match server2, and must differ from any other
                                # cluster on the same network, otherwise a conflict occurs
    priority 100                # set to 50 on server2
    advert_int 1
    nopreempt                   # non-preemptive mode; set only on the higher-priority
                                # server1, comment out on server2
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.51
    }
}

virtual_server 192.168.0.51 3306 {
    delay_loop 2
    lb_algo wrr
    lb_kind DR
    persistence_timeout 60
    protocol TCP

    real_server 192.168.0.7 3306 {
        # on server2, change this to 192.168.0.8 (server2's own address)
        weight 3
        notify_down /usr/local/keepalived/bin/mysql.sh
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 3306
        }
    }
}
--------------------------------
Add this check script on both server1 and server2 so that keepalived is stopped on the local machine when MySQL stops working.
This kicks the failed machine out of the cluster (keepalived on each machine lists only the local machine as its realserver).
vi /usr/local/keepalived/bin/mysql.sh:
#!/bin/sh
pkill keepalived
chmod +x /usr/local/keepalived/bin/mysql.sh    # notify_down scripts must be executable
--------------------------------
vi /etc/rc.local:
modprobe ip_vs        # load this module manually if it is not loaded automatically
--------------------------------
Start the keepalived daemon on server1 and server2:
/etc/init.d/keepalived start
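The one-line mysql.sh fails over on the first missed check. A slightly more defensive variant (a sketch, not from the original article) retries a local port check a few times before stopping keepalived, so a brief hiccup does not trigger a failover; the retry count and interval are assumptions:

```shell
#!/bin/sh
# Sketch: /usr/local/keepalived/bin/mysql.sh with retries before failover.
RETRIES=3
PORT=3306

# listening_on reads `netstat -lnt` output on stdin and succeeds
# if something is listening on the given port.
listening_on() {
    grep -q ":$1 "
}

i=0
up=0
while [ "$i" -lt "$RETRIES" ]; do
    if netstat -lnt 2>/dev/null | listening_on "$PORT"; then
        up=1
        break                  # mysqld is back; keep keepalived running
    fi
    i=$((i + 1))
    sleep 1
done
if [ "$up" -eq 0 ]; then
    pkill keepalived || :      # give up the VIP by stopping keepalived
fi
```

Note that a port check only proves mysqld accepts connections; a stricter script could run a real query through the mysql client instead.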

------------------------------------------------------------------------------------------------------------
III. File Server
------------------------------------------------------------------------------------------------------------
Data server: 192.168.0.10
Data client1: 192.168.0.3
Data client2: 192.168.0.4
-----------------------------

Data server:
yum install -y luci

/etc/init.d/luci start
Access https://192.168.0.10:8084/ and log in with a system account to create a cluster, adding client1 and client2 as nodes.
Add a virtual fence device and attach it to client1 and client2.


yum install -y scsi-target-utils
chkconfig tgtd on
/etc/init.d/tgtd start
tgtadm --lld iscsi --op new --mode target --tid 1 -T webdata
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sda
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

tgtadm --lld iscsi --op show --mode target        # verify the target is exported
vi /etc/rc.local: add the three configuration commands above so they run at boot:
tgtadm --lld iscsi --op new --mode target --tid 1 -T webdata
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sda
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
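Instead of replaying the tgtadm commands from rc.local, scsi-target-utils can also read a persistent target definition from /etc/tgt/targets.conf at startup; a sketch, where the IQN-style name is an assumption (the commands above used the short name webdata):

```
# /etc/tgt/targets.conf (sketch)
<target iqn.2012-01.com.example:webdata>
    backing-store /dev/sda    # the exported LUN
</target>
```

With this file in place, restarting tgtd re-creates the target; access is open to all initiators unless an initiator-address line restricts it.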


Data clients:
yum install -y ricci
/etc/init.d/ricci start
lvmconf --enable-cluster

yum install -y iscsi-initiator-utils
iscsiadm -m discovery -t sendtargets -p 192.168.0.10
iscsiadm -m node -T webdata -p 192.168.0.10 -l
The session information is saved automatically under /var/lib/iscsi/nodes/webdata/192.168.0.10,3260,1/default.
Run fdisk -l: an additional disk (/dev/sda here) appears.

pvcreate /dev/sda
vgcreate datavg /dev/sda
lvcreate -L 1020M -n lv1 datavg
cman_tool status | grep Name        # view the cluster name
Cluster Name: web_cluster
Create a GFS2 file system:
mkfs.gfs2 -p lock_dlm -t web_cluster:gfs -j 2 /dev/datavg/lv1
# -j 2 creates two journals, one for each client host

mount -t gfs2 /dev/datavg/lv1 /mnt
vi /etc/fstab
/dev/datavg/lv1 /mnt gfs2 defaults 0 0
/etc/init.d/gfs2 start        # the file system is mounted automatically on /mnt
chkconfig cman on
chkconfig rgmanager on
chkconfig ricci on
chkconfig modclusterd on
chkconfig clvmd on
chkconfig gfs2 on

If lvscan shows the logical volume as inactive when gfs2 starts, activate it with:
lvchange -ay /dev/datavg/lv1
-------------------------------------------------------------------------------------
That's it!
