Front-end httpd + keepalived and back-end heartbeat + nfs + drbd for efficient httpd service delivery and unified resource management


This article uses four machines, all running CentOS 6.0.

lv1 and lv2 act as the httpd front end, with keepalived providing high availability; a virtual IP (VIP) is exposed for clients to access.

node1 and node2 use DRBD to mirror their storage, and a second VIP is virtualized as the NFS server address for the httpd services, which simplifies configuration and unifies data management.

Technologies used: httpd, keepalived, drbd, nfs, heartbeat

[Figure: cluster topology]


lv1: 192.168.182.130
lv2: 192.168.182.129
VIP: 192.168.182.200 (this is the VIP clients access)

node1: 192.168.182.133
node2: 192.168.182.134
VIP: 192.168.182.150 (this VIP acts as the NFS server address for mounting)

SELinux and iptables are disabled for this walkthrough. In a real environment you would keep them on and configure the necessary rules instead.

1. Configure lv1 and lv2 and test that the front end works.

1. Run: yum install -y httpd ipvsadm keepalived

Give lv1 and lv2 different test pages so you can tell which node is serving; see the sketch below.
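A minimal way to do this, assuming the default DocumentRoot of /var/www/html:

echo "lv1" > /var/www/html/index.html    # on lv1
echo "lv2" > /var/www/html/index.html    # on lv2
service httpd start                      # on both nodes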

[Screenshots: the distinct test pages on lv1 and lv2]

2. Configure keepalived.

On lv1:

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    notification_email {
        coffee_lanshan@sina.com
    }
    notification_email_from admin@example.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LV_ha
}

vrrp_instance httpd {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.182.200
    }
}

virtual_server 192.168.182.200 80 {
    delay_loop 2
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.182.130 80 {
        weight 3
        notify_down /var/www/httpd.sh
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
On lv2:

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    notification_email {
        coffee_lanshan@sina.com
    }
    notification_email_from admin@example.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LV_ha
}

vrrp_instance httpd {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.182.200
    }
}

virtual_server 192.168.182.200 80 {
    delay_loop 2
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.182.129 80 {
        weight 3
        notify_down /var/www/httpd.sh
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
Create httpd.sh on both lv1 and lv2:

vim /var/www/httpd.sh

#!/bin/sh
pkill keepalived

chmod +x /var/www/httpd.sh

When the local httpd stops responding, the TCP_CHECK fails and notify_down runs this script, which kills keepalived so that the VIP fails over to the other node.

Now verify that everything above works and that failover behaves correctly.
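A quick way to confirm which node currently holds the client VIP (standard iproute2; just a convenience check):

ip addr show eth0 | grep 192.168.182.200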

[Screenshots: the client VIP being served by lv1]

lv1 is currently providing the service (its VRRP priority of 100 is also visible in the output).

Now stop httpd on lv1:
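On lv1 (the stock CentOS 6 service command):

service httpd stop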

[Screenshots: after stopping httpd on lv1, the VIP fails over to lv2]

lv2 is now providing the service. If you then start httpd and keepalived on lv1 again, the service switches back to lv1 automatically; we will not walk through that here.

2. Configure drbd + heartbeat + nfs on node1 and node2.

1. Configure hosts and install drbd, heartbeat, and nfs

1> On node1 and node2:

vim /etc/hosts

192.168.182.133 node1
192.168.182.134 node2

2> Install drbd

yum -y install gcc kernel-devel kernel-headers flex

wget http://oss.linbit.com/drbd/8.4/drbd-8.4.3.tar.gz
tar zxvf drbd-8.4.3.tar.gz
cd drbd-8.4.3
./configure --prefix=/usr/local/drbd --with-km
make KDIR=/usr/src/kernels/2.6.32-71.el6.i686/
make install
mkdir -p /usr/local/drbd/var/run/drbd
cp /usr/local/drbd/etc/rc.d/init.d/drbd /etc/rc.d/init.d
chkconfig --add drbd
chkconfig drbd on
cd drbd
cp drbd.ko /lib/modules/`uname -r`/kernel/lib/
depmod
modprobe drbd

Verify that the drbd module has been loaded:
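For example (lsmod is standard; grep just filters the output):

lsmod | grep drbd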

[Screenshot: lsmod output showing the drbd module loaded]

drbd is now installed on both machines; next comes the configuration.

First, add an 8 GB disk to both node1 and node2 and create a partition on it with fdisk (this partition is for drbd). Remember not to format it.
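The partitioning itself is interactive; a sketch of the usual key sequence for one primary partition spanning the disk (assuming the new disk shows up as /dev/sdb):

fdisk /dev/sdb    # then: n, p, 1, accept both defaults, w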

[Screenshot: fdisk listing showing the new /dev/sdb1 partition]

On node1:

cd /usr/local/drbd/etc/drbd.d

mv global_common.conf global_common.conf.bak

vim global_common.conf

global {
    usage-count yes;    # whether to participate in DRBD usage statistics; the default is yes
}
common {
    net {
        protocol C;    # drbd's third replication protocol: a write completes only after the remote host confirms it
    }
}

vim r0.res

resource r0 {
    on node1 {    # each host section starts with "on" followed by the hostname
        device /dev/drbd1;    # the name of the drbd device
        disk /dev/sdb1;    # the disk partition backing /dev/drbd1
        address 192.168.182.133:7789;    # the address and port DRBD listens on to talk to the other host
        meta-disk internal;
    }
    on node2 {
        device /dev/drbd1;
        disk /dev/sdb1;
        address 192.168.182.134:7789;
        meta-disk internal;
    }
}

Copy these configuration files into the same directory (/usr/local/drbd/etc/drbd.d) on the other host.
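For example, pushing them from node1 (path per the --prefix used above; assumes SSH access between the nodes):

scp global_common.conf r0.res node2:/usr/local/drbd/etc/drbd.d/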

2. Start DRBD

Before starting DRBD, create the metadata area that DRBD uses for its bookkeeping on the sdb1 partition of both hosts. On each host run:

[root@node1 ~]# drbdadm create-md r0    (or: drbdadm create-md all)
[root@node2 ~]# drbdadm create-md r0

Then start the service on both nodes, ideally at the same time:

[root@node1 ~]# /etc/init.d/drbd start
[root@node2 ~]# /etc/init.d/drbd start

View the node status from either node:
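DRBD reports its status through /proc/drbd:

cat /proc/drbd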

[Screenshot: cat /proc/drbd with both nodes Secondary and ds Inconsistent/Inconsistent]

The fields in the output mean:
ro: the role information; when drbd starts for the first time, both nodes default to the Secondary role.
ds: the disk state; "Inconsistent/Inconsistent" means the two nodes' data is not yet synchronized, while "UpToDate/UpToDate" means both copies are current.
ns: network send statistics (packets sent).
dw: disk write statistics.
dr: disk read statistics.

Set the primary node

Neither node is primary by default, so pick the host that should become primary and run on it:

drbdsetup /dev/drbd1 primary -o

After this first forced promotion, later role changes can use:

drbdadm primary r0    (or: drbdadm primary all)

Once the command has run, the two machines begin synchronizing their disks.

[Screenshot: /proc/drbd during synchronization]

The output shows that the "ro" status has changed to "Primary/Secondary" and the "ds" status to "UpToDate/Inconsistent", i.e. up-to-date/inconsistent: data is being synchronized from the master's disk to the slave's, the progress shown here is 8.4%, and the speed is about 10 MB per second.
Wait a moment and check the status again; the output is as follows:

[Screenshot: /proc/drbd after synchronization completes]

Synchronization is now complete: the "ds" status has become "UpToDate/UpToDate", meaning both copies are current.

Format the disk (this is done once, on the primary):

mkfs.ext4 /dev/drbd1

[Screenshot: mkfs.ext4 output for /dev/drbd1]

Now the device can be mounted:
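For example, on the primary node (using /root/data, the mount point the haresources file below also uses; create it first):

mkdir -p /root/data
mount /dev/drbd1 /root/data
df -h /root/data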

[Screenshot: /dev/drbd1 mounted]

3. Install heartbeat and nfs

yum install -y heartbeat nfs-utils libnet

cp /usr/share/doc/heartbeat-3.0.4/{authkeys,ha.cf,haresources} /etc/ha.d/

1. Configure ha.cf on node1:

logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
ucast eth0 192.168.182.134
auto_failback off
node node1
node node2
ping 192.168.182.2
respawn root /usr/lib/heartbeat/ipfail


node2's ha.cf is identical, except that the ucast line points at the other node: ucast eth0 192.168.182.133

Configure /etc/ha.d/authkeys:

auth 2
#1 crc
2 sha1 heartbeat
#3 md5 Hello!

The same on node2.

Configure /etc/ha.d/haresources:

node1 IPaddr::192.168.182.150/24/eth0 drbddisk::r0 Filesystem::/dev/drbd1::/root/data::ext4 nfs

This one line tells heartbeat to bring up the VIP 192.168.182.150 on eth0, promote the drbd resource r0 (via the drbddisk script), mount /dev/drbd1 on /root/data as ext4, and start the nfs service, in that order. The same file on node2.

chmod 600 /etc/ha.d/authkeys

cp /usr/local/drbd/etc/ha.d/resource.d/drbddisk /etc/ha.d/resource.d/

Do both of these on node2 as well.

Start heartbeat:

/etc/init.d/heartbeat start

At this point the VIP appears on node1's NIC:

[Screenshot: the VIP 192.168.182.150 bound on node1's NIC]

Stop heartbeat on node1, and node2 takes over:

[Screenshot: the resources now active on node2]

The filesystem is mounted automatically, the DRBD roles switch automatically, and the VIP drifts automatically.
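A few quick sanity checks on whichever node is currently active (standard commands, shown as a sketch):

ip addr show eth0    # the VIP 192.168.182.150 should be bound here
cat /proc/drbd       # ro should read Primary/Secondary on the active node
df -h /root/data     # /dev/drbd1 should be mounted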

Configure the nfs shared directory on node1 and node2 (shown here on node2):

[root@node2 ~]# vim /etc/exports

/root/data *(rw)

[root@node2 ~]# exportfs -r
[root@node2 ~]# exportfs
/root/data <world>

4. Mount the share as the httpd document root on the two front-end lv servers.

On lv1 and lv2:

mount -t nfs 192.168.182.150:/root/data /var/www/html

This can also be written into fstab so that it mounts at boot:

192.168.182.150:/root/data /var/www/html nfs defaults 0 0
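To apply the fstab entry immediately and confirm the mount (assumes the NFS VIP 192.168.182.150 is up):

mount -a
df -h | grep 192.168.182.150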

5. Finally, test the whole stack:

1> Create index.html in /root/data/ on node1, with the content "node+heartbeat+test".

[Screenshot: the test page served through the client VIP]

2> Taking lv1 down does not affect access through the client VIP.

3> Now take node1 down; its services switch over to node2. Then modify index.html so the content is distinguishable: the identifier "two" is appended.

On node1: /etc/init.d/heartbeat stop

On node2:

[root@node2 data]# vim index.html

node+heartbeat+test two

Now access the VIP:
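For example, from any client (curl is just one way to check; the expected body is what was written in the previous step):

curl http://192.168.182.200/index.html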

[Screenshot: the updated page "node+heartbeat+test two" served through the VIP]

Everything works as expected.

This setup gives unified management of resources and improves reliability.

This article is from the Coffee_Blue Mountains blog; please keep this source when reposting: http://lansgg.blog.51cto.com/5675165/1208485
