High-Availability Web Load-Balancing Cluster on Red Hat EL 4.0

Source: Internet
Author: User

1. Director Configuration

1. Set the network interface address
[root@director root]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.0.255
HWADDR=00:0C:29:A2:BD:B5
IPADDR=192.168.0.160
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes

2. Edit the lvs script
[root@director root]# vi /etc/init.d/lvsdr
#!/bin/bash
VIP=192.168.0.222
RIP1=192.168.0.249
RIP2=192.168.0.20
. /etc/rc.d/init.d/functions
case "$1" in
start)
        echo "start LVS of DirectorServer"
        # Set the virtual IP address
        /sbin/ifconfig eth0:0 $VIP broadcast $VIP netmask 255.255.255.255 up
        /sbin/route add -host $VIP dev eth0:0
        # Clear the ipvs table
        /sbin/ipvsadm -C
        # Set up lvs
        /sbin/ipvsadm -A -t $VIP:80 -s rr
        /sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -g
        /sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -g
        # Show the resulting lvs table
        /sbin/ipvsadm
        ;;
stop)
        echo "close LVS DirectorServer"
        /sbin/ipvsadm -C
        /sbin/ifconfig eth0:0 down
        ;;
*)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
esac
# Save and exit, then make the file executable
[root@director root]# chmod 755 /etc/init.d/lvsdr
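The lvsdr script uses direct routing (the -g flag), which also requires setup on each real server: the VIP must be bound on the loopback interface and ARP replies for it suppressed, or the real servers will compete with the director for the VIP. The article does not show that side, so below is a minimal sketch (not from the original) to run as root on RIP1 and RIP2; the arp_ignore/arp_announce sysctls exist in the 2.6.9 kernel used here. With DRYRUN=1 (the default in this sketch) the commands are only printed for review.

```shell
#!/bin/bash
# Sketch of the matching real-server setup for LVS-DR (an assumption,
# not from the original article). VIP matches the director's lvsdr script.
VIP=192.168.0.222

# With DRYRUN=1 the commands are only printed, not executed.
run() { if [ "${DRYRUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

realserver_start() {
    # Bind the VIP on a loopback alias with a /32 mask so this host
    # accepts packets addressed to the VIP without advertising it.
    run /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
    run /sbin/route add -host $VIP dev lo:0
    # Suppress ARP for the VIP so only the director answers ARP queries.
    run /sbin/sysctl -w net.ipv4.conf.lo.arp_ignore=1
    run /sbin/sysctl -w net.ipv4.conf.lo.arp_announce=2
    run /sbin/sysctl -w net.ipv4.conf.all.arp_ignore=1
    run /sbin/sysctl -w net.ipv4.conf.all.arp_announce=2
}

realserver_start
```

Set DRYRUN=0 only after reviewing the printed commands on the real server itself.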

3. Install ipvsadm
[root@director root]# modprobe -l | grep ipvs
/lib/modules/2.6.9-11.EL/kernel/net/ipv4/ipvs/ip_vs.ko
/lib/modules/2.6.9-11.EL/kernel/net/ipv4/ipvs/ip_vs_ftp.ko
/lib/modules/2.6.9-11.EL/kernel/net/ipv4/ipvs/ip_vs_lblc.ko
/lib/modules/2.6.9-11.EL/kernel/net/ipv4/ipvs/ip_vs_wlc.ko
/lib/modules/2.6.9-11.EL/kernel/net/ipv4/ipvs/ip_vs_sed.ko
/lib/modules/2.6.9-11.EL/kernel/net/ipv4/ipvs/ip_vs_rr.ko
/lib/modules/2.6.9-11.EL/kernel/net/ipv4/ipvs/ip_vs_wrr.ko
/lib/modules/2.6.9-11.EL/kernel/net/ipv4/ipvs/ip_vs_nq.ko
/lib/modules/2.6.9-11.EL/kernel/net/ipv4/ipvs/ip_vs_sh.ko
/lib/modules/2.6.9-11.EL/kernel/net/ipv4/ipvs/ip_vs_dh.ko
/lib/modules/2.6.9-11.EL/kernel/net/ipv4/ipvs/ip_vs_lblcr.ko
/lib/modules/2.6.9-11.EL/kernel/net/ipv4/ipvs/ip_vs_lc.ko
[root@director root]# ln -s /usr/src/kernels/2.6.9-11.EL-i686 /usr/src/linux
[root@director root]# tar xzvf ipvsadm-1.24.tar.gz
[root@director root]# cd ipvsadm-1.24
[root@director ipvsadm-1.24]# make
[root@director ipvsadm-1.24]# make install
Because I am building a web load-balancing cluster here and did not select the web server components when installing Red Hat EL 4.0, Apache has to be installed separately. If it is already installed, you can skip this step. (Note: run `rpm -qa | grep httpd` to check whether it is installed.)
[root@director root]# tar xzvf httpd-2.2.4.tar.gz
[root@director root]# cd httpd-2.2.4
[root@director httpd-2.2.4]# ./configure --prefix=/usr/local/apache --enable-so --enable-rewrite
[root@director httpd-2.2.4]# make
[root@director httpd-2.2.4]# make install
[root@director httpd-2.2.4]# echo "/usr/local/apache/bin/apachectl start" >> /etc/rc.local

4. Install heartbeat
Install libnet (http://www.packetfactory.net/libnet/) before installing heartbeat.
[root@director root]# tar xzvf libnet.tar.gz
[root@director root]# cd libnet
[root@director libnet]# ./configure
[root@director libnet]# make
[root@director libnet]# make install
[root@director libnet]# cd
Before installing heartbeat, you must create the heartbeat group and user.
[root@director root]# groupadd -g 694 haclient
[root@director root]# useradd -g 694 -u 694 hacluster
[root@director root]# tar xzvf heartbeat-2.1.2.tar.gz
[root@director root]# cd heartbeat-2.1.2
[root@director heartbeat-2.1.2]# ./ConfigureMe configure
[root@director heartbeat-2.1.2]# make
[root@director heartbeat-2.1.2]# make install
After heartbeat is installed there is an /etc/ha.d directory, which holds the heartbeat configuration files. By default, however, the three most important configuration files (ha.cf, haresources, and authkeys) are not placed there, so we need to copy them manually.
[root@director heartbeat-2.1.2]# cp doc/ha.cf doc/haresources doc/authkeys /etc/ha.d
Also copy the ldirectord configuration file:
[root@director heartbeat-2.1.2]# cp ldirectord/ldirectord.cf /etc/ha.d
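The article copies ldirectord.cf into place but never fills it in. For reference, a hedged sketch of what /etc/ha.d/ldirectord.cf could look like for this cluster (the VIP and real server addresses match the lvsdr script above; the request/receive health-check page and its "Test Page" content are assumptions, and you would have to create that page on each real server):

```
# /etc/ha.d/ldirectord.cf -- sketch only, values to be adapted
checktimeout=10
checkinterval=5
autoreload=yes
quiescent=yes

virtual=192.168.0.222:80
        real=192.168.0.249:80 gate
        real=192.168.0.20:80 gate
        service=http
        request="index.html"
        receive="Test Page"
        scheduler=rr
        protocol=tcp
```

The "gate" keyword selects direct routing, matching the -g flag used in the lvsdr script.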

5. Edit the heartbeat configuration files.
[root@director heartbeat-2.1.2]# vi /etc/ha.d/ha.cf
# Where heartbeat writes its debug output
debugfile /var/log/ha-debug
# Where heartbeat writes its log file
logfile /var/log/ha-log
# Send a heartbeat every 2 seconds
keepalive 2
# Declare a node dead after 60 seconds without a heartbeat
deadtime 60
# How long to wait before logging a "late heartbeat" warning, in seconds
warntime 10
# In some configurations the network needs some time to come up after a
# reboot; this separate "deadtime" handles that case. Its value should be
# at least twice the normal deadtime.
initdead 120
# Use port 694 for bcast/ucast communication. This is the default port
# number officially registered with IANA.
udpport 694
# Broadcast heartbeats on the eth0 interface (replace eth0 with eth1,
# eth2, or whatever interface you use).
bcast eth0 # Linux
# Required. The host names of the machines in the cluster, identical to
# the output of "uname -n".
node director
node bkdirector
# Required. With auto_failback on, all resources are taken back from the
# standby node as soon as the primary node comes back online. With it set
# to off, the primary does not reclaim the resources. This option replaces
# the deprecated nice_failback option.
auto_failback on
# By default heartbeat monitors nothing but itself, not even the network,
# so no failover between the load balancer and the backup happens when the
# network goes down. The ipfail plugin together with "ping nodes" solves
# this; see the heartbeat documentation for details.
ping_group group1 192.168.0.160 192.168.0.225
respawn root /usr/lib/heartbeat/ipfail
apiauth ipfail gid=root uid=root
hopfudge 1
use_logd yes
# Save and exit
Edit the haresources file. This file tells heartbeat which machine owns which resources; the resource names are scripts in the /etc/init.d or /etc/ha.d/resource.d directory. Heartbeat uses haresources to decide what to do when it starts for the first time: the file lists the services the cluster provides and their default owners. Note: the file must be identical on both cluster nodes; otherwise bad things will happen.
[root@director heartbeat-2.1.2]# vi /etc/ha.d/haresources
director lvsdr
# Make director the primary node; the cluster service it provides is lvsdr.
# The primary node's name must match the output of "uname -n".
# Save and exit
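As an alternative worth knowing about (not used in this article), heartbeat ships IPaddr and ldirectord resource scripts, so the haresources line could let heartbeat manage the VIP and the health checks instead of the hand-written lvsdr script. A sketch, assuming the VIP and the ldirectord.cf copied earlier:

```
director IPaddr::192.168.0.222/32/eth0 ldirectord::ldirectord.cf
```

In that layout the ifconfig/ipvsadm work in lvsdr would be redundant, since IPaddr brings up the VIP and ldirectord maintains the ipvs table.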
Edit the authkeys file. The third file to configure, authkeys, determines your authentication method and key. There are three authentication methods: crc, md5, and sha1. You may ask, "which method should I use?" In short:
If heartbeat runs over a secure network, such as the crossover cable in this example, use crc; it is the cheapest in resource terms. If the network is not secure but you still want to keep CPU usage down, use md5. Finally, if you want the strongest authentication regardless of CPU cost, use sha1, the hardest of the three to crack.
[root@director heartbeat-2.1.2]# vi /etc/ha.d/authkeys
auth 1
1 crc
# Save and exit
Make sure the authkeys file is readable only by root.
[root@director heartbeat-2.1.2]# chmod 600 /etc/ha.d/authkeys
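If the heartbeat link is not a dedicated crossover cable, md5 is the safer choice. A sketch (not from the original article) that generates a random md5 key; the file is written to /tmp here so you can review it before moving it to /etc/ha.d/authkeys:

```shell
# Generate a random md5 key and write an authkeys file (sketch).
KEY=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | awk '{print $1}')
printf 'auth 1\n1 md5 %s\n' "$KEY" > /tmp/authkeys
# heartbeat refuses to start if authkeys is group- or world-readable
chmod 600 /tmp/authkeys
```

Remember to copy the same key to both nodes, since the two machines must share it.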
Edit the /etc/hosts file and add entries mapping the two machines' names to their IP addresses.
[root@director heartbeat-2.1.2]# vi /etc/hosts
192.168.0.160 director
192.168.0.225 bkdirector
Note: make the same settings on the backup director.
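Since the node names in ha.cf and haresources must match `uname -n` exactly, a quick check like the following, run on each machine before starting heartbeat, can save debugging time (a sketch; the helper name is made up):

```shell
# Warn if this host's "uname -n" name is missing from /etc/hosts (sketch).
check_node_name() {
    local node
    node=$(uname -n)
    if grep -qw "$node" /etc/hosts; then
        echo "ok: $node found in /etc/hosts"
    else
        echo "missing: add $node to /etc/hosts"
    fi
}
check_node_name
```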
