Linux high-availability clusters: achieving HTTP high availability with heartbeat

There are many high-availability cluster stacks on Linux, such as the common heartbeat, corosync, RHCS, and keepalived. These cluster packages provide high-availability guarantees for business production environments. This article briefly introduces how to use heartbeat v2 to build a simple HTTP high-availability cluster.

Before implementing the HTTP high-availability cluster, at least two hosts are needed, and three basic preparations must be made:
1. Set the node name on every node so that each node can resolve all other hosts in the cluster by name. Use /etc/hosts for this name resolution, and make sure the value returned by uname -n matches the hostname on each node.
2. Set up SSH key-based mutual trust between the two servers.
3. Synchronize the time on both servers.

Complete the basic work above before installing the heartbeat software. Here we use two hosts (192.168.1.201 and 192.168.1.202) for the high-availability cluster service.
1. Log on to 192.168.1.201 and set the host name to test1.qiguo.com; in /etc/sysconfig/network, set HOSTNAME=test1.qiguo.com so that the host name survives the next reboot. Add 192.168.1.201 test1.qiguo.com test1 and 192.168.1.202 test2.qiguo.com test2 to /etc/hosts, then perform the same operations on the 192.168.1.202 host.
2. On 192.168.1.201, run ssh-keygen -t rsa and then ssh-copy-id -i ~/.ssh/id_rsa.pub root@test2 to establish SSH mutual trust. This step must be performed on both machines (on test2, copy the key to test1).
3. Synchronize the clocks on both servers with ntpdate 133.100.11.8 (or any reachable NTP server IP address). A command summary of these three steps follows this list.
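As a quick reference, the commands for these three steps on test1 might look like the following sketch (the mirror-image commands run on test2; editing /etc/sysconfig/network with sed is just one way to make the host name persistent):

# on 192.168.1.201 (test1)
hostname test1.qiguo.com                                                   # takes effect immediately
sed -i 's/^HOSTNAME=.*/HOSTNAME=test1.qiguo.com/' /etc/sysconfig/network   # persists across reboots
cat >> /etc/hosts <<EOF
192.168.1.201 test1.qiguo.com test1
192.168.1.202 test2.qiguo.com test2
EOF
ssh-keygen -t rsa                                # accept the defaults
ssh-copy-id -i ~/.ssh/id_rsa.pub root@test2      # on test2, copy the key to test1 instead
ntpdate 133.100.11.8                             # run on both servers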

After completing the preceding three steps, you can start to install heartbeat. The heartbeat packages can be downloaded from EPEL. By default you need the following four packages: heartbeat-2.1.4-11.el5.i386, heartbeat-gui-2.1.4-11.el5.i386, heartbeat-pils-2.1.4-11.el5.i386, and heartbeat-stonith-2.1.4-11.el5.i386. These four packages in turn depend on two additional packages, among them perl-MailTools-1.77-1.el5.noarch, so those have to be installed first. Installing perl-MailTools-1.77-1.el5.noarch with rpm -ivh reports dependency errors, so install it with yum --nogpgcheck localinstall perl-MailTools-1.77-1.el5.noarch.rpm instead, and install the remaining packages in the same way. Note: all of these packages must be installed on both servers.
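Assuming the downloaded rpm files are in the current directory, the installation on each node might look like this (the package file names are the ones listed above; yum pulls any remaining dependencies from the configured repositories):

# install the dependency first, then the heartbeat packages; --nogpgcheck skips GPG signature checking
yum -y --nogpgcheck localinstall perl-MailTools-1.77-1.el5.noarch.rpm
yum -y --nogpgcheck localinstall heartbeat-pils-2.1.4-11.el5.i386.rpm \
    heartbeat-stonith-2.1.4-11.el5.i386.rpm \
    heartbeat-2.1.4-11.el5.i386.rpm \
    heartbeat-gui-2.1.4-11.el5.i386.rpm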

When heartbeat is installed, its configuration directory is /etc/ha.d. Inside /etc/ha.d, rc.d holds the scripts related to resource management, resource.d holds the resource agent (resource proxy) scripts, and the service script is /etc/ha.d/heartbeat. A default installation ships no configuration files, but the sample ha.cf, authkeys, and haresources files can be copied from /usr/share/doc/heartbeat-2.1.4 into /etc/ha.d. The roles of these three configuration files are as follows:
authkeys: key file. Its permissions must be 600, otherwise the heartbeat service will not start.
ha.cf: main heartbeat service configuration file.
haresources: resource (resource proxy) configuration file.
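Copying the sample files into place takes two commands on each node (a sketch, assuming the doc directory name matches the installed version):

cd /usr/share/doc/heartbeat-2.1.4
cp ha.cf authkeys haresources /etc/ha.d/
chmod 600 /etc/ha.d/authkeys    # heartbeat refuses to start otherwise
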
Below, we only need to configure these three files to implement our http high availability cluster. First, let's look at the authkeys file:
#auth 1        # which of the numbered keys below to use for authentication
#1 crc         # CRC (cyclic redundancy check)
#2 sha1 HI!    # SHA1 authentication with the key "HI!"
#3 md5 Hello!  # MD5 authentication with the key "Hello!"
It is best to use sha1 or md5 authentication; crc is the simplest but offers the least protection. A configuration using md5 authentication looks like this:
auth 1    # use the key on the line below that starts with "1"
1 md5 9adc3f50d9bb9e9c795fce0a839aa766
To generate the MD5 string, simply run echo "qiguo" | md5sum at the shell command line.
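Putting it together, the authkeys file could be generated like this (a sketch; "qiguo" is just the example passphrase used above):

key=$(echo "qiguo" | md5sum | awk '{print $1}')   # keep only the hash, drop the trailing "-"
cat > /etc/ha.d/authkeys <<EOF
auth 1
1 md5 $key
EOF
chmod 600 /etc/ha.d/authkeys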

The second configuration file, ha.cf, contains a lot of content; its main options are briefly described below:

#debugfile /var/log/ha-debug    # whether to write a debug log
logfile /var/log/ha-log         # location of the log file
#logfacility local0             # syslog facility; if logfile is enabled, do not enable this option
keepalive 2                     # interval between heartbeat packets, in seconds
#deadtime 30                    # how long without a heartbeat before a node is declared dead
#warntime 10                    # how long before a "late heartbeat" warning is issued
#initdead 120                   # grace period at startup before a peer that has not yet come up is declared dead
#udpport 694                    # UDP port to listen on
#baud 19200                     # serial line speed
bcast eth0                      # send heartbeats by broadcast (used here; with many hosts on the LAN this wastes resources)
#mcast eth0 225.0.0.1 694 1 0   # send heartbeats by multicast
#ucast eth0 192.168.1.2         # send heartbeats by unicast
auto_failback on                # whether resources fail back to the master node after it recovers; "on" means they do
#stonith baytech /etc/ha.d/conf/stonith.baytech   # stonith configuration: how to fence an offline node
#node ken3                      # one "node" line per cluster member; the value must match the output of uname -n
node test1.qiguo.com
node test2.qiguo.com
#ping 10.10.10.254              # address of a ping node
ping 192.168.1.1                # here the network management address is used as the ping node

The third configuration file, haresources, is the cluster resource configuration file. The sample file contains many example entries; one of them illustrates the format:
#node1 10.0.0.170 Filesystem::/dev/sda1::/data1::ext2
Here node1 is the name of the master node, 10.0.0.170 is the VIP, and Filesystem is the resource agent (resource agents are looked up in /etc/ha.d/resource.d and /etc/init.d, and "::" separates the resource agent from its parameters). Since we want to make httpd highly available, the configuration is:
test1.qiguo.com IPaddr::192.168.1.210/24/eth0 httpd
Copy the three configuration files above to the 192.168.1.202 host. After the copy is complete, install the httpd service on both hosts, and make sure httpd is not set to start automatically at boot (heartbeat must be the one that starts it). Once everything is configured, stop the httpd service and start the heartbeat service, then watch /var/log/ha-log on test1:
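A sketch of these steps, run from test1 (scp relies on the SSH trust set up earlier; chkconfig and service are the usual RHEL/CentOS 5 tools, and the yum install assumes a reachable repository):

scp /etc/ha.d/ha.cf /etc/ha.d/authkeys /etc/ha.d/haresources test2:/etc/ha.d/
yum -y install httpd                  # run on both nodes
chkconfig httpd off                   # heartbeat, not init, must control httpd
service httpd stop
service heartbeat start               # start heartbeat on both nodes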

heartbeat[4825]: 2014/05/11_23:54:35 info: Version 2 support: false
heartbeat[4825]: 2014/05/11_23:54:35 WARN: Logging daemon is disabled --enabling logging daemon is recommended
heartbeat[4825]: 2014/05/11_23:54:35 info: **************************
heartbeat[4825]: 2014/05/11_23:54:35 info: Configuration validated. Starting heartbeat 2.1.4
heartbeat[4826]: 2014/05/11_23:54:35 info: heartbeat: version 2.1.4
heartbeat[4826]: 2014/05/11_23:54:35 info: Heartbeat generation: 1399811242
heartbeat[4826]: 2014/05/11_23:54:35 info: glib: UDP Broadcast heartbeat started on port 694 (694) interface eth0
heartbeat[4826]: 2014/05/11_23:54:35 info: glib: UDP Broadcast heartbeat closed on port 694 interface eth0 - Status: 1
heartbeat[4826]: 2014/05/11_23:54:35 info: glib: ping heartbeat started.
heartbeat[4826]: 2014/05/11_23:54:35 info: G_main_add_TriggerHandler: Added signal manual handler
heartbeat[4826]: 2014/05/11_23:54:35 info: G_main_add_TriggerHandler: Added signal manual handler
heartbeat[4826]: 2014/05/11_23:54:35 info: G_main_add_SignalHandler: Added signal handler for signal 17
heartbeat[4826]: 2014/05/11_23:54:35 info: Local status now set to: 'up'
heartbeat[4826]: 2014/05/11_23:54:36 info: Link test1.qiguo.com:eth0 up.
heartbeat[4826]: 2014/05/11_23:54:36 info: Link 192.168.1.1:192.168.1.1 up.
heartbeat[4826]: 2014/05/11_23:54:36 info: Status update for node 192.168.1.1: status ping
heartbeat[4826]: 2014/05/11_23:54:41 info: Link test2.qiguo.com:eth0 up.
heartbeat[4826]: 2014/05/11_23:54:41 info: Status update for node test2.qiguo.com: status up
harc[4835]:     2014/05/11_23:54:41 info: Running /etc/ha.d/rc.d/status status
heartbeat[4826]: 2014/05/11_23:54:42 info: Comm_now_up(): updating status to active
heartbeat[4826]: 2014/05/11_23:54:42 info: Local status now set to: 'active'
heartbeat[4826]: 2014/05/11_23:54:42 info: Status update for node test2.qiguo.com: status active
harc[4853]:     2014/05/11_23:54:42 info: Running /etc/ha.d/rc.d/status status
heartbeat[4826]: 2014/05/11_23:54:53 info: remote resource transition completed.
heartbeat[4826]: 2014/05/11_23:54:53 info: remote resource transition completed.
heartbeat[4826]: 2014/05/11_23:54:53 info: Initial resource acquisition complete (T_RESOURCES(us))
IPaddr[4907]:   2014/05/11_23:54:53 INFO:  Resource is stopped
heartbeat[4871]: 2014/05/11_23:54:53 info: Local Resource acquisition completed.
harc[4957]:     2014/05/11_23:54:53 info: Running /etc/ha.d/rc.d/ip-request-resp ip-request-resp
ip-request-resp[4957]:  2014/05/11_23:54:53 received ip-request-resp IPaddr::192.168.1.210/24/eth0 OK yes
ResourceManager[4976]:  2014/05/11_23:54:53 info: Acquiring resource group: test1.qiguo.com IPaddr::192.168.1.210/24/eth0 httpd
IPaddr[5002]:   2014/05/11_23:54:53 INFO:  Resource is stopped
ResourceManager[4976]:  2014/05/11_23:54:53 info: Running /etc/ha.d/resource.d/IPaddr 192.168.1.210/24/eth0 start
IPaddr[5097]:   2014/05/11_23:54:53 INFO: Using calculated netmask for 192.168.1.210: 255.255.255.0
IPaddr[5097]:   2014/05/11_23:54:53 INFO: eval ifconfig eth0:0 192.168.1.210 netmask 255.255.255.0 broadcast 192.168.1.255
IPaddr[5068]:   2014/05/11_23:54:53 INFO:  Success
ResourceManager[4976]:  2014/05/11_23:54:53 info: Running /etc/init.d/httpd  start
Observing the log, you can see that the highly available HTTP cluster has started. Now, manually execute shutdown -h now on the test1 server and watch the log change on test2. (You can also use the hb_standby script that ships with heartbeat to switch over; by default it is in the /usr/lib/heartbeat directory.)
heartbeat[11796]: 2014/05/11_20:56:46 info: Received shutdown notice from 'test1.qiguo.com'.
heartbeat[11796]: 2014/05/11_20:56:46 info: Resources being acquired from test1.qiguo.com.
heartbeat[11862]: 2014/05/11_20:56:46 info: acquire local HA resources (standby).
heartbeat[11863]: 2014/05/11_20:56:46 info: No local resources [/usr/share/heartbeat/ResourceManager listkeys test2.qiguo.com] to acquire.
heartbeat[11862]: 2014/05/11_20:56:46 info: local HA resource acquisition completed (standby).
heartbeat[11796]: 2014/05/11_20:56:46 info: Standby resource acquisition done [all].
harc[11888]:    2014/05/11_20:56:46 info: Running /etc/ha.d/rc.d/status status
mach_down[11903]:       2014/05/11_20:56:46 info: Taking over resource group IPaddr::192.168.1.210/24/eth0
ResourceManager[11928]: 2014/05/11_20:56:46 info: Acquiring resource group: test1.qiguo.com IPaddr::192.168.1.210/24/eth0 httpd
IPaddr[11954]:  2014/05/11_20:56:46 INFO:  Resource is stopped
ResourceManager[11928]: 2014/05/11_20:56:46 info: Running /etc/ha.d/resource.d/IPaddr 192.168.1.210/24/eth0 start
IPaddr[12049]:  2014/05/11_20:56:46 INFO: Using calculated netmask for 192.168.1.210: 255.255.255.0
IPaddr[12049]:  2014/05/11_20:56:46 INFO: eval ifconfig eth0:0 192.168.1.210 netmask 255.255.255.0 broadcast 192.168.1.255
IPaddr[12020]:  2014/05/11_20:56:46 INFO:  Success
ResourceManager[11928]: 2014/05/11_20:56:46 info: Running /etc/init.d/httpd  start
mach_down[11903]:       2014/05/11_20:56:46 info: /usr/share/heartbeat/mach_down: nice_failback: foreign resources acquired
mach_down[11903]:       2014/05/11_20:56:46 info: mach_down takeover complete for node test1.qiguo.com.
heartbeat[11796]: 2014/05/11_20:56:46 info: mach_down takeover complete.

Looking at the log on the slave server, you can see that all of the resources have been taken over by test2. If you now access 192.168.1.210, you see the content served by test2. Once test1 comes back online, because auto_failback is set to on, the resources are taken back by test1 (that log is not shown here). At this point, a simple highly available httpd service has been established.
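The takeover can be verified quickly from any client, and on test2 itself (a sketch; the curl check simply fetches whatever page httpd serves on each node):

curl http://192.168.1.210/        # should now return the page served by test2
# on test2
ifconfig eth0:0                   # the VIP 192.168.1.210 should be configured here
service httpd status              # httpd should have been started by heartbeat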

In many cases the highly available httpd service also relies on shared files, so sometimes a file system needs to be shared as well. To do this, simply add a Filesystem resource to the same line in haresources:
test1.qiguo.com IPaddr::192.168.1.210/24/eth0 Filesystem::192.168.1.230:/html::/var/www/html::nfs httpd
Here the NFS export 192.168.1.230:/html is mounted on /var/www/html with the nfs file-system type.
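For this to work, the NFS server 192.168.1.230 must export /html. A minimal sketch of the server side, assuming the export is limited to the 192.168.1.0/24 network (the export options are an assumption, not part of the original setup):

# on 192.168.1.230, in /etc/exports
/html   192.168.1.0/24(rw,sync)

# reload the export table and make sure NFS is running
exportfs -r
service nfs start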
