Building a High-Availability NFS Service with Heartbeat


Basic environment preparation


[Figure: 1.png]

[root@node1 ~]# vim /etc/sysconfig/network (modify the host name; do this on each host)

NETWORKING=yes

HOSTNAME=node1.dragon.com (each host sets its own name: node1, node2, or node3)

[root@node1 ~]# vim /etc/hosts (edit the hosts file; add entries for node1, node2 and node3)

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

127.0.0.1   node1.dragon.com node1 (each host lists its own name here: node1, node2, or node3)

172.16.18.10   node1.dragon.com node1

172.16.18.20   node2.dragon.com node2

172.16.18.30   node3.dragon.com node3

[root@node1 ~]# scp /etc/hosts node2:/etc/ (copy the files to node2 and node3)

[root@node1 ~]# scp /etc/hosts node3:/etc/

[root@node1 ~]# scp /etc/sysconfig/network node2:/etc/sysconfig/

[root@node1 ~]# scp /etc/sysconfig/network node3:/etc/sysconfig/

[root@localhost ~]# init 6 (after changing the host names, remember to reboot all three servers)
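After the reboot it is worth confirming that the new names and hosts entries took effect; a quick check on node1 (not part of the original steps) might look like this:

[root@node1 ~]# uname -n (should print node1.dragon.com)

[root@node1 ~]# ping -c 1 node2; ping -c 1 node3 (confirms /etc/hosts name resolution)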

Generate and distribute SSH keys

This makes it easy to transfer files between the servers without passwords. Here only node1 copies its key to node2 and node3; if node2 also needs to push files to node1, simply repeat the same configuration in the other direction (see the sketch after these commands).

[root@node1 ~]# ssh-keygen -t rsa (generate the key pair; leave the passphrase empty)

[root@node1 ~]# ssh-copy-id node2 (copy the public key to node2)

[root@node1 ~]# ssh-copy-id node3 (copy the public key to node3)
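For example, the reverse direction mentioned above (node2 pushing files to node1 without a password) is just the same two steps run on node2; a minimal sketch:

[root@node2 ~]# ssh-keygen -t rsa

[root@node2 ~]# ssh-copy-id node1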

Install related packages

(Performed on node1 and node2.)

[root@node1 yum.repos.d]# yum -y groupinstall "Development tools" "Platform Development" (install the development package groups; do this on all three servers)

[root@node1 ~]# yum install net-snmp-libs libnet PyXML perl-TimeDate (install the Heartbeat dependencies on node1 and node2)

[root@node1 ~]# cd heartbeat2/

[root@node1 heartbeat2]# ll (the packages can be downloaded from the official site)

-rw-r--r-- 1 root root 1420924 Sep heartbeat-2.1.4-12.el6.x86_64.rpm

-rw-r--r-- 1 root root 3589552 Sep 2013 heartbeat-debuginfo-2.1.4-12.el6.x86_64.rpm

-rw-r--r-- 1 root root  282836 Sep heartbeat-devel-2.1.4-12.el6.x86_64.rpm

-rw-r--r-- 1 root root  168052 Sep heartbeat-gui-2.1.4-12.el6.x86_64.rpm

-rw-r--r-- 1 root root  108932 Sep heartbeat-ldirectord-2.1.4-12.el6.x86_64.rpm

-rw-r--r-- 1 root root   92388 Sep heartbeat-pils-2.1.4-12.el6.x86_64.rpm

-rw-r--r-- 1 root root  166580 Sep heartbeat-stonith-2.1.4-12.el6.x86_64.rpm

[root@node1 heartbeat2]# rpm -ivh heartbeat-pils-2.1.4-12.el6.x86_64.rpm heartbeat-stonith-2.1.4-12.el6.x86_64.rpm heartbeat-2.1.4-12.el6.x86_64.rpm

Set up time synchronization

[root@node1 yum.repos.d]# date; ssh node2 'date' (compare the time on the two nodes)

[root@node1 yum.repos.d]# ntpdate 172.16.0.1; ssh node2 'ntpdate 172.16.0.1' (my time server is 172.16.0.1; any reachable NTP server will do)

[root@node1 heartbeat2]# which ntpdate (find the full path of the command)

/usr/sbin/ntpdate

[root@node1 yum.repos.d]# crontab -e (on both nodes, add a cron job that synchronizes the clock every three minutes)

*/3 * * * * /usr/sbin/ntpdate 172.16.0.1 &> /dev/null
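A quick way to confirm the offset without stepping the clock (an optional check, using the same 172.16.0.1 server) is the query-only mode of ntpdate:

[root@node1 ~]# ntpdate -q 172.16.0.1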

Configure Heartbeat

[root@node1 yum.repos.d]# cd /usr/share/doc/heartbeat-2.1.4/

[root@node1 heartbeat-2.1.4]# cp -p authkeys haresources ha.cf /etc/ha.d/ (copy the sample files, keeping their original attributes)

ha.cf: the main Heartbeat configuration file;

authkeys: the cluster authentication method and key;

haresources: the resource (CRM) configuration interface of Heartbeat v1;

[root@node1 heartbeat-2.1.4]# cd /etc/ha.d/

[root@node1 ha.d]# chmod 600 authkeys (the authkeys file must be readable by root only)

[root@node1 ha.d]# vim ha.cf (edit the main configuration file)

# File to write other messages to

logfile /var/log/ha-log

# Facility to use for syslog ()/logger

#logfacility local0

.................

mcast eth0 225.0.0.14 694 1 0 (heartbeat over multicast: network card, multicast group address (pick any free address in the range 224.0.2.0~238.255.255.255), UDP port 694, TTL 1 (packets only reach directly attached hosts), and 0 = refuse to receive our own loopback copies)

# Set up a unicast/udp heartbeat medium
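If multicast is blocked on your network, the same heartbeat link can be configured as unicast instead; a minimal sketch (assuming eth0 and that node2's heartbeat address is 172.16.18.20, so this is the line node1 would use):

ucast eth0 172.16.18.20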

..........

# (Note: auto_failback can be any boolean or "legacy")

auto_failback on (default option; when the original master comes back online, resources automatically fail back to it)

..........

# Tell what machines are in the cluster

# node nodename ... -- must match uname -n

#node ken3

#node kathy

node node1.dragon.com (define the cluster nodes; each name must match the output of uname -n on that host)

node node2.dragon.com

..........

# Note: don't use a cluster node as ping node

#ping 10.10.10.254

ping 172.16.18.30 (define the ping/quorum node; as long as a cluster node can still reach it, that node is considered to have working network connectivity)

..........

# Library in the system.

compression bz2 (compression method for cluster messages)

# Configure compression threshold

compression_threshold 2 (messages smaller than 2 KB are not compressed)

[root@node1 ~]# openssl rand -hex 6 (generate a random string to use as the key)

Febe06c057d0

[root@node1 ha.d]# vim authkeys (set the cluster authentication method and key)

#auth 1

#1 crc

#2 sha1 HI!

#3 md5 Hello!

auth 1

1 sha1 febe06c057d0

[root@node1 ha.d]# vim haresources (define the resources and the preferred master node)

# They must match the names of the nodes listed in ha.cf, which in turn

# must match the `uname -n` of some node in the cluster. So they aren't

# virtual in any sense of the word.

node1.dragon.com 172.16.18.51/16/eth0/172.16.255.255 httpd (preferred master node, the VIP (virtual IP) with netmask/interface/broadcast, and the service to start)
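Each entry on this line maps to a resource agent script that Heartbeat invokes, normally found under /etc/ha.d/resource.d/ (or an init script under /etc/init.d/). If you want to see what Heartbeat will do with the VIP, a sketch of exercising the IPaddr agent by hand (not part of the original walkthrough) would be:

[root@node1 ha.d]# /etc/ha.d/resource.d/IPaddr 172.16.18.51/16/eth0/172.16.255.255 start

[root@node1 ha.d]# /etc/ha.d/resource.d/IPaddr 172.16.18.51/16/eth0/172.16.255.255 stop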

[root@node1 ha.d]# scp -p authkeys haresources ha.cf node2:/etc/ha.d/ (keep the node1 and node2 configuration in sync)

[root@node1 ha.d]# service heartbeat start; ssh node2 'service heartbeat start' (start Heartbeat on both nodes; the httpd service is brought up automatically)
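To confirm which node currently holds the VIP (an optional check), look for the address on the active node:

[root@node1 ha.d]# ip addr | grep 172.16.18.51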

[root@node1 ha.d]# vim /var/www/html/index.html (create a test page on node1)

[root@node2 ~]# vim /var/www/html/index.html (create a different test page on node2 so the two nodes can be told apart)

Access the virtual IP; the VIP is currently held by node1:

[Screenshot: 2.png]


[root@node1 ha.d]# service heartbeat stop (access 172.16.18.51 again; the request is now served by node2)

[Screenshot: 3.png]

[root@node1 ha.d]# service heartbeat start

[root@node1 ha.d]# cd /usr/lib64/heartbeat/

[root@node1 heartbeat]# ./hb_standby (you can also run this command to turn the current node into the standby)

[root@node1 heartbeat]# ./hb_takeover (or this one, to make the current node take over as master)

Add the NFS server

[root@node3 yum.repos.d]# mkdir /web/htdocs -pv

[root@node3 yum.repos.d]# vim /web/htdocs/index.html

[root@node3 ~]# vim /etc/exports (share /web/htdocs)

/web/htdocs 172.16.0.0/16(rw,no_root_squash)

[root@node3 yum.repos.d]# service nfs start (start the NFS service)

[root@node3 yum.repos.d]# chkconfig nfs on (enable the service at boot)

[root@node3 yum.repos.d]# chkconfig --list nfs (verify the boot setting)
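Before wiring this into Heartbeat, it can be worth confirming that the export is actually visible (an optional check, not in the original steps; showmount requires nfs-utils on the client):

[root@node3 ~]# exportfs -v (list what the server is exporting)

[root@node1 ~]# showmount -e 172.16.18.30 (query the export list from a client)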

[root@node1 heartbeat]# cd /etc/ha.d/

[root@node1 ha.d]# service heartbeat stop; ssh node2 'service heartbeat stop' (stop the service on both nodes first)

[root@node1 ha.d]# mount -t nfs 172.16.18.30:/web/htdocs /mnt (do a test mount, preferably on both nodes)

[root@node1 ha.d]# mount (check whether the mount succeeded)

[root@node1 ha.d]# umount /mnt/ (unmount once the test succeeds)

[root@node1 ha.d]# vim haresources

#node1.dragon.com 172.16.18.51/16/eth0/172.16.255.255 httpd (comment out the previous configuration)

node1.dragon.com 172.16.18.51/16/eth0/172.16.255.255 Filesystem::172.16.18.30:/web/htdocs::/var/www/html::nfs httpd

That line reads: node1.dragon.com (preferred master node), 172.16.18.51/16/eth0/172.16.255.255 (the VIP), Filesystem (the filesystem resource) :: 172.16.18.30:/web/htdocs (what to mount; here it is the NFS export, but it could just as well be a local device, in which case you write the local path) :: /var/www/html (the mount point) :: nfs (the filesystem type), and finally httpd (the service).
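As with the VIP, the Filesystem entry corresponds to a resource agent script that Heartbeat calls with exactly these arguments; a sketch of testing that piece in isolation (assuming the standard /etc/ha.d/resource.d/ location) would be:

[root@node1 ha.d]# /etc/ha.d/resource.d/Filesystem 172.16.18.30:/web/htdocs /var/www/html nfs start

[root@node1 ha.d]# /etc/ha.d/resource.d/Filesystem 172.16.18.30:/web/htdocs /var/www/html nfs stop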

[root@node1 ha.d]# scp haresources node2:/etc/ha.d/ (copy the updated file to node2)

[root@node1 ha.d]# service heartbeat restart; ssh node2 'service heartbeat restart' (restart the service on both nodes)

[root@node1 ha.d]# mount (after a few seconds the export has been mounted automatically)

172.16.18.30:/web/htdocs on /var/www/html type nfs (rw,vers=4,addr=172.16.18.30,clientaddr=172.16.18.10)

Visit 172.16.18.51 (the virtual IP) again; the page served from NFS is displayed.


[Screenshot: 4.png]

[root@node1 ha.d]# service heartbeat stop (stop the service on the primary node)

[root@node2 ha.d]# mount (the export is now mounted on the standby node)

172.16.18.30:/web/htdocs on /var/www/html type nfs (rw,vers=4,addr=172.16.18.30,clientaddr=172.16.18.20)

The page is still accessible!
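If you want to watch the takeover as it happens, the log file configured earlier in ha.cf is the place to look (an optional check):

[root@node2 ~]# tail -f /var/log/ha-log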

[Screenshot: 5.png]
