Building TFS NameServer HA on CentOS

Source: Internet
Author: User
Tags: install, perl, nameserver

Background

TFS must be compiled with gcc 4.1.2. There are two options: (1) use CentOS 5, which ships gcc 4.1.2, or (2) downgrade gcc on CentOS 6 to 4.1.2. I used the first option and got TFS running successfully. The trouble starts with HA: TFS officially requires heartbeat 3.x, while CentOS 5 ships heartbeat 2.x. Compiling heartbeat 3 on CentOS 5 succeeds, but you then also need to compile pacemaker, and that build runs into version mismatches and missing dependencies; even if it eventually compiles, it is not worth the time. CentOS 5 is simply too old.

So I decided instead to copy the TFS binaries compiled on CentOS 5 onto CentOS 6, where they run fine, and to set up HA on CentOS 6.

The drawback of this approach is that some of the TFS operations commands do not work on CentOS 6 because their dependent libraries cannot be loaded; the workaround is to run those commands on CentOS 5. (In the long run the right fix is to downgrade gcc on CentOS 6 to 4.1.2; that is a topic for another time.)

Let's get down to business.

System Environment

nameserver1 (master): 192.168.6.129, eth0

nameserver2 (slave): 192.168.6.128, eth0

VIP: 192.168.6.100 (bound to eth0:0)

(You do not need to bring the VIP up manually with ifconfig eth0:0 192.168.6.100 netmask 255.255.255.0 up; heartbeat manages it.)

The OS is CentOS 6.6, installed in VMware.

 

First, install the packages

# yum install heartbeat

You also need pacemaker, but do not install it with yum: the version yum installs is 1.1.12, and pacemaker split the crm shell out after version 1.1.8, so download pacemaker 1.1.7 and build it from source (http://down1.chinaunix.net/distfiles/pacemaker-1.1.7.tar.gz).

Install the dependencies before compiling:

# yum install perl-TimeDate OpenIPMI-libs unzip libxslt librdmacm pkgconfig libtool intltool gettext-devel glib2-devel python-devel libxml2-devel pam-devel ncurses-devel pygtk2 libtool-ltdl libtool-ltdl-devel clusterlib swig gnutls-devel resource-agents cluster-glue-libs-devel heartbeat-devel

(These packages come from the EPEL repository.)


# tar -vxf pacemaker-1.1.7.tar.gz

# cd ClusterLabs-pacemaker-b5b0a7b

# ./autogen.sh

# ./configure

# make && make install

Do this build and installation on both hosts.



Configure nameserver

Edit ns.conf of the nameserver on both hosts:

[public]

# VIP

ip_addr = 192.168.6.100

# listening port

port = 8108

[nameserver]

ip_addr_list = 192.168.6.129|192.168.6.128

The items above must be identical on the two hosts; the other items stay unchanged. These are the only ones you need to pay attention to.
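The shared ns.conf items above can be written with a heredoc; a minimal sketch (the file path here is local for illustration; on the real hosts edit the ns.conf under your TFS install):

```shell
# Generate the shared ns.conf items; adjust the output path to your TFS install.
cat > ns.conf <<'EOF'
[public]
# VIP that clients connect to
ip_addr = 192.168.6.100
# listening port
port = 8108

[nameserver]
ip_addr_list = 192.168.6.129|192.168.6.128
EOF
```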

 

Configure heartbeat

Assume that the two host names are

192.168.6.129 test1

192.168.6.128 test2

(Replace test1 and test2 with your own host names; if test1 can ping test2 by name, the mapping is correct. For changing the host name, see http://blog.csdn.net/l241002209/article/details/42269435.)
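The name/address mapping goes into /etc/hosts on both machines; a sketch that writes to a local file for illustration (set HOSTS_FILE to /etc/hosts on the real hosts):

```shell
# Append the host-name mapping (use HOSTS_FILE=/etc/hosts on the real machines).
HOSTS_FILE=./hosts.local
cat >> "$HOSTS_FILE" <<'EOF'
192.168.6.129 test1
192.168.6.128 test2
EOF
```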

 

Then, on test1, execute

# cd $TBLIB_ROOT/scripts/ha/

# vi ha.cf

debugfile /var/log/ha-debug

debug 1

keepalive 2

warntime 5

deadtime 10

initdead 30

auto_failback off

autojoin none

ucast eth0 192.168.6.128 <-- note that this is the peer's address

udpport 694

node test1

node test2

compression bz2

logfile /var/log/ha-log

logfacility local0

crm respawn

(Save and exit with :wq.)

# ./deploy

# ./nsdep

 

Then, on test2, execute the same steps with the peer address swapped

# cd $TBLIB_ROOT/scripts/ha/

# vi ha.cf

debugfile /var/log/ha-debug

debug 1

keepalive 2

warntime 5

deadtime 10

initdead 30

auto_failback off

autojoin none

ucast eth0 192.168.6.129 <-- note that this is the peer's address

udpport 694

node test1

node test2

compression bz2

logfile /var/log/ha-log

logfacility local0

crm respawn

(Save and exit with :wq.)

# ./deploy

# ./nsdep
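The two ha.cf files differ only in the ucast peer address, so generating them can be scripted; a hedged sketch (output file names here are local for illustration; deploy to $TBLIB_ROOT/scripts/ha/ha.cf):

```shell
# gen_hacf PEER_IP OUTFILE -- write a ha.cf whose ucast line points at the peer.
gen_hacf() {
  peer="$1"; out="$2"
  cat > "$out" <<EOF
debugfile /var/log/ha-debug
debug 1
keepalive 2
warntime 5
deadtime 10
initdead 30
auto_failback off
autojoin none
ucast eth0 $peer
udpport 694
node test1
node test2
compression bz2
logfile /var/log/ha-log
logfacility local0
crm respawn
EOF
}

gen_hacf 192.168.6.128 ha.cf.test1   # on test1 the peer is 192.168.6.128
gen_hacf 192.168.6.129 ha.cf.test2   # on test2 the peer is 192.168.6.129
```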

 

The authkeys file must be identical on the two hosts.

Run on test1:

# sudo scp /etc/ha.d/authkeys root@192.168.6.128:/etc/ha.d/

(As root you can omit sudo; this note is not repeated below.)
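If you need to (re)create authkeys before copying it, it can be generated with a random key; a sketch using the standard heartbeat authkeys format (the file lives at /etc/ha.d/authkeys on the real hosts):

```shell
# Generate a heartbeat authkeys file with a random SHA1 key.
KEY=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')
cat > authkeys <<EOF
auth 1
1 sha1 $KEY
EOF
chmod 600 authkeys   # heartbeat refuses to start if authkeys is readable by others
```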

Configure crm

Run the following commands on both test1 and test2:

# sudo vi /etc/passwd

Find the line hacluster:x:498:498:heartbeat user:/var/lib/heartbeat/cores/hacluster:/sbin/nologin (usually near the end)

and change /sbin/nologin to /bin/bash.

(Save and exit with :wq.)
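Editing /etc/passwd by hand works, but the same shell change can be scripted (on the real hosts `usermod -s /bin/bash hacluster` does it in one step); a sed sketch, shown here against a local copy of the line for safety:

```shell
# Demonstrate the shell change on a sample copy of the hacluster line.
# On the real hosts, run the sed against /etc/passwd, or simply:
#   usermod -s /bin/bash hacluster
echo 'hacluster:x:498:498:heartbeat user:/var/lib/heartbeat/cores/hacluster:/sbin/nologin' > passwd.sample
sed -i 's#^\(hacluster:.*\):/sbin/nologin$#\1:/bin/bash#' passwd.sample
```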

# sudo passwd hacluster

Enter the new password twice.

# su hacluster

Enter the password.

# crm_attribute --type crm_config --attr-name symmetric-cluster --attr-value true

# crm_attribute --type crm_config --attr-name stonith-enabled --attr-value false

# crm_attribute --type rsc_defaults --name resource-stickiness --update 100

 

(If you get an error saying the connection failed, start heartbeat first: # sudo service heartbeat start)

# exit

# vi ns.xml

............

<instance_attributes id="ip-alias-instance_attributes">

<nvpair id="ip-alias-instance_attributes-ip" name="ip" value="192.168.6.100"/>

<nvpair id="ip-alias-instance_attributes-nic" name="nic" value="eth0:0"/>

</instance_attributes>

<operations>

..............

 

<primitive class="ocf" id="tfs-name-server" provider="heartbeat" type="NameServer">

<instance_attributes id="tfs-name-server-instance_attributes">

<nvpair id="tfs-name-server-instance_attributes-basedir" name="basedir" value="the TFS installation path goes here"/>

<nvpair id="tfs-name-server-instance_attributes-nsip" name="nsip" value="192.168.6.129"/> (note: on test1 write 192.168.6.129, on test2 write 192.168.6.128; adjust the other host-specific values to your hosts in the same way)

<nvpair id="tfs-name-server-instance_attributes-nsport" name="nsport" value="8108"/>

<nvpair id="tfs-name-server-instance_attributes-user" name="user" value="the user name that starts tfs"/>

</instance_attributes>

..................

(Save and exit with :wq.)
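Before loading ns.xml it is worth sanity-checking the host-specific values; a sketch that writes a minimal fragment (test1's values, mirroring the listing above) and greps it:

```shell
# Write a minimal ns.xml fragment with test1's values and sanity-check it.
cat > ns.xml.check <<'EOF'
<primitive class="ocf" id="tfs-name-server" provider="heartbeat" type="NameServer">
  <instance_attributes id="tfs-name-server-instance_attributes">
    <nvpair id="tfs-name-server-instance_attributes-nsip" name="nsip" value="192.168.6.129"/>
    <nvpair id="tfs-name-server-instance_attributes-nsport" name="nsport" value="8108"/>
  </instance_attributes>
</primitive>
EOF

# On test1 nsip must be 192.168.6.129; on test2 it must be 192.168.6.128.
grep -q 'name="nsip" value="192.168.6.129"' ns.xml.check
```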

# sudo cp ns.xml /var/lib/heartbeat/crm/

# sudo chown hacluster:haclient /var/lib/heartbeat/crm/ns.xml

# su hacluster

# cibadmin --replace --obj_type=resources --xml-file /var/lib/heartbeat/crm/ns.xml

# exit

# sudo service heartbeat start (use restart if heartbeat was already started)

 

Verify HA

Wait a moment, or watch the log:

# sudo tail -f /var/log/ha-log (real-time view; press Ctrl+C to stop)

Run netstat -lntp on test1 and test2 respectively.

Only one of the two hosts has a nameserver process.


On the host that has the process, kill it (replace the PID with your own):

# sudo kill 49148

Then run, on the other host (this may take a few seconds):

# netstat -lntp

The nameserver is now running there.

This shows that when the nameserver on one host dies, HA starts the nameserver on the other host, keeping TFS running.
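The "is a nameserver listening" check can be wrapped in a small helper; a hedged sketch assuming `ss` from iproute2 is available (port 8108 is the listening port from ns.conf above):

```shell
# Return success (0) if some process is listening on the given TCP port.
port_listening() {
  ss -lnt 2>/dev/null | grep -q ":$1 "
}

# After killing the nameserver on one host, poll until the peer takes over:
#   until port_listening 8108; do sleep 1; done
```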

