Build HA clusters in Linux

Source: Internet
Author: User


HA (High Availability), also known as dual-machine hot standby, protects key services. Simply put, there are two machines, A and B. Normally A provides the service and B stands by; when A or its service goes down, the service switches to B, which continues serving. Commonly used open-source HA software includes heartbeat and keepalived; keepalived additionally provides load balancing.

Next we will build an HA cluster with heartbeat, using the nginx service as the protected (HA) service.
Test preparation:
Both machines run CentOS 6.6, each with an added network adapter eth1. eth1 connects to vmnet1 in host-only mode on the 192.168.11.0 segment; eth0 uses NAT mode on the 192.168.20.0 segment.

After booting, copy ifcfg-eth0 to ifcfg-eth1, change DEVICE to eth1 and the IP address to 192.168.11.20:
# cd /etc/sysconfig/network-scripts/
# cp ifcfg-eth0 ifcfg-eth1
# cat ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.11.20

After the configuration is complete, restart the network service and run ifconfig to verify the IP addresses of eth0 and eth1:

# /etc/init.d/network restart
# ifconfig
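Before (or after) restarting the network, the new ifcfg file can be sanity-checked for the expected device and address. A minimal sketch; `check_ifcfg` is a hypothetical helper, not a system command:

```shell
# Hypothetical helper: verify an ifcfg file names the expected
# device and IP address before the network service is restarted.
check_ifcfg() {
  local file=$1 dev=$2 ip=$3
  grep -q "^DEVICE=$dev$" "$file" && grep -q "^IPADDR=$ip$" "$file"
}

# Example:
#   check_ifcfg /etc/sysconfig/network-scripts/ifcfg-eth1 eth1 192.168.11.20
```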

Master configuration:

hostname: hello
eth0: 192.168.20.20
eth1: 192.168.11.20

Slave configuration:
hostname: web
eth0: 192.168.20.30
eth1: 192.168.11.30

1. Set the hostnames to hello and web;
2. The firewall and SELinux must be disabled on both the master and slave nodes:
# iptables -F
# setenforce 0
setenforce: SELinux is disabled

3. On both the master and slave, add the following to /etc/hosts (vi /etc/hosts):
192.168.20.20 hello
192.168.20.30 web

4. Install the EPEL extension source:
# yum install -y epel-release

5. Install heartbeat, libnet, and nginx on both machines:

# yum install -y heartbeat* libnet nginx

nginx is the service that heartbeat will protect;
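To tell which node is actually answering on the virtual IP during the failover tests later, it helps to give each node a distinguishable test page. A sketch; the default docroot below is the EPEL nginx package default and is an assumption that may differ on your build:

```shell
# Hypothetical helper: write a test page that names the serving host.
# The default docroot is an assumption (EPEL nginx default);
# pass another path as $1 to override it.
write_test_page() {
  local docroot=${1:-/usr/share/nginx/html}
  echo "served by $(hostname)" > "$docroot/index.html"
}

# Run once on each node: write_test_page
# Then `curl http://192.168.11.100/` shows which node holds the VIP.
```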
6. Master (hello) configuration
Heartbeat's configuration directory is /etc/ha.d;
copy the configuration-file templates into /etc/ha.d:
cd /usr/share/doc/heartbeat-3.0.4/
cp authkeys ha.cf haresources /etc/ha.d/
cd /etc/ha.d

vi authkeys // uncomment or add the following content:

auth 3
3 md5 Hello!

Change its permissions to 600:

chmod 600 authkeys

vi haresources // add:

hello 192.168.11.100/24/eth1:0 nginx

192.168.11.100 is the floating (virtual) IP address used to bind the service. The subnet mask is 24 bits; eth1:0 is the alias interface the virtual IP is bound to; nginx is the service run for the test;

Note: The haresources file settings of the two hosts must be identical.
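The fields of that haresources line can be read as: preferred node, virtual IP (with optional prefix and interface), then the managed services. A small illustrative parser (hypothetical helper, not part of heartbeat) makes the structure explicit:

```shell
# Illustration only: split a haresources line into its parts.
# Format: <preferred-node> <VIP>[/prefix[/iface[:alias]]] <service...>
parse_haresources() {
  set -- $1                 # word-split the line into its fields
  local node=$1 vip=${2%%/*}
  shift 2
  echo "node=$node vip=$vip services=$*"
}

# parse_haresources "hello 192.168.11.100/24/eth1:0 nginx"
# → node=hello vip=192.168.11.100 services=nginx
```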

vi ha.cf // change it to the following content:
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2                # heartbeat interval in seconds
deadtime 30                # declare the peer dead after 30s of silence
warntime 10                # log a "late heartbeat" warning after 10s
initdead 60                # startup grace period (at least twice deadtime)
udpport 694
ucast eth1 192.168.11.30   # unicast heartbeats to the peer (the slave's eth1)
auto_failback on
node hello
node web
ping 192.168.11.1          # ping node used to judge network health
respawn hacluster /usr/lib/heartbeat/ipfail

7. Copy the three configuration files from the master node to the slave node:

# cd /etc/ha.d/
# scp authkeys ha.cf haresources web:/etc/ha.d/

8. On the slave (web), only ha.cf needs editing:
vi /etc/ha.d/ha.cf // only one line changes:
change "ucast eth1 192.168.11.30" to "ucast eth1 192.168.11.20"
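The same one-line edit can be scripted; a sketch using sed, wrapped in a hypothetical helper so it can be dry-run against a copy of the file first:

```shell
# Hypothetical helper: point the slave's unicast heartbeat back at
# the master's eth1 address. $1 defaults to the live config; pass a
# copy of ha.cf to test the substitution safely.
slave_ucast_fix() {
  local cfg=${1:-/etc/ha.d/ha.cf}
  sed -i 's/^ucast eth1 192\.168\.11\.30$/ucast eth1 192.168.11.20/' "$cfg"
}

# On web: slave_ucast_fix
```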
9. Start heartbeat
Start the master first, then the slave:
# service heartbeat start

10. Check and test
Run ifconfig and check whether eth1:0 exists:
[root@yong ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:43:3D:32
          inet addr:192.168.20.20  Bcast:192.168.255.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe43:3d32/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:433 errors:0 dropped:0 overruns:0 frame:0
          TX packets:429 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:37095 (36.2 KiB)  TX bytes:75381 (73.6 KiB)
          Interrupt:18 Base address:0x2000

eth1      Link encap:Ethernet  HWaddr 00:0C:29:43:3D:3C
          inet addr:192.168.11.20  Bcast:192.168.11.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe43:3d3c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:396 errors:0 dropped:0 overruns:0 frame:0
          TX packets:399 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:83641 (81.6 KiB)  TX bytes:89725 (87.6 KiB)
          Interrupt:18 Base address:0x2080

eth1:0    Link encap:Ethernet  HWaddr 00:0C:29:43:3D:3C
          inet addr:192.168.11.100  Bcast:192.168.11.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:18 Base address:0x2080

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Check whether the nginx and heartbeat processes are running:

[root@yong ha.d]# ps aux | grep nginx
root    3731  0.0  0.1  5000  632 ?  Ss   nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody  3732  0.0  0.1  5200      ?  S    nginx: worker process

[root@yong ha.d]# ps aux | grep heartbeat
root    2765  0.3  1.3  6684 6676 ?  SLs  heartbeat: master control process
root    2770  0.0  1.2  6488 6480 ?  SL   heartbeat: FIFO reader
root    2771  0.0  1.2  6484 6476 ?  SL   heartbeat: write: ucast eth1
root    2772  0.0  1.2  6484 6476 ?  SL   heartbeat: read: ucast eth1
root    2773  0.0  1.2  6484 6476 ?  SL   heartbeat: write: ping 192.168.11.1
root    2774  0.0  1.2  6484 6476 ?  SL   heartbeat: read: ping 192.168.11.1
498     2787  0.0  0.2  5380 1488 ?  S    /usr/lib/heartbeat/ipfail

11. Test 1: block ping (ICMP) on the master
# iptables -I INPUT -p icmp -j DROP

Keep pinging 192.168.11.100 from a Windows client; when ICMP is blocked on the master, replies stop for a short while and then resume;

ifconfig on the slave now shows eth1:0, and the nginx process is present as well, indicating that the service has switched to the slave machine:
[root@web ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:97:C3:EC
          inet addr:192.168.255.30  Bcast:192.168.255.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe97:c3ec/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:460 errors:0 dropped:0 overruns:0 frame:0
          TX packets:469 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:37104 (36.2 KiB)  TX bytes:101866 (99.4 KiB)
          Interrupt:19 Base address:0x2000

eth1      Link encap:Ethernet  HWaddr 00:0C:29:97:C3:F6
          inet addr:192.168.11.30  Bcast:192.168.11.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe97:c3f6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1022 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1035 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:191275 (186.7 KiB)  TX bytes:182393 (178.1 KiB)
          Interrupt:18 Base address:0x2080

eth1:0    Link encap:Ethernet  HWaddr 00:0C:29:97:C3:F6
          inet addr:192.168.11.100  Bcast:192.168.11.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:18 Base address:0x2080
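The brief interruption during failover can be observed mechanically from any client on the 192.168.11.0/24 segment; a minimal probe sketch:

```shell
# One-shot probe: report whether an address answers a single ICMP echo
# within one second. Prints "up" or "down".
vip_state() {
  ping -c1 -W1 "$1" >/dev/null 2>&1 && echo up || echo down
}

# To watch for transitions during a failover test, e.g.:
#   while sleep 1; do echo "$(date +%T) $(vip_state 192.168.11.100)"; done
```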

12. Test 2: stop the heartbeat service on the master node.
# service heartbeat stop

Keep pinging 192.168.11.100 from the Windows client; after the heartbeat service stops, replies are interrupted for a short while and then resume;
ifconfig on the slave now shows eth1:0, and the nginx process is present as well, indicating that the service has switched to the slave machine;


13. Test 3: split-brain test — take down the eth1 heartbeat NIC (here, on the master):

Disabling eth1 (ifdown eth1) breaks the heartbeat line, which produces split-brain: the master and slave lose contact, and when the slave receives no heartbeat within deadtime, it considers the master dead.
Believing the master is dead, the slave takes over the service and binds the virtual IP, so both nodes end up running the service at the same time;
The master, now without eth1, is still running nginx:
[root@yong ha.d]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:43:3D:32
          inet addr:192.168.20.20  Bcast:192.168.255.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe43:3d32/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:11442 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15376 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:842833 (823.0 KiB)  TX bytes:10226838 (9.7 MiB)
          Interrupt:18 Base address:0x2000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:1163 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1163 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:150095 (146.5 KiB)  TX bytes:150095 (146.5 KiB)

[root@yong ha.d]# ps aux | grep nginx
root    11686  0.0  0.1  5000  636 ?      nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody  11687  0.0  0.1  5200      ?  S   nginx: worker process

Check the addresses on the slave machine: eth1:0 appears.
[root@localhost ha.d]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:97:C3:EC
          inet addr:192.168.255.30  Bcast:192.168.255.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe97:c3ec/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7901 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5891 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:603592 (589.4 KiB)  TX bytes:2094606 (1.9 MiB)
          Interrupt:19 Base address:0x2000

eth1      Link encap:Ethernet  HWaddr 00:0C:29:97:C3:F6
          inet6 addr: fe80::20c:29ff:fe97:c3f6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:20774 errors:0 dropped:0 overruns:0 frame:0
          TX packets:59144 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3513861 (3.3 MiB)  TX bytes:12998603 (12.3 MiB)
          Interrupt:18 Base address:0x2080

eth1:0    Link encap:Ethernet  HWaddr 00:0C:29:97:C3:F6
          inet addr:192.168.11.100  Bcast:0.0.0.0  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:18 Base address:0x2080

[root@localhost ha.d]# ps aux | grep nginx
root    8938  0.0  0.0  3684  584 ?      nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody  8940  0.0  0.1  4936      ?  S   nginx: worker process
nobody  8941  0.0  0.2  4936      ?  S   nginx: worker process

14. Test 4: set auto_failback off
Configure auto_failback off on both nodes and restart heartbeat. When ping is blocked on the master, the service fails over to the slave; after ping recovers on the master, the service does not fail back and continues to be served from the slave;
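The only configuration change for this test is the failback policy in /etc/ha.d/ha.cf on both nodes:

```
auto_failback off
```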


The HA (High Availability) setup is now complete.
