RHCS Cluster Architecture (1): Fence Implementation of Nginx High Availability

The role of fence

In the RHCS framework, the main role of fence is to prevent two servers from writing to the same resource at the same time, which would destroy the integrity and consistency of the resource and lead to split-brain. Fence devices fall into two categories:

Hardware fence: kicks a faulty server out of the cluster by cutting its power through a power-management device.

Software fence: kicks a faulty server out of the cluster by software means (for example over a management cable or through the operating system/hypervisor) instead of cutting power directly.

Virtual machine OS: RedHat 6.5
Physical machine OS: RedHat 7.2
Yum repository
[rhel6.5]
name=rhel6.5
baseurl=http://10.10.10.250/rhel6.5
gpgcheck=0

[HighAvailability]
name=HighAvailability
baseurl=http://10.10.10.250/rhel6.5/HighAvailability
gpgcheck=0

[LoadBalancer]
name=LoadBalancer
baseurl=http://10.10.10.250/rhel6.5/LoadBalancer
gpgcheck=0

[ScalableFileSystem]
name=ScalableFileSystem
baseurl=http://10.10.10.250/rhel6.5/ScalableFileSystem
gpgcheck=0

[ResilientStorage]
name=ResilientStorage
baseurl=http://10.10.10.250/rhel6.5/ResilientStorage
gpgcheck=0
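A minimal sketch of putting this repository file in place and verifying it, assuming it is saved as /etc/yum.repos.d/rhel6.5.repo (the file name is an assumption):

scp /etc/yum.repos.d/rhel6.5.repo root@10.10.10.4:/etc/yum.repos.d/     # copy to the other cluster node
yum clean all
yum repolist          # the HighAvailability, LoadBalancer, ScalableFileSystem and ResilientStorage repos should all be listed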
 
Host names and corresponding IPs
server1  ====>>  10.10.10.1  (nginx, ricci, luci)
server2  ====>>  10.10.10.2  (Apache)
server3  ====>>  10.10.10.3  (Apache)
server4  ====>>  10.10.10.4  (nginx, ricci)
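So that these names resolve the same way everywhere, /etc/hosts on each node can be extended as below (a sketch; the virtual IP 10.10.10.100 used later is included here for convenience):

10.10.10.1    server1
10.10.10.2    server2
10.10.10.3    server3
10.10.10.4    server4
10.10.10.100  www.dream.com      # virtual IP used by the nginx service group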
Nginx Installation
tar zxf /mnt/nginx-1.10.1.tar.gz
vim /mnt/nginx-1.10.1/src/core/nginx.h            ### hide the version number
  #define nginx_version      1010001
  #define NGINX_VERSION      "1.10.1"
  #define NGINX_VER          "nginx"              ### changed so the version is not shown

vim /mnt/nginx-1.10.1/auto/cc/gcc                 ### turn off debug mode
  # debug
  # CFLAGS="$CFLAGS -g"                           ### comment this line out

yum install -y pcre-devel gcc openssl-devel
./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-http_stub_status_module
make && make install

 
vim /usr/local/nginx/conf/nginx.conf                        ### add inside the http block
 upstream dream {
     server 10.10.10.2:80;
     server 10.10.10.3:80;
 }
 server {
     listen 80;
     server_name www.dream.com;
     location / {
         proxy_pass http://dream;
     }
 }
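Before handing nginx over to the cluster, the configuration can be checked and tried manually; a minimal sketch:

/usr/local/nginx/sbin/nginx -t                      # syntax check of nginx.conf
/usr/local/nginx/sbin/nginx                         # start nginx by hand for a first test
curl -H "Host: www.dream.com" http://127.0.0.1/     # should return a page from server2 or server3
/usr/local/nginx/sbin/nginx -s stop                 # stop it again; the cluster will manage it later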
nginx Startup script

Create the following script under /etc/init.d/ on server1 and server4:

#!/bin/bash
# nginx init script (placed in /etc/init.d/ on server1 and server4)
. /etc/rc.d/init.d/functions

nginx=${NGINX-/usr/local/nginx/sbin/nginx}
prog=nginx
RETVAL=0

start() {
    echo -n $"Starting $prog: "
    daemon $nginx
    RETVAL=$?
    echo
    return $RETVAL
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $nginx
    RETVAL=$?
    echo
}

reload() {
    echo -n $"Reloading $prog: "
    $nginx -s reload
    echo
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  status)
    status $nginx
    RETVAL=$?
    ;;
  restart)
    stop
    start
    ;;
  reload)
    reload
    ;;
  *)
    echo $"Usage: $prog {start|stop|restart|reload|status}"
    RETVAL=2
esac
exit $RETVAL
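A minimal sketch of installing and trying the script on server1 and server4 (assuming it is saved as /etc/init.d/nginx):

chmod +x /etc/init.d/nginx
/etc/init.d/nginx start
/etc/init.d/nginx status
/etc/init.d/nginx stop        # leave nginx stopped; the cluster service group will start it later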
Ricci and luci installation

server1: 10.10.10.1
yum install -y ricci luci
/etc/init.d/ricci restart
/etc/init.d/luci restart
passwd ricci                  ### on RHEL 6.1 and later a password must be set for the ricci user
chkconfig ricci on            ### make sure ricci starts on boot
 
server4: 10.10.10.4

yum install -y ricci
/etc/init.d/ricci restart
passwd ricci
chkconfig ricci on
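A quick sanity check of both daemons (a sketch; ricci listens on TCP 11111 and luci serves the web UI on port 8084):

# on server1 and server4
chkconfig --list ricci
netstat -antlp | grep 11111       # ricci
# on server1 only
netstat -antlp | grep 8084        # luci web interface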
 
Web page Settings

Note: if cluster creation fails, clear the old configuration with > /etc/cluster/cluster.conf on both server1 and server4, then repeat the web setup steps.
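If creation fails, the stale configuration can be cleared on both nodes before retrying; a minimal sketch:

# run on both server1 and server4, then repeat the web setup steps
> /etc/cluster/cluster.conf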

https://10.10.10.1:8084/

log in using the root user

Cluster Create:

At this point server1 and server4 download the packages and reboot; this can be watched with ps aux:


after reboot:

Failover domains settings:

Resource settings:

Service Group settings:


Click Add Resource to add the IP address and the script separately, then submit:

Query

server1:10.10.10.1

[root@server1 cluster]# cman_tool status
Version: 6.2.0
Config Version: 1
Cluster Name: 11
Cluster Id: 14140
Cluster Member: Yes
Cluster Generation: 80
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Node votes: 1
Quorum: 1
Active subsystems: 9
Flags: 2node
Ports Bound: 0
Node name: server1
Node ID: 1
Multicast addresses: 239.192.55.115
Node addresses: 10.10.10.1

[root@server1 cluster]# clustat                      ### check cluster status
Cluster Status for 11 @ Thu Apr 14:50:09 2018
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server1                                     1 Online, Local
 server4                                     2 Online

We can also use ip addr to query which node currently holds the VIP.
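A minimal sketch of checking which node currently owns the VIP:

# run on server1 or server4
ip addr show | grep 10.10.10.100     # the node that holds the VIP shows it on its interface
clustat                              # also shows which node is running the nginx service group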

Test

Add host name resolution on the physical machine:

vim /etc/hosts
10.10.10.100 www.dream.com

curl www.dream.com
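To see the load balancing, the request can simply be repeated a few times; a sketch:

# repeated requests should alternate between the pages served by server2 and server3
for i in 1 2 3 4; do curl -s www.dream.com; done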

We find that the requests are load-balanced. Then execute on server1:

From the physical machine the site is still reachable; the VIP has moved to server4 and the nginx service is running there. However, when echo c > /proc/sysrq-trigger is run there to simulate a kernel crash, server1 does not take over. This is where the powerful fence mechanism comes in, and it is set up below.

On the physical machine:

[root@foundation25 network-scripts]# mkdir /etc/cluster
[root@foundation25 network-scripts]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1       ### generate a random key
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.000149135 s, 858 kB/s

Copy the key to the two nodes:

scp /etc/cluster/fence_xvm.key root@10.10.10.1:/etc/cluster/
scp /etc/cluster/fence_xvm.key root@10.10.10.4:/etc/cluster/

Fence Settings:

[root@foundation23 mnt]# fence_virtd -c          ### if fence_virtd is not installed, install it with yum first
Module search path [/usr/lib64/fence-virt]:

Available backends:
    libvirt 0.1
Available listeners:
    serial 0.4
    multicast 1.2

Listener modules are responsible for accepting requests from fencing clients.

Listener module [multicast]:                     ### listener mode

The multicast listener module is designed for use in environments where the guests and hosts may communicate over a network using multicast.

The multicast address is the address that a client will use to send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:               ### multicast address

Using ipv4 as family.

Multicast IP Port [1229]:                        ### port; a different one can be specified

Setting a preferred interface causes fence_virtd to listen only on that interface. Normally, it listens on all interfaces. In environments where the virtual machines are using the host machine as a gateway, this *must* be set (typically to virbr0). Set to 'none' for no interface.

Interface [virbr0]: br0                          ### set this according to your own bridge/NIC name

The key file is the shared key information which is used to authenticate fencing requests. The contents of this file must be distributed to each physical host and virtual machine within a cluster.

Key File [/etc/cluster/fence_xvm.key]:

Backend modules are responsible for routing requests to the appropriate hypervisor or management layer.

Backend module [libvirt]:

Configuration complete.

=== Begin Configuration ===
backends {
    libvirt {
        uri = "qemu:///system";
    }
}

listeners {
    multicast {
        port = "1229";
        family = "ipv4";
        interface = "br0";
        address = "225.0.0.12";
        key_file = "/etc/cluster/fence_xvm.key";
    }
}

fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "libvirt";
    listener = "multicast";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y

[root@foundation23 mnt]# systemctl restart fence_virtd.service      ### restart the fence service; its configuration file is /etc/fence_virt.conf
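After the restart, it can be verified that the daemon is up and that the cluster nodes can reach it; a minimal sketch, assuming the fence-virt (fence_xvm) agent and the shared key are already present on the nodes:

# on the physical host: confirm the daemon is running
systemctl status fence_virtd.service
# on server1 or server4: list the virtual machines visible through fence_virtd
fence_xvm -o list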
Web page Settings

Fence devices Settings:

The method name can be customized; add the same fence settings to server4 as well:

UUID lookup method:
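The UUIDs can also be read on the physical host with virsh; a sketch, where vm1 and vm4 are placeholders for the actual libvirt domain names of server1 and server4:

# on the physical host
virsh list --all                 # shows the libvirt domain names
virsh domuuid vm1                # replace vm1 with the domain name of server1
virsh domuuid vm4                # replace vm4 with the domain name of server4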
Test

server1:10.10.10.1

echo c > /proc/sysrq-trigger                             ### test command: crashes the kernel

server1 is automatically powered off, reboots, and rejoins the cluster; if this happens, the fence setup is working.
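A sketch of watching the failover from server4 while server1 is being fenced:

# on server4, while server1 reboots
clustat                              # the nginx service group should be running on server4
ip addr show | grep 10.10.10.100     # the VIP should now be on server4
# once server1 is back, clustat should show both nodes Online again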
