An example of haproxy + keepalived + JBoss cluster implementation architecture

I. Basic Environment


Two IBM x3650m3 servers running CentOS 5.9 x64 are connected to an IBM DS3400 storage array, with the GFS file system underneath providing file sharing. The database is a separate, independent Oracle RAC cluster, so this architecture does not need to consider database issues.
For details on the GFS file system and its configuration, see the earlier article "An example of IBM x3650m3 + GFS + IPMI fence production environment configuration"; this article builds directly on it. The host names of the two servers are node01 and node02. Because the application architecture is simple and server resources are limited, high availability is achieved by running the two servers in a dual-active, mutual-standby arrangement. From: http://koumm.blog.51cto.com/

An example of IBM x3650m3 + GFS + IPMI fence production environment configuration:
http://koumm.blog.51cto.com/703525/1544971

The architecture diagram is as follows:

[Architecture diagram: http://s3.51cto.com/wyfs02/M02/47/BE/wKiom1P_Q97CwM_sAAJRJNxbE6Y045.jpg]

1. Network environment and IP address preparation (CentOS 5.9 x64)

1) Node 1 host name: node01

Note: The IBM server's dedicated IMM2 port (or the shared system MGMT port) must be connected to the switch; the IPMI address sits in the same segment as the local eth1:0 alias.

IPMI: 10.10.10.85/24
eth1: 192.168.233.83/24
eth1:0: 10.10.10.87/24

 

2) Node 2 host name: node02

IPMI: 10.10.10.86/24
eth1: 192.168.233.84/24
eth1:0: 10.10.10.88/24

 

3) Configure the hosts file on node01 and node02

# cat /etc/hosts

192.168.233.83 node01
192.168.233.84 node02
192.168.233.90 VIP
10.10.10.85 node01_ipmi
10.10.10.86 node02_ipmi

 

II. Dual-host keepalived Configuration

keepalived provides a single floating VIP address; in this example the VIP is 192.168.233.90.

1. Install keepalived Software

(keepalived-1.2.12 was used here and installs without problems.)

(1) Download the software package and install it on node01 and node02

wget http://www.keepalived.org/software/keepalived-1.2.12.tar.gz
tar zxvf keepalived-1.2.12.tar.gz
cd keepalived-1.2.12
./configure --prefix=/usr/local/keepalived
make && make install

cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
mkdir /etc/keepalived
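A quick sanity check that the keepalived binary is now on the path (it prints its version with -v):

# keepalived -v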

 

2. Create the keepalived configuration file

1) On node 1 (node01)

Modify the configuration file; the NIC bound by VRRP is eth1.

Note: the backup server's configuration differs only in its priority and its local source IP address (mcast_src_ip).

# vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    mcast_src_ip 192.168.233.83
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 876543
    }
    virtual_ipaddress {
        192.168.233.90
    }
}
2) On node 2 (node02)
# vi /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
   notification_email {
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    mcast_src_ip 192.168.233.84
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 876543
    }
    virtual_ipaddress {
        192.168.233.90
    }
}
3. Start the keepalived service on node01 and node02

1) Start the service and enable it at boot:

service keepalived start
chkconfig --add keepalived
chkconfig keepalived on

 

2) Test and observe VIP failover

(1) Observe the VIP address

On the master host, observe the VIP address as follows:

[[email protected] /]# service keepalived start
Starting keepalived: [ OK ]
[[email protected] /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
    link/ether e4:1f:13:65:0e:a0 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether e4:1f:13:65:0e:a2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.233.83/24 brd 192.168.230.255 scope global eth1
    inet 10.10.10.87/24 brd 10.10.10.255 scope global eth1:0
    inet 192.168.233.85/32 scope global eth1
4: usb0: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether e6:1f:13:57:0e:a3 brd ff:ff:ff:ff:ff:ff
[[email protected] /]#

Note: you can stop the keepalived service on the master and watch the VIP move to the backup node via cat /var/log/messages.
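As a sketch, a manual failover test might look like this (assuming the VIP 192.168.233.90 configured above; the exact VRRP log wording can vary by keepalived version):

# On node01 (master): stop keepalived, the VIP should leave eth1
service keepalived stop
ip a | grep 192.168.233.90

# On node02 (backup): syslog records the VRRP transition
grep VRRP /var/log/messages    # look for "Entering MASTER STATE"
ip a | grep 192.168.233.90     # the VIP should now be bound here

# Restart keepalived on node01; with priority 100 > 99 it takes the VIP back
service keepalived start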

 

III. haproxy Reverse Proxy Configuration

Configure both node01 and node02.

1. Add support for binding non-local IP addresses, so that haproxy can bind to the VIP even on the node that does not currently hold it.

# vi /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1

# sysctl -p
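To confirm the kernel accepted the setting:

# sysctl net.ipv4.ip_nonlocal_bind
net.ipv4.ip_nonlocal_bind = 1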

 

2. Install haproxy Software
# tar zxvf haproxy-1.4.25.tar.gz
# cd haproxy-1.4.25
# make TARGET=linux26 PREFIX=/usr/local/haproxy
# make install PREFIX=/usr/local/haproxy
# cd /usr/local/haproxy
# mkdir conf

 

3. Install socat
# wget http://www.dest-unreach.org/socat/download/socat-2.0.0-b5.tar.gz
# tar zxvf socat-2.0.0-b5.tar.gz
# cd socat-2.0.0-b5
# ./configure --disable-fips
# make && make install
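socat is installed so you can talk to the haproxy stats socket declared in the configuration below (stats socket /usr/local/haproxy/HaproxSocket). As a sketch, once haproxy is running:

# echo "show info" | socat unix-connect:/usr/local/haproxy/HaproxSocket stdio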

 

4. Create the haproxy configuration file

1) Create the configuration file on node01

# vi /usr/local/haproxy/conf/haproxy.cfg

global
        log 127.0.0.1 local0
        maxconn 65535
        chroot /usr/local/haproxy
        uid 99
        gid 99
        stats socket /usr/local/haproxy/HaproxSocket level admin
        daemon
        nbproc 1
        pidfile /usr/local/haproxy/haproxy.pid
        #debug

defaults
        log 127.0.0.1 local3
        mode http
        option httplog
        option httpclose
        option dontlognull
        option forwardfor
        option redispatch
        retries 2
        maxconn 2000
        balance source
        #balance roundrobin
        stats uri /haproxy-stats
        contimeout 5000
        clitimeout 50000
        srvtimeout 50000

listen web_proxy 0.0.0.0:80
        mode http
        option httpchk GET /test.html HTTP/1.0\r\nHost:192.168.233.90
        server node01 192.168.233.83:8000 weight 3 check inter 2000 rise 2 fall 1
        server node02 192.168.233.84:8000 weight 3 backup check inter 2000 rise 2 fall 1

listen stats_auth 0.0.0.0:91
        mode http
        stats enable
        stats uri /admin
        stats realm "Admin console"
        stats auth admin:123456
        stats hide-version
        stats refresh 10s
        stats admin if TRUE

 

2) Create the configuration file on node02

# vi /usr/local/haproxy/conf/haproxy.cfg

global
        log 127.0.0.1 local0
        maxconn 65535
        chroot /usr/local/haproxy
        uid 99
        gid 99
        stats socket /usr/local/haproxy/HaproxSocket level admin
        daemon
        nbproc 1
        pidfile /usr/local/haproxy/haproxy.pid
        #debug

defaults
        log 127.0.0.1 local3
        mode http
        option httplog
        option httpclose
        option dontlognull
        option forwardfor
        option redispatch
        retries 2
        maxconn 2000
        balance source
        #balance roundrobin
        stats uri /haproxy-stats
        contimeout 5000
        clitimeout 50000
        srvtimeout 50000

listen web_proxy 0.0.0.0:80
        mode http
        option httpchk GET /test.html HTTP/1.0\r\nHost:192.168.233.90
        server node01 192.168.233.83:8000 weight 3 backup check inter 2000 rise 2 fall 1
        server node02 192.168.233.84:8000 weight 3 check inter 2000 rise 2 fall 1

listen stats_auth 0.0.0.0:91
        mode http
        stats enable
        stats uri /admin
        stats realm "Admin_console"
        stats auth admin:123456
        stats hide-version
        stats refresh 10s
        stats admin if TRUE

Note: in this dual-node active/standby layout each node lists its local JBoss instance as primary and the peer as backup; removing the backup keyword turns it into plain load balancing.
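The httpchk probe above expects /test.html to exist on each JBoss backend (port 8000); if it is missing, haproxy marks the servers down. A minimal sketch for creating and verifying it, assuming the application root used in section IV (/cluster/zhzxxt/deploy/app.war) is served at the context root:

# echo "OK" > /cluster/zhzxxt/deploy/app.war/test.html
# curl -s -o /dev/null -w "%{http_code}\n" http://192.168.233.83:8000/test.html
# curl -s -o /dev/null -w "%{http_code}\n" http://192.168.233.84:8000/test.html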

 

5. Configure the haproxy log on node01 and node02

1) Configure haproxy logging

# vi /etc/syslog.conf
local3.*                /var/log/haproxy.log
local0.*                /var/log/haproxy.log
*.info;mail.none;authpriv.none;cron.none;local3.none    /var/log/messages

Note: the local3.none on the third line keeps haproxy entries out of /var/log/messages, so they are recorded only in haproxy.log.

# vi /etc/sysconfig/syslog
SYSLOGD_OPTIONS="-r -m 0"

Then execute manually:

# service syslog restart
# touch /var/log/haproxy.log
# chown nobody:nobody /var/log/haproxy.log    (uid/gid 99 is the nobody user haproxy runs as)
# chmod u+x /var/log/haproxy.log
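Before starting haproxy, you can confirm that syslogd routes the local3 facility as intended; a small check with the standard logger utility:

# logger -p local3.info "haproxy log routing test"
# tail -n 1 /var/log/haproxy.log    # the test line should appear here, not in /var/log/messages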

 

2) haproxy log rotation

# vi /root/system/cut_log.sh
#!/bin/bash
# author: koumm
# desc: cut haproxy log
# date: 2014-08-28
# version: v1.0
# modify:

# cut haproxy log
if [ -e /var/log/haproxy.log ]; then
    mv /var/log/haproxy.log /var/log/haproxy.log.bak
fi

if [ -e /var/log/haproxy.log.bak ]; then
    logrotate -f /etc/logrotate.conf
    chown nobody:nobody /var/log/haproxy.log
    chmod +x /var/log/haproxy.log
fi

sleep 1

if [ -e /var/log/haproxy.log ]; then
    rm -rf /var/log/haproxy.log.bak
fi

Note: run the script as root.

# crontab -e
59 23 * * * su - root -c '/root/system/cut_log.sh'

 

6. Create the haproxy init service

# vi /etc/init.d/haproxy
#!/bin/sh
# chkconfig: 345 85 15
# description: HAProxy is a TCP/HTTP reverse proxy which is particularly suited for high availability environments.

# Source function library.
if [ -f /etc/init.d/functions ]; then
    . /etc/init.d/functions
elif [ -f /etc/rc.d/init.d/functions ] ; then
    . /etc/rc.d/init.d/functions
else
    exit 0
fi

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ ${NETWORKING} = "no" ] && exit 0

[ -f /usr/local/haproxy/conf/haproxy.cfg ] || exit 1

RETVAL=0

start() {
    /usr/local/haproxy/sbin/haproxy -c -q -f /usr/local/haproxy/conf/haproxy.cfg
    if [ $? -ne 0 ]; then
        echo "Errors found in configuration file."
        return 1
    fi
    echo -n "Starting HAproxy: "
    daemon /usr/local/haproxy/sbin/haproxy -D -f /usr/local/haproxy/conf/haproxy.cfg -p /var/run/haproxy.pid
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/haproxy
    return $RETVAL
}

stop() {
    echo -n "Shutting down HAproxy: "
    killproc haproxy -USR1
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/haproxy
    [ $RETVAL -eq 0 ] && rm -f /var/run/haproxy.pid
    return $RETVAL
}

restart() {
    /usr/local/haproxy/sbin/haproxy -c -q -f /usr/local/haproxy/conf/haproxy.cfg
    if [ $? -ne 0 ]; then
        echo "Errors found in configuration file, check it with 'haproxy check'."
        return 1
    fi
    stop
    start
}

check() {
    /usr/local/haproxy/sbin/haproxy -c -q -V -f /usr/local/haproxy/conf/haproxy.cfg
}

rhstatus() {
    status haproxy
}

condrestart() {
    [ -e /var/lock/subsys/haproxy ] && restart || :
}

# See how we were called.
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    reload)
        restart
        ;;
    condrestart)
        condrestart
        ;;
    status)
        rhstatus
        ;;
    check)
        check
        ;;
    *)
        echo $"Usage: haproxy {start|stop|restart|reload|condrestart|status|check}"
        RETVAL=1
esac

exit $RETVAL

 

(2) Register and start the service on node01 and node02

chmod +x /etc/init.d/haproxy
chkconfig --add haproxy
chkconfig haproxy on
service haproxy start

 

(3) Test the monitoring page

http://192.168.233.85:91/admin
http://192.168.233.83:91/admin
http://192.168.233.84:91/admin

Because no application has been deployed yet, the proxy returns a 503 error.
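The same checks can be scripted with curl; a sketch, using the VIP configured earlier and the admin:123456 credentials from haproxy.cfg:

# curl -I http://192.168.233.90/                       # expect a 503 until the application is deployed
# curl -u admin:123456 http://192.168.233.90:91/admin  # haproxy statistics page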

 

IV. JBoss-EAP-4.3 Cluster Configuration Points

1) The basic JBoss and Java environment configuration is omitted; JBoss session replication is the focus of this example.
2) JBoss and the application code are deployed on the GFS cluster file system, so both nodes see the same content.
3) A monitoring script can be deployed to watch the JBoss application and restart it if the process dies or stops responding. The article does not cover this; a rough sketch follows below.
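Purely as an illustration of point 3, a minimal watchdog might look like the following; the URL, process pattern, paths, and restart command are all assumptions, not part of the original setup:

#!/bin/bash
# Hypothetical JBoss watchdog: restart the instance if the process is gone
# or the application stops answering HTTP.
APP_URL="http://127.0.0.1:8000/test.html"           # assumed health page
JBOSS_RUN="/cluster/jboss4/bin/run.sh -c node01"    # assumed server config name

if ! pgrep -f "org.jboss.Main" >/dev/null; then
    echo "$(date) JBoss process not found, restarting" >> /var/log/jboss_watchdog.log
    nohup $JBOSS_RUN >/dev/null 2>&1 &
elif ! curl -s --max-time 10 -o /dev/null "$APP_URL"; then
    echo "$(date) JBoss unresponsive, restarting" >> /var/log/jboss_watchdog.log
    pkill -f "org.jboss.Main"
    sleep 10
    nohup $JBOSS_RUN >/dev/null 2>&1 &
fi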

 

1. Add the JBoss session replication function

Configure session replication in the application:

# vi /cluster/zhzxxt/deploy/app.war/WEB-INF/web.xml

Add a <distributable/> element directly under <web-app>:

<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
    <distributable/>

 

2. Modify the cluster ID

1) Modify the cluster name

# vi /cluster/jboss4/server/node01/deploy/jboss-web-cluster.sar/META-INF/
# vi /cluster/jboss4/server/node02/deploy/jboss-web-cluster.sar/META-INF/

<attribute name="ClusterName">tomcat-app-cluster</attribute>

2) Use TCP for session replication. The original configuration used UDP multicast bound to the last IP address of the local machine; because each host has multiple network segments and the two servers' multicast traffic ended up on different segments, the replication processes could not reach each other. Commenting out the UDP multicast configuration and switching to TCP solved the problem.

<config>
    <TCP bind_addr="192.168.233.83" start_port="7810" loopback="true"
         tcp_nodelay="true" recv_buf_size="20000000" send_buf_size="640000"
         discard_incompatible_packets="true" enable_bundling="true"
         max_bundle_size="64000" max_bundle_timeout="30"
         use_incoming_packet_handler="true" use_outgoing_packet_handler="false"
         down_thread="false" up_thread="false" use_send_queues="false"
         sock_conn_timeout="300" skip_suspected_members="true"/>
    <TCPPING initial_hosts="192.168.233.83[7810],192.168.233.84[7810]" port_range="3"
         timeout="3000" down_thread="true" up_thread="true" num_initial_members="3"/>
    <MERGE2 max_interval="100000" down_thread="true" up_thread="true" min_interval="20000"/>
    <FD_SOCK down_thread="true" up_thread="true"/>
    <FD timeout="10000" max_tries="5" down_thread="true" up_thread="true" shun="true"/>
    <VERIFY_SUSPECT timeout="1500" down_thread="true" up_thread="true"/>
    <pbcast.NAKACK max_xmit_size="60000" use_mcast_xmit="false" gc_lag="0"
         retransmit_timeout="300,600,1200,2400,4800" down_thread="true" up_thread="true"
         discard_delivered_msgs="true"/>
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
         down_thread="false" up_thread="false"/>
    <pbcast.GMS print_local_addr="true" join_timeout="3000" down_thread="true" up_thread="true"
         join_retry_timeout="2000" shun="true" view_bundling="true"/>
    <FC max_credits="2000000" down_thread="true" up_thread="true" min_threshold="0.10"/>
    <FRAG2 frag_size="60000" down_thread="true" up_thread="true"/>
    <pbcast.STATE_TRANSFER down_thread="true" up_thread="true" use_flush="false"/>
</config>

Note: on node02, bind_addr is that node's own address, 192.168.233.84.
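Once both nodes are up, session replication can be spot-checked from a client; a sketch with curl (the cookie file and URL are illustrative; use any application page that creates a session):

# curl -c /tmp/jsession.txt http://192.168.233.90/    # one backend creates the session
# stop JBoss on the node that answered, then replay the same cookie:
# curl -b /tmp/jsession.txt http://192.168.233.90/    # the surviving node should accept the session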

Once fully configured, the whole architecture proved stable and reliable under testing.

This article is from the "koumm Linux technology blog"; please keep the source: http://koumm.blog.51cto.com/703525/1546326
