ActiveMQ Cluster Environment
Role     IP        Hostname
Master   10.0.0.4  cdh1
Slave 1  10.0.0.5  cdh2
Slave 2  10.0.0.6  cdh3
1. Install JDK on each server
Note: perform the same steps on every node (cdh1, cdh2, and cdh3).
To elevate permissions, run:
$ sudo su -
Step 1: Download and install JDK 7
[root@cdh1 ~]# rpm -qa | grep jdk | xargs rpm -e --nodeps
[root@cdh1 ~]# wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" <JDK 7u67 download URL from Oracle>
[root@cdh1 ~]# tar zxvf jdk-7u67-linux-x64.tar.gz -C /opt/
Step 2: Configure environment variables
cat >/etc/profile.d/java.sh <<EOF
export JAVA_HOME=/opt/jdk1.7.0_67
export PATH=\$PATH:\$JAVA_HOME/bin
EOF
Step 3: Apply the change immediately
[root@cdh1 ~]# source /etc/profile.d/java.sh
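If the JDK was installed correctly, the new variables should now resolve on every node: java -version should report 1.7.0_67 and JAVA_HOME should point at /opt/jdk1.7.0_67.
[root@cdh1 ~]# java -version
[root@cdh1 ~]# echo $JAVA_HOME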
[root@cdh1 ~]# echo "10.0.0.4 cdh1" >> /etc/hosts
[root@cdh1 ~]# echo "10.0.0.5 cdh2" >> /etc/hosts
[root@cdh1 ~]# echo "10.0.0.6 cdh3" >> /etc/hosts
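As an optional sanity check, each node should now resolve the other two by name:
[root@cdh1 ~]# ping -c 1 cdh2
[root@cdh1 ~]# ping -c 1 cdh3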
[root@cdh1 ~]# curl -LO http://archive.cloudera.com/cdh5/one-click-install/redhat/6/x86_64/cloudera-cdh-5-0.x86_64.rpm
[root@cdh1 ~]# yum localinstall cloudera-cdh-5-0.x86_64.rpm -y
[root@cdh1 ~]# yum clean all -y
[root@cdh1 ~]# yum repolist
[root@cdh1 ~]# rpm --import http://archive.cloudera.com/cdh5/redhat/5/x86_64/cdh/RPM-GPG-KEY-cloudera
[root@cdh1 ~]# yum install zookeeper* -y
cat >/etc/zookeeper/conf/zoo.cfg <<EOF
maxClientCnxns=50
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=cdh1:2888:3888
server.2=cdh2:2888:3888
server.3=cdh3:2888:3888
EOF
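The same zoo.cfg must be present on all three nodes. One way to distribute it, a sketch assuming root SSH access between the nodes is already configured:
[root@cdh1 ~]# scp /etc/zookeeper/conf/zoo.cfg cdh2:/etc/zookeeper/conf/
[root@cdh1 ~]# scp /etc/zookeeper/conf/zoo.cfg cdh3:/etc/zookeeper/conf/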
On cdh1:
[root@cdh1 ~]# /etc/init.d/zookeeper-server init --myid=1 && /etc/init.d/zookeeper-server start
On cdh2:
[root@cdh2 ~]# /etc/init.d/zookeeper-server init --myid=2 && /etc/init.d/zookeeper-server start
On cdh3:
[root@cdh3 ~]# /etc/init.d/zookeeper-server init --myid=3 && /etc/init.d/zookeeper-server start
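Once all three servers are running, the ensemble state can be checked with ZooKeeper's stat command; exactly one node should report Mode: leader and the other two Mode: follower (assuming nc is available):
[root@cdh1 ~]# echo stat | nc cdh1 2181 | grep Mode
[root@cdh1 ~]# echo stat | nc cdh2 2181 | grep Mode
[root@cdh1 ~]# echo stat | nc cdh3 2181 | grep Mode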
[root@cdh1 ~]# zookeeper-client -server cdh1:2181
Step 1: Download the latest ActiveMQ
[root@cdh1 ~]# wget http://www.us.apache.org/dist/activemq/5.11.1/apache-activemq-5.11.1-bin.tar.gz
Step 2: Extract the archive
[root@cdh1 ~]# tar zxvf apache-activemq-5.11.1-bin.tar.gz -C /opt/
[root@cdh1 ~]# cd /opt/apache-activemq-5.11.1/conf
6.1. Before: activemq.xml
<persistenceAdapter>
    <kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
6.2. After: activemq.xml
<persistenceAdapter>
    <replicatedLevelDB
        directory="${activemq.data}/leveldb"
        replicas="3"
        bind="tcp://0.0.0.0:0"
        zkAddress="cdh1:2181,cdh2:2181,cdh3:2181"
        hostname="cdh1"
        sync="local_disk"
        zkPath="/activemq/leveldb-stores"
        />
</persistenceAdapter>
Note: the hostname attribute must be set to each node's own hostname, so its value is different on every node (cdh1, cdh2, cdh3); see the sketch below.
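For example, if the same activemq.xml is copied from cdh1 to the other nodes, the hostname attribute can be adjusted in place (a sketch; adapt the paths to however the file is distributed):
[root@cdh2 ~]# sed -i 's/hostname="cdh1"/hostname="cdh2"/' /opt/apache-activemq-5.11.1/conf/activemq.xml
[root@cdh3 ~]# sed -i 's/hostname="cdh1"/hostname="cdh3"/' /opt/apache-activemq-5.11.1/conf/activemq.xml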
7. Testing ActiveMQ high availability
Step 1: Start ActiveMQ on each node
[root@cdh1 ~]# cd /opt/apache-activemq-5.11.1/
[root@cdh1 apache-activemq-5.11.1]# ./bin/activemq start
Step 2: Verify the service on each node
Note: only the elected master (cdh1 here) listens on port 61616; the other two nodes stay in standby until a new master is elected.
[root@cdh1 ~]# netstat -tulpn | grep 61616
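The election can also be inspected directly in ZooKeeper: each running broker registers an entry under the configured zkPath (exact output varies by version):
[root@cdh1 ~]# zookeeper-client -server cdh1:2181 ls /activemq/leveldb-stores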
Step 3: Stop ActiveMQ on cdh1
[root@cdh1 ~]# cd /opt/apache-activemq-5.11.1/
[root@cdh1 apache-activemq-5.11.1]# ./bin/activemq stop
Step 4: Compare the output of cdh1 and cdh2
Note: cdh2 has taken over cdh1's work; repeating the netstat check from step 2 shows port 61616 now listening on cdh2 instead of cdh1.
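To watch the failover from a client's point of view, the producer/consumer utilities bundled with ActiveMQ can be pointed at a failover: URL that lists all three brokers; the client reconnects to the new master automatically when the current master is stopped. This is a sketch, and the producer/consumer options shown here may differ slightly between ActiveMQ versions:
[root@cdh1 apache-activemq-5.11.1]# ./bin/activemq producer --brokerUrl "failover:(tcp://cdh1:61616,tcp://cdh2:61616,tcp://cdh3:61616)" --destination queue://TEST --messageCount 100
[root@cdh1 apache-activemq-5.11.1]# ./bin/activemq consumer --brokerUrl "failover:(tcp://cdh1:61616,tcp://cdh2:61616,tcp://cdh3:61616)" --destination queue://TEST --messageCount 100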