ActiveMQ High-availability cluster installation, configuration (ZooKeeper + LevelDB)


Build ActiveMQ clusters based on ZooKeeper and LevelDB

Official documentation: http://activemq.apache.org/replicated-leveldb-store.html

Cluster schematic diagram:

http://activemq.apache.org/replicated-leveldb-store.data/replicated-leveldb-store.png

The principle of high availability:

All ActiveMQ brokers register with the ZooKeeper cluster. Only one broker, the master, provides service; the others run in standby as slaves. If the master fails, ZooKeeper elects a new master from among the slaves.

Slaves connect to the master and synchronize its storage state; slaves do not accept client connections. All storage operations are replicated to the connected slaves. If the master goes down, the most up-to-date slave becomes the new master. A failed node rejoins the cluster after recovery and connects to the current master as a slave.

Every message operation that must be synchronized to disk waits until the storage state has been replicated to a quorum of nodes before completing. The quorum size is replicas/2 + 1, so with replicas=3 the quorum is (3/2) + 1 = 2. Since the master itself counts toward the quorum, it only has to wait for 2 - 1 = 1 slave to finish storing the update before reporting success. When a new master is elected, at least a quorum of nodes must still be online so that the node holding the latest state can be found and promoted to master. It is therefore recommended to run at least 3 replica nodes, so that the failure of a single node does not interrupt the service. (The principle is similar to ZooKeeper's own high-availability implementation.)
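The quorum arithmetic above can be sketched as a small helper (illustrative only; these method names are not part of ActiveMQ):

```java
// Quorum arithmetic for a replicated LevelDB store (illustrative sketch).
public class Quorum {
    // Minimum number of nodes that must be online: replicas/2 + 1.
    static int quorumSize(int replicas) {
        return replicas / 2 + 1;
    }

    // The master itself counts toward the quorum, so it waits for
    // quorumSize - 1 slave acknowledgements before completing a write.
    static int slaveAcksRequired(int replicas) {
        return quorumSize(replicas) - 1;
    }

    public static void main(String[] args) {
        System.out.println(quorumSize(3));        // 2
        System.out.println(slaveAcksRequired(3)); // 1
    }
}
```

With replicas=5 the quorum would be 3, and the master would wait for 2 slave acknowledgements.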


1. ActiveMQ Cluster deployment plan:

Environment: CentOS 6.6 x64, JDK8

Version: ActiveMQ 5.13.4

ZooKeeper cluster Environment: 192.168.1.11:2181,192.168.1.12:2181,192.168.1.13:2181

(ZooKeeper cluster Deployment please refer to http://bobbie.blog.51cto.com/8986511/1912740)

host            cluster port    management port    install directory
192.168.1.14    61616           8161               /usr/local/activemq/
192.168.1.15    61616           8161               /usr/local/activemq/
192.168.1.16    61616           8161               /usr/local/activemq/

Note: these two ports are only open on the current master; the 2 slaves do not listen on them.

2. Open the corresponding ports in the firewall:

vim /etc/sysconfig/iptables

# MQ cluster
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8161 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 61616 -j ACCEPT

3. Download and extraction: see http://bobbie.blog.51cto.com/8986511/1913006

4. Cluster configuration:

Configure the persistence adapter in conf/activemq.xml on each of the 3 ActiveMQ nodes, modifying bind, zkAddress, hostname and zkPath as appropriate. Note: every ActiveMQ node must use the same brokerName, otherwise it cannot join the cluster.

Persistence configuration on node 1:

vim /usr/local/activemq/conf/activemq.xml

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="cluster" dataDirectory="${activemq.data}">
    ...
    <persistenceAdapter>
        <replicatedLevelDB
            directory="${activemq.data}/leveldb"
            replicas="3"
            bind="tcp://0.0.0.0:0"
            zkAddress="192.168.1.11:2181,192.168.1.12:2181,192.168.1.13:2181"
            hostname="192.168.1.14"
            sync="local_disk"
            zkPath="/activemq/leveldb-stores"/>
    </persistenceAdapter>

Persistence configuration on node 2:

vim /usr/local/activemq/conf/activemq.xml

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="cluster" dataDirectory="${activemq.data}">
    ...
    <persistenceAdapter>
        <replicatedLevelDB
            directory="${activemq.data}/leveldb"
            replicas="3"
            bind="tcp://0.0.0.0:0"
            zkAddress="192.168.1.11:2181,192.168.1.12:2181,192.168.1.13:2181"
            hostname="192.168.1.15"
            sync="local_disk"
            zkPath="/activemq/leveldb-stores"/>
    </persistenceAdapter>

Persistence configuration on node 3:

vim /usr/local/activemq/conf/activemq.xml

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="cluster" dataDirectory="${activemq.data}">
    ...
    <persistenceAdapter>
        <replicatedLevelDB
            directory="${activemq.data}/leveldb"
            replicas="3"
            bind="tcp://0.0.0.0:0"
            zkAddress="192.168.1.11:2181,192.168.1.12:2181,192.168.1.13:2181"
            hostname="192.168.1.16"
            sync="local_disk"
            zkPath="/activemq/leveldb-stores"/>
    </persistenceAdapter>

5. Start the 3 ActiveMQ nodes in turn:

/etc/init.d/activemq start

6. Cluster node state analysis:

After the cluster starts, you can see in ZooKeeper that ActiveMQ has registered 3 nodes under /activemq/leveldb-stores: 00000000000, 00000000001 and 00000000002. Inspecting the first node, 00000000000, shows that its elected value is not null, which means that node is the master; the other two nodes are slaves.

7. Cluster availability testing:

ActiveMQ clients can only connect to the master broker; the slave brokers do not accept connections. Clients should therefore connect using the failover transport:

failover:(tcp://192.168.1.14:61616,tcp://192.168.1.15:61616,tcp://192.168.1.16:61616)?randomize=false
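A failover URI like the one above can be assembled from the broker list; a minimal sketch (buildFailoverUri is a hypothetical helper, not an ActiveMQ API):

```java
import java.util.Arrays;
import java.util.List;

public class FailoverUri {
    // Joins broker addresses into a failover transport URI.
    static String buildFailoverUri(List<String> brokers, boolean randomize) {
        return "failover:(" + String.join(",", brokers) + ")?randomize=" + randomize;
    }

    public static void main(String[] args) {
        String uri = buildFailoverUri(
            Arrays.asList("tcp://192.168.1.14:61616",
                          "tcp://192.168.1.15:61616",
                          "tcp://192.168.1.16:61616"),
            false);
        System.out.println(uri);
    }
}
```

With randomize=false the client tries the brokers in the listed order; only the current master will accept the connection, and on failover the client reconnects to whichever node has become the new master.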

If one ActiveMQ node or one ZooKeeper node goes down, the ActiveMQ service keeps working. But if only one ActiveMQ node remains, it cannot be elected master and ActiveMQ stops serving requests; likewise, if only one ZooKeeper node remains alive, ActiveMQ cannot provide service, no matter how many ActiveMQ nodes are still up.


Single-node configuration: http://bobbie.blog.51cto.com/8986511/1913006


This article is from the "Focus" blog; please keep the source: http://bobbie.blog.51cto.com/8986511/1913052

