ZooKeeper + ActiveMQ Cluster Message Middleware Construction


ZooKeeper is a distributed, open-source coordination service for distributed applications. It provides a simple set of primitives that distributed applications can build on to implement synchronization, configuration maintenance, naming services, and so on. ZooKeeper is a sub-project of Hadoop. In distributed applications, lock mechanisms are easy to get wrong and message-based coordination is unsuitable for some applications, so a reliable, scalable, distributed, configurable coordination mechanism is needed to unify the state of the system.

Principle: the core of ZooKeeper is atomic broadcast, which keeps the servers in sync with one another. The protocol that implements this mechanism is called the ZAB protocol. ZAB has two modes: recovery mode (leader election) and broadcast mode (synchronization). When the service starts, or after the leader crashes, ZAB enters recovery mode; recovery mode ends once a leader has been elected and a majority of the servers have completed state synchronization with it. State synchronization ensures that the leader and the servers hold the same system state. To guarantee the sequential consistency of transactions, ZooKeeper identifies each transaction with an incrementing transaction ID (zxid), and every proposal is stamped with a zxid when it is issued. The zxid is a 64-bit number: the high 32 bits are the epoch, which identifies a change of leadership (each newly elected leader gets a new epoch, marking its reign), and the low 32 bits are an incrementing counter.
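As a minimal illustration with a made-up zxid (not taken from a real ensemble), the epoch and counter can be split out with 64-bit shell arithmetic:

zxid=$(( (2 << 32) | 5 ))            # hypothetical zxid: epoch 2, counter 5
epoch=$(( zxid >> 32 ))              # high 32 bits -> 2
counter=$(( zxid & 0xffffffff ))     # low 32 bits  -> 5
echo "epoch=$epoch counter=$counter"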
Each server is in one of three states while running:
LOOKING: the server does not know who the leader is and is searching for one.
LEADING: the server is the elected leader.
FOLLOWING: a leader has been elected and the server is synchronizing with it.


I. Preparation

The environment requires six machines, and the hosts file must be consistent on all of them.

Edit the /etc/hosts file:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.105 node105
192.168.1.106 node106
192.168.1.107 node107
192.168.1.108 node108
192.168.1.109 node109
192.168.1.110 node110

The host names are set to node105 through node110, in sequence.
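Setting the hostname depends on the distribution; on a systemd-based system (an assumption, since the original does not state which OS is used) it can be done as follows, then name resolution verified against /etc/hosts:

hostnamectl set-hostname node105   # run the matching command on each node
ping -c 1 node106                  # the names in /etc/hosts should resolve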

To start the installation:

Log in as root and copy jdk-7u76-linux-x64.tar.gz to /opt.

cd /opt

tar zxvf jdk-7u76-linux-x64.tar.gz

Unpacking produces a directory named jdk1.7.0_76.

vim /etc/profile

Append the following lines at the end of the file:

export JAVA_HOME=/opt/jdk1.7.0_76
export JAVA_BIN=/opt/jdk1.7.0_76/bin
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib/rt.jar
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JAVA_BIN PATH CLASSPATH

source /etc/profile
This makes the variables take effect.
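To confirm that the JDK is active, check the version; for this package it should report 1.7.0_76 (the exact output wording varies by build):

java -version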


II. Deploying ZooKeeper (on three nodes)
Deploy ZooKeeper on the three nodes 192.168.1.105, 192.168.1.106, and 192.168.1.107.
Upload zookeeper-3.4.6.tar.gz to /opt.
cd /opt
tar xzf zookeeper-3.4.6.tar.gz
ln -s zookeeper-3.4.6 zookeeper

mkdir /opt/zookeeper/data
This is the ZK data storage directory.
cd /opt/zookeeper/data
vi myid
If the node is node105 the content is 1; if node106, 2; if node107, 3.
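Alternatively (equivalent to editing the file by hand), each node can write its own myid in one command; shown here for node105, use 2 and 3 on node106 and node107:

echo 1 > /opt/zookeeper/data/myid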
mkdir /opt/zookeeper/logs
This is the ZK log storage directory.

cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg
ZooKeeper reads conf/zoo.cfg at startup; there is no zoo.cfg under conf/ by default, so we generate it from conf/zoo_sample.cfg.
vi /opt/zookeeper/conf/zoo.cfg
Add the following content:
dataDir=/opt/zookeeper/data
dataLogDir=/opt/zookeeper/logs

server.1=node105:2888:3888
server.2=node106:2888:3888
server.3=node107:2888:3888

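For reference, a complete zoo.cfg for this ensemble might look like the following. The tickTime, initLimit, syncLimit, and clientPort values are the zoo_sample.cfg defaults (the original only lists the lines it adds), and clientPort 2181 matches the zkAddress used in the ActiveMQ configuration later:

tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/opt/zookeeper/data
dataLogDir=/opt/zookeeper/logs
server.1=node105:2888:3888
server.2=node106:2888:3888
server.3=node107:2888:3888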


Start the service on each node:
/opt/zookeeper/bin/zkServer.sh start
/opt/zookeeper/bin/zkServer.sh status

node105:
/opt/zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower

node106:
/opt/zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower

node107:
/opt/zookeeper/bin/zkServer.sh status
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: leader

As shown, node107 has been elected leader.
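As an optional sanity check (not part of the original write-up), the ensemble can be queried with the bundled CLI from any node; a fresh install should list the root znode:

/opt/zookeeper/bin/zkCli.sh -server node105:2181
ls /      # inside the zkCli shell; expect something like [zookeeper]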



ActiveMQ is an open-source JMS product released under the Apache 2.0 license. Its features are:
1) Provides both the point-to-point and the publish/subscribe messaging models;
2) Supports open-source application servers such as JBoss and Geronimo, and supports Spring's message-driven framework;
3) Adds a peer-to-peer transport layer that can be used to create reliable peer JMS network connections;
4) Ships with JMS infrastructure services such as message persistence, transactions, and cluster support.


III. Deploying a single ActiveMQ cluster
1. Planning
The first cluster consists of 192.168.1.105, 192.168.1.106, and 192.168.1.107; assume it is named cluster001 (ZooKeeper + MQ).
If there is a second cluster, it consists of 192.168.1.108, 192.168.1.109, and 192.168.1.110; assume it is named cluster002 (MQ only).

This document demonstrates only the deployment of cluster001; the deployment of cluster002 is similar.

2. Deploying the cluster001 cluster

1) Create the /opt/activemq/cluster001 directory on each of the three hosts:
$ mkdir -p /opt/activemq/cluster001
Upload apache-activemq-5.11.1-bin.tar.gz to the /opt/activemq/cluster001 directory.

2) Unpack and rename by node:
$ cd /opt/activemq/cluster001
$ tar -xvf apache-activemq-5.11.1-bin.tar.gz
$ mv apache-activemq-5.11.1 node-0x
(x is the node number: 1 for node105, 2 for node106, 3 for node107; the same below.)

3) Cluster configuration:
Configure the persistence adapter in conf/activemq.xml on all three ActiveMQ nodes, modifying bind, zkAddress, hostname, and zkPath for each node. Note: every broker must use the same brokerName, otherwise it cannot join the cluster. With replicas="3", the store becomes available once a quorum (replicas/2 + 1 = 2) of nodes is online, and only the elected master accepts client connections.
vim /opt/activemq/cluster001/node-01/conf/activemq.xml
vim /opt/activemq/cluster001/node-02/conf/activemq.xml
vim /opt/activemq/cluster001/node-03/conf/activemq.xml



Configuration in node-01:
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="cluster001" dataDirectory="${activemq.data}">
  <persistenceAdapter>
    <!-- <kahaDB directory="${activemq.data}/kahadb"/> -->
    <replicatedLevelDB
      directory="${activemq.data}/leveldb"
      replicas="3"
      bind="tcp://0.0.0.0:62621"
      zkAddress="192.168.1.105:2181,192.168.1.106:2181,192.168.1.107:2181"
      hostname="node105"
      zkPath="/activemq/leveldb-stores"/>
  </persistenceAdapter>
</broker>


<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry topic=">">
        <!-- The constantPendingMessageLimitStrategy is used to prevent
             slow topic consumers from blocking producers and affecting
             other consumers by limiting the number of messages that are
             retained. For more information, see:
             http://activemq.apache.org/slow-consumer-handling.html
        -->
        <pendingMessageLimitStrategy>
          <constantPendingMessageLimitStrategy limit="1000"/>
        </pendingMessageLimitStrategy>
      </policyEntry>

      <!-- The following policyEntry is the code to be added for the cluster -->
      <policyEntry queue=">" enableAudit="false">
        <networkBridgeFilterFactory>
          <conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true"/>
        </networkBridgeFilterFactory>
      </policyEntry>

    </policyEntries>
  </policyMap>
</destinationPolicy>


Configuration in node-02 (identical except for the hostname; node-03 likewise uses hostname="node107"):
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="cluster001" dataDirectory="${activemq.data}">
  <persistenceAdapter>
    <!-- <kahaDB directory="${activemq.data}/kahadb"/> -->
    <replicatedLevelDB
      directory="${activemq.data}/leveldb"
      replicas="3"
      bind="tcp://0.0.0.0:62621"
      zkAddress="192.168.1.105:2181,192.168.1.106:2181,192.168.1.107:2181"
      hostname="node106"
      zkPath="/activemq/leveldb-stores"/>
  </persistenceAdapter>
</broker>

This article is from the "7835882" blog; please be sure to keep this source: http://crfsz.blog.51cto.com/7835882/1872781
