First install RabbitMQ, then read on.
Article title: "RabbitMQ Single-Node Installation Notes"
Article address: http://www.bbtang.info/591.html
You also need to modify the hosts file:

    127.0.0.1   rabbitmq1 localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.1.254   rabbitmq1
    192.168.1.22    rabbitmq2
The 127.0.0.1 line differs per machine (each lists its own hostname); the entries at the bottom are the same on both.
PS: rabbitmq1 runs the main service; rabbitmq2 acts as the joining node.
Also, each machine's actual hostname must match the name used here; otherwise, after a machine restart the hostname will be wrong and joining the cluster will run into problems.
You can change it permanently by editing the HOSTNAME option in /etc/sysconfig/network (vim /etc/sysconfig/network), or temporarily with hostname rabbitmq1.
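A quick sanity check that the name is in place and resolves the way the cluster expects (a minimal sketch; assumes the standard hostname and ping utilities):

    hostname             # should print rabbitmq1 on this machine
    ping -c 1 rabbitmq2  # should resolve via /etc/hosts to 192.168.1.22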
Note: remember to start RabbitMQ once the installation is complete Oh ~

    ./rabbitmq-server start
Description
RabbitMQ clusters depend on Erlang clustering to work, so the Erlang cluster has to be in place first. The nodes of an Erlang cluster recognize each other through a magic cookie stored in $HOME/.erlang.cookie (since I installed as the root user, mine is at /root/.erlang.cookie). The file has 400 permissions. We must make sure the cookie is identical on every node, otherwise the nodes cannot communicate with each other.
Copy cookie content
Open the file and copy the contents of one server's .erlang.cookie over to the other machine. It is best to copy the contents rather than the file itself, because wrong file permissions will cause problems. Since the file is read-only, use wq! in vim to force the save when you exit.
Common cluster
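A minimal sketch of the copy over SSH, assuming a root install as in this article (so the cookie lives at /root/.erlang.cookie) and that rabbitmq2 is reachable by name; RabbitMQ should not be running while the cookie changes:

    # on rabbitmq1: push the cookie to the node machine
    scp /root/.erlang.cookie root@rabbitmq2:/root/.erlang.cookie
    # on rabbitmq2: restore the required 400 permission
    chmod 400 /root/.erlang.cookie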
For a given queue, the message entities exist on only one node; nodes A and B share only the same metadata, i.e. the queue structure.
When a message enters the queue on node A and a consumer pulls from node B, RabbitMQ forwards the message between A and B on demand: it takes the message entity out of A and delivers it to the consumer via B.
So consumers should try to connect to every node and fetch messages from each, i.e. establish a physical queue on multiple nodes for the same logical queue. Otherwise, whether consumers connect to A or to B, the messages always come out of A, which creates a bottleneck.
One problem with this pattern is that when node A fails, node B cannot reach the message entities on A that have not yet been consumed.
If the messages were persisted, they can be consumed again once node A recovers; if not, they are lost.
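To watch the persistence caveat in practice, you can create a durable queue and push a persistent message with the management plugin's rabbitmqadmin tool, then restart the queue's node and see whether the message survives. A sketch, with names of my own choosing:

    # declare a durable queue (the queue definition survives a broker restart)
    rabbitmqadmin declare queue name=demo durable=true
    # publish a persistent message; delivery_mode 2 writes it to disk
    rabbitmqadmin publish routing_key=demo payload="hello" properties='{"delivery_mode":2}'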
Cluster configuration
Main service configuration script:

    ./rabbitmqctl stop_app
    ./rabbitmqctl reset
    ./rabbitmqctl start_app
This is executed on rabbitmq1. You can actually skip it and run only the script below on the node server, but make sure the RabbitMQ service is started normally first.
Node service configuration script:

    ./rabbitmqctl stop_app
    ./rabbitmqctl reset
    ./rabbitmqctl join_cluster --ram rabbit@rabbitmq1
    ./rabbitmqctl start_app
The only difference from the main service script is the third line. --ram means the node joins as a RAM node; if you want it to join as a disk node, leave out the --ram parameter and write line 3 like this:

    ./rabbitmqctl join_cluster rabbit@rabbitmq1
PS: as long as the node list contains the node itself, it becomes a disk node. Every RabbitMQ cluster must have at least one disk node.
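If you later change your mind about RAM versus disk, RabbitMQ 3.x can flip the type without rejoining; a sketch, run on the node being changed:

    ./rabbitmqctl stop_app
    ./rabbitmqctl change_cluster_node_type disc   # or: ram
    ./rabbitmqctl start_app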
Now run the script on the master node, wait for it to finish, and then run the node service script. Don't get the order wrong Oh ~
After both have run, check the node status on each machine:

    ./rabbitmqctl cluster_status

You will see something like the following on each.
Cluster status as seen from rabbitmq1:

    [root@rabbitmq1 sbin]# ./rabbitmqctl cluster_status
    Cluster status of node rabbit@rabbitmq1 ...
    [{nodes,[{disc,[rabbit@rabbitmq1]},{ram,[rabbit@rabbitmq2]}]},
     {running_nodes,[rabbit@rabbitmq2,rabbit@rabbitmq1]},
     {partitions,[]}]
    ...done.
Cluster status as seen from rabbitmq2:

    [root@rabbitmq2 sbin]# ./rabbitmqctl cluster_status
    Cluster status of node rabbit@rabbitmq2 ...
    [{nodes,[{disc,[rabbit@rabbitmq1]},{ram,[rabbit@rabbitmq2]}]},
     {running_nodes,[rabbit@rabbitmq1,rabbit@rabbitmq2]},
     {partitions,[]}]
    ...done.
The output on the two machines is almost identical; that means your cluster was built successfully.
With that, the common cluster mode is set up and complete.
Mirrored queues
The configuration above is RabbitMQ's default cluster mode, and it does not guarantee high availability of queues. Exchanges and bindings can be replicated to any node in the cluster, but queue contents are not. This mode relieves the pressure on individual nodes, but if a queue's node goes down, the queue becomes unusable until that node is restarted. So for queues to stay usable when their node goes down or fails, the queue contents have to be replicated to every node in the cluster, which is what mirrored queues are for.
Mirrored queues are built on top of the common cluster mode, so you must have the common cluster configured before you can set up mirrored queues.
I set the mirrored queue up through the web management page; it can also be done from the command line. The official examples are at http://www.rabbitmq.com/ha.html (open it and scroll to the bottom; there are two examples you can refer to). Here I only cover the page setup:
1. Click the Admin menu, then the Policies option on the right, then "Add / update a policy" at the bottom left.
2. Fill in the fields as shown in the picture.
3. Click Add policy.
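For the command-line route, the ha.html page linked above documents a policy command; a sketch that mirrors every queue to all nodes (the policy name ha-all and the catch-all pattern are my choices, not from the original article):

    # pattern ".*" matches every queue name; ha-mode all mirrors to all nodes
    ./rabbitmqctl set_policy ha-all ".*" '{"ha-mode":"all"}'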
At this point, the Admin menu on the web management side of both of your RabbitMQ servers will show the policy you just added.
Next we will add some queues to see the effect. Below are the test values; leave the other fields empty for now.
Note the x-ha-policy = all in the red box. Some posts online say the queue will not be replicated without it, but in my tests it seemed to replicate either way, at least for these queues, so add it just in case. (In RabbitMQ 3.x mirroring is controlled by the policy rather than by the x-ha-policy queue argument, which would explain that observation.)
When adding a queue you can specify the Node option, which decides which node the queue lives on, but that is not required for mirroring, hehe.
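If you would rather script the queues than click through the page, the management plugin ships a rabbitmqadmin tool; a hedged sketch recreating the AA/AB pair from this test (flags per my reading of the tool, not from the original article):

    # AA: carries the x-ha-policy=all argument
    rabbitmqadmin declare queue name=AA durable=true arguments='{"x-ha-policy":"all"}'
    # AB: no arguments, for comparison
    rabbitmqadmin declare queue name=AB durable=true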
You'll see the effect after the addition is complete.
A brief explanation:
AA: just added, with the Arguments parameter x-ha-policy = all specified.
AB: added without the arguments parameter, so you can see the difference.
BA and BB: created as a contrast for the demo. Neither matches the synchronization policy, so there is no +1 badge after their node; hover over a +1 badge and you can see that the queue also has a copy on the other machine.
Q: What do you think happens if we restart rabbitmq2?
A: The +1 badges on AA and AB disappear, and come back once rabbitmq2 starts up again.
Q: And what happens if we restart rabbitmq1?
A: On rabbitmq2, the +1 badges on AA and AB disappear and the rabbit@rabbitmq1 in the Node option becomes rabbit@rabbitmq2, while BA and BB disappear entirely and stay gone after the restart, haha, because those two are not mirrored Oh ~
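You can reproduce the test without rebooting the whole machine by bouncing just the broker app on one node and re-checking the status; not identical to a full reboot, but close enough to watch the +1 badges come and go:

    # on rabbitmq2
    ./rabbitmqctl stop_app
    ./rabbitmqctl start_app
    # on either machine
    ./rabbitmqctl cluster_status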
That's it for mirrored queues in the cluster. To achieve real high availability you still need HA software on top Oh ~ I won't go into that here; more in the next article....
Error handling
If the error message reports a cluster node conflict, go to this directory and edit the corresponding file:

    cd /usr/local/rabbitmq_server-3.1.3/var/lib/rabbitmq/mnesia
    vim rabbit\@rabbitmq2/cluster_nodes.config
Or just delete the files under this directory; this is where the cluster configuration and the persisted data are stored, so only force-delete if editing does not fix it.
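On RabbitMQ 3.x there is also a cleaner path than hand-editing the Mnesia files: reset the confused node, or tell a healthy node to forget it. A sketch, using the node names from this article:

    # on the broken node (rabbitmq2): wipe its cluster state
    ./rabbitmqctl stop_app
    ./rabbitmqctl reset
    # or, from a healthy node, remove the stopped member
    ./rabbitmqctl forget_cluster_node rabbit@rabbitmq2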