As mentioned earlier, in a cluster environment only a queue's metadata is synchronized across all nodes; the queue's contents live on a single node. That is disappointing: without redundancy it is easy to lose messages, and even durable messages are unavailable until the node holding them comes back. Is there a message-redundancy solution? Yes: since version 2.6.0 RabbitMQ has supported mirrored queues, in which messages are replicated between nodes. As in other master-slave designs, a queue has one master and some number of slaves; once the master's node fails, one of the slaves is elected as the new master. Note that mirrored queues are not a silver bullet; their limitations are discussed later.
Let's start with an example.
Uri uri = new Uri("amqp://192.168.10.160:9992/");
......
ch.ExchangeDeclare(exchange, exchangeType, true); //,true,true,false,false, true,null);
ch.QueueDeclare(pic_process_queue, true, false, false,
    new Dictionary<string, object>() { { "x-ha-policy", "all" } });
ch.QueueBind(pic_process_queue, exchange, routingKey);
ch.QueueDeclare(pic_process_queue2, true, false, false, null);
ch.QueueBind(pic_process_queue2, exchange, routingKey);
We use the rabbitmqctl tool to check the cluster status. Note that the new script rabbitmq-util is basically the same as rabbitmqctl, except that it also specifies the Erlang cookie.
[root@localhost scripts]# ./rabbitmq-util -n z_91@zen.com cluster_status
Cluster status of node 'z_91@zen.com' ...
[{nodes,[{disc,['z_92@zen.com']},{ram,['z_93@zen.com','z_91@zen.com']}]},
 {running_nodes,['z_92@zen.com','z_93@zen.com','z_91@zen.com']}]
...done.
[root@localhost scripts]# ./rabbitmq-util -n z_93@zen.com list_queues name pid slave_pids
Listing queues ...
qp_pic_queue  <'z_92@zen.com'.2.7008.0>  [<'z_93@zen.com'.2.6931.0>, <'z_91@zen.com'.2.7445.0>]
qp_pic_queue2 <'z_92@zen.com'.2.7013.0>  []
...done.
[root@localhost scripts]#
We can see that both queue masters live on node 92, the node our client connected to on port 9992. Now we shut down node 92 cleanly (using rabbitmqctl, not kill) and modify the client code to connect to node 93:

Uri uri = new Uri("amqp://192.168.10.160:9993/");

Running the same code gives the result we expected: qp_pic_queue, which was mirrored on all nodes, can be re-declared on node 93 after node 92 goes down. qp_pic_queue2 is not so lucky: the broker throws a 404 (NOT_FOUND) error.
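The availability rule this experiment demonstrates can be captured in a small sketch. This is not a RabbitMQ API, just the decision logic as observed above, written in Python with a hypothetical helper name:

```python
def queue_available(hosting_nodes, live_nodes):
    """A queue survives a node shutdown as long as at least one node
    hosting its master or one of its mirrors is still running."""
    return any(node in live_nodes for node in hosting_nodes)

# z_92 has been stopped; z_91 and z_93 are still up.
live = {"z_91@zen.com", "z_93@zen.com"}

# qp_pic_queue was mirrored on all three nodes ("x-ha-policy": "all"):
print(queue_available(
    ["z_92@zen.com", "z_93@zen.com", "z_91@zen.com"], live))  # True

# qp_pic_queue2 lived only on z_92, so it is gone (hence the 404):
print(queue_available(["z_92@zen.com"], live))                # False
```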
What if messages are mirrored on only some of the nodes?
In the preceding example, messages are replicated across the whole cluster. What if we want to mirror a queue on only a few specific nodes? The coding side presents no difficulty, because the x-ha-policy argument supports explicitly naming nodes. Below is the logic that handles this argument when a mirrored queue is created, which I extracted from rabbit_amqqueue.erl, followed by a piece of C# client code that specifies which nodes the messages should be replicated to.
..\rabbitmq-server-2.8.7\src\rabbit_amqqueue.erl

determine_queue_nodes(Args) ->
    Policy = rabbit_misc:table_lookup(Args, <<"x-ha-policy">>),
    PolicyParams = rabbit_misc:table_lookup(Args, <<"x-ha-policy-params">>),
    case {Policy, PolicyParams} of
        {{_Type, <<"nodes">>}, {array, Nodes}} ->
            case [list_to_atom(binary_to_list(Node)) ||
                     {longstr, Node} <- Nodes] of
                [Node]         -> {Node, undefined};
                [First | Rest] -> {First, [First | Rest]}
            end;
        {{_Type, <<"all">>}, _} -> {node(), all};
        _                       -> {node(), undefined}
    end.
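For readers who don't speak Erlang, the case analysis above can be paraphrased in Python. This is a simplified sketch of the same decision, not the actual server code; it returns the chosen master node and the mirror set:

```python
def determine_queue_nodes(args, local_node):
    """Paraphrase of determine_queue_nodes/1 in rabbit_amqqueue.erl:
    given the x-ha-policy declare arguments, return
    (master_node, mirror_nodes)."""
    policy = args.get("x-ha-policy")
    params = args.get("x-ha-policy-params")
    if policy == "nodes" and params:
        nodes = list(params)
        if len(nodes) == 1:
            return nodes[0], None      # a single node: no mirrors at all
        return nodes[0], nodes         # first listed node becomes master
    if policy == "all":
        return local_node, "all"       # mirror on every node in the cluster
    return local_node, None            # default: plain, unmirrored queue

print(determine_queue_nodes(
    {"x-ha-policy": "nodes",
     "x-ha-policy-params": ["z_91@zen.com", "z_93@zen.com"]},
    "z_92@zen.com"))
# → ('z_91@zen.com', ['z_91@zen.com', 'z_93@zen.com'])
```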
Declare a new queue, yaqp_pic_queue:
ch.ExchangeDeclare(exchange, exchangeType, true); //,true,true,false,false, true,null);
ch.QueueDeclare(pic_process_queue, true, false, false,
    new Dictionary<string, object>()
    {
        { "x-ha-policy", "nodes" },
        { "x-ha-policy-params", new List<string>() { "z_91@zen.com", "z_93@zen.com" } }
    });
ch.QueueBind(pic_process_queue, exchange, routingKey);
View slave_pids with rabbitmqctl
[root@localhost scripts]# ./rabbitmq-util -n z_93@zen.com cluster_status
Cluster status of node 'z_93@zen.com' ...
[{nodes,[{disc,['z_92@zen.com']},{ram,['z_93@zen.com','z_91@zen.com']}]},
 {running_nodes,['z_92@zen.com','z_91@zen.com','z_93@zen.com']}]
...done.
[root@localhost scripts]# ./rabbitmq-util -n z_91@zen.com list_queues name pid slave_pids
Listing queues ...
qp_pic_queue   <'z_93@zen.com'.2.6931.0>  [<'z_91@zen.com'.2.7445.0>, <'z_92@zen.com'.3.235.0>]
yaqp_pic_queue <'z_91@zen.com'.2.7875.0>  [<'z_93@zen.com'.2.7387.0>]
qp_pic_queue2  <'z_92@zen.com'.3.232.0>   []
...done.
[root@localhost scripts]#
But operating this way is a bit awkward in practice: the node names must be specified when the code is written, yet those nodes are not necessarily alive at runtime, and the declare fails if the specified nodes are not all online. Hard-coded node names also put us in a dilemma whenever the cluster topology changes. For these reasons, RabbitMQ in Action recommends using the all value of x-ha-policy, that is, replicating messages across the whole cluster.
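One way to keep node names out of the business code is to build the declare arguments in a single helper that falls back to "all" when no explicit node list is configured. The argument keys below are the real x-ha-policy keys; the helper itself is a hypothetical sketch, shown in Python for brevity:

```python
def ha_arguments(nodes=None):
    """Build the queue-declare arguments for a mirrored queue.
    With no explicit node list we fall back to mirroring on all
    cluster nodes, as RabbitMQ in Action recommends."""
    if nodes:
        return {"x-ha-policy": "nodes",
                "x-ha-policy-params": list(nodes)}
    return {"x-ha-policy": "all"}

print(ha_arguments())
# → {'x-ha-policy': 'all'}
print(ha_arguments(["z_91@zen.com", "z_93@zen.com"]))
```

Centralizing the choice this way means a topology change touches one configuration point instead of every declare site.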
What if a new node is added halfway through?
The ideal state is always simple; real environments are always complex. If we add a new node to the cluster at runtime (a perfectly normal operation), what happens to message replication? When a new node joins, RabbitMQ does not synchronize historical messages to it; only new messages are replicated there. The assumption is that as consumers drain the older messages, the data on all nodes will eventually converge and become consistent.
A natural question follows: if the master node leaves the cluster and a slave is elected master, what if, unluckily, the elected node is one that just joined the cluster? Will messages be lost? Rest assured: RabbitMQ tracks whether each slave has finished synchronizing, and you can inspect this via the synchronised_slave_pids column of rabbitmqctl. Look at the example below: if slave_pids and synchronised_slave_pids list the same pids, all mirrors are in sync; if they differ, it is easy to see which mirrors have not yet caught up.
[root@localhost scripts]# ./rabbitmq-util -n z_91@zen.com list_queues name pid slave_pids synchronised_slave_pids
Listing queues ...
zen_qp_pic_queue <'z_91@zen.com'.2.8009.0> [<'z_93@zen.com'.2.7517.0>] [<'z_93@zen.com'.2.7517.0>]
qp_pic_queue     <'z_93@zen.com'.2.6931.0> [<'z_91@zen.com'.2.7445.0>, <'z_92@zen.com'.3.235.0>, <'z_94@zen.com'.1.595.0>] [<'z_91@zen.com'.2.7445.0>, <'z_92@zen.com'.3.235.0>, <'z_94@zen.com'.1.595.0>]
yaqp_pic_queue   <'z_91@zen.com'.2.7875.0> [<'z_93@zen.com'.2.7387.0>] [<'z_93@zen.com'.2.7387.0>]
qp_pic_queue2    <'z_92@zen.com'.3.232.0>
yy_qp_pic_queue  <'z_91@zen.com'.2.7920.0> [<'z_93@zen.com'.2.7434.0>] [<'z_93@zen.com'.2.7434.0>]
...done.
[root@localhost scripts]#
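Comparing the two pid lists by hand gets tedious as the cluster grows. A small script can flag mirrors that have not caught up yet; this sketch works on pre-parsed pid lists rather than the raw rabbitmqctl output, and the pid strings are illustrative:

```python
def unsynchronised(slave_pids, synchronised_slave_pids):
    """Return the mirror pids listed in slave_pids but missing from
    synchronised_slave_pids, i.e. mirrors still catching up."""
    return [pid for pid in slave_pids if pid not in synchronised_slave_pids]

# qp_pic_queue from the listing above: every mirror is synchronised.
print(unsynchronised(
    ["z_91.2.7445.0", "z_92.3.235.0", "z_94.1.595.0"],
    ["z_91.2.7445.0", "z_92.3.235.0", "z_94.1.595.0"]))  # → []

# A freshly added mirror that has not synchronised yet would show up:
print(unsynchronised(["z_93.2.7517.0", "z_94.1.600.0"],
                     ["z_93.2.7517.0"]))                 # → ['z_94.1.600.0']
```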
Official documentation: Highly Available Queues, http://www.rabbitmq.com/ha.html
With mirrored queues in place, missing thumbnails should finally become a thing of the past.