Each RabbitMQ node runs the RabbitMQ application and shares users, virtual hosts, queues, exchanges, etc.
A group of nodes is called a cluster.
All the data/state required for RabbitMQ broker operation is fully replicated across nodes; the only exception is message queues, which live only on the node that created them, though they are visible and reachable from all nodes.
To replicate message queues across all nodes, you need to enable mirrored queues.
Below we create a RabbitMQ broker cluster and then enable mirrored queues.
3 nodes:
Node1 10.15.85.141
Node2 10.15.85.142
Node3 10.15.85.143
Note: This is not a complete HA scenario; it only demonstrates removing the single point of failure.
1. First step: Stop all OpenStack services on node-01.
2. Copy the Erlang cookie to the other 2 nodes:
% scp /var/lib/rabbitmq/.erlang.cookie 10.15.85.142:/var/lib/rabbitmq/.erlang.cookie
% scp /var/lib/rabbitmq/.erlang.cookie 10.15.85.143:/var/lib/rabbitmq/.erlang.cookie
Ensure the cookie is owned by user 'rabbitmq', group 'rabbitmq', with mode 400:
% chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
% chmod 400 /var/lib/rabbitmq/.erlang.cookie
3. Start the RabbitMQ service on node-02 and node-03:
% chkconfig rabbitmq-server on
% service rabbitmq-server start
Now there are 3 RabbitMQ brokers; the next step is to build the cluster and enable mirrored queues.
4. Join node-02 and node-03 to node-01:
On node-02:
# rabbitmqctl stop_app
Stopping node 'rabbit@node-02' ...
...done.
# rabbitmqctl join_cluster rabbit@node-01
Clustering node 'rabbit@node-02' with 'rabbit@node-01' ...
...done.
# rabbitmqctl start_app
Starting node 'rabbit@node-02' ...
...done.
On node-03:
# rabbitmqctl stop_app
Stopping node 'rabbit@node-03' ...
...done.
# rabbitmqctl join_cluster rabbit@node-01
Clustering node 'rabbit@node-03' with 'rabbit@node-01' ...
...done.
# rabbitmqctl start_app
Starting node 'rabbit@node-03' ...
...done.
Now there is a 3-node RabbitMQ cluster. To view the cluster status:
# rabbitmqctl cluster_status
Cluster status of node 'rabbit@node-01' ...
[{nodes,[{disc,['rabbit@node-01','rabbit@node-02',
                'rabbit@node-03']}]},
 {running_nodes,['rabbit@node-03','rabbit@node-02',
                 'rabbit@node-01']},
 {partitions,[]}]
...done.
RabbitMQ clustering cannot handle network partitions, so it should not span a WAN; the Shovel or Federation plugins address that use case.
4.1 Cluster nodes can be stopped/started at any time:
node2$ rabbitmqctl stop
node2$ rabbitmq-server -detached
4.2 Cluster nodes can leave and rejoin the cluster at any time:
node2$ rabbitmqctl stop_app
node2$ rabbitmqctl reset
node2$ rabbitmqctl start_app
Now node-02 is an independent broker again.
You can also remove node-02 from the cluster on node-01:
node2$ rabbitmqctl stop_app
node1$ rabbitmqctl forget_cluster_node rabbit@node-02
Note: At this point node-02 still believes it is clustered with node-01, so restarting it directly will fail; reset it first:
node2$ rabbitmqctl reset
4.3 For experimentation, multiple RabbitMQ nodes can run on a single machine, provided each node's name and IP/port differ.
See:
http://www.rabbitmq.com/clustering.html
5. Create a policy to enable mirrored queues:
% rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'
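The pattern passed to set_policy is a negative lookahead: it matches every queue name except those starting with "amq.", which are reserved for server-named queues. A quick sketch of how the pattern classifies names (the queue names here are hypothetical, for illustration only):

```python
import re

# Same pattern as in the set_policy command above:
# match any name that does NOT begin with "amq."
pattern = re.compile(r"^(?!amq\.).*")

for name in ["cinder-scheduler", "notifications.info", "amq.gen-abc123"]:
    mirrored = pattern.match(name) is not None
    print(f"{name}: {'mirrored' if mirrored else 'not mirrored'}")
```

Only "amq.gen-abc123" is excluded; every other queue falls under the "all" ha-mode.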
For detailed configuration see:
http://www.rabbitmq.com/ha.html
6. Since message queues can now be reached through any of the 3 RabbitMQ brokers, set up an HAProxy load balancer on node-01.
Install HAProxy on node-01:
% yum install haproxy
Edit /etc/haproxy/haproxy.cfg and create a simple TCP proxy for RabbitMQ:
global
    daemon
defaults
    mode tcp
    maxconn 10000
    timeout connect 5s
    timeout client 100s
    timeout server 100s
listen rabbitmq 10.15.85.141:5670
    mode tcp
    balance roundrobin
    server node-01 10.15.85.141:5672 check inter 5s rise 2 fall 3
    server node-02 10.15.85.142:5672 check inter 5s rise 2 fall 3
    server node-03 10.15.85.143:5672 check inter 5s rise 2 fall 3
HAProxy listens on 10.15.85.141:5670; connections to this proxy are distributed round-robin across the 3 nodes.
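Round-robin balancing hands each new TCP connection to the next backend in turn. A minimal sketch of the selection order, using the three server addresses from the haproxy.cfg above (an illustration of the scheduling order, not of HAProxy internals):

```python
from itertools import cycle

# The three backends from the haproxy.cfg listen block
backends = ["10.15.85.141:5672", "10.15.85.142:5672", "10.15.85.143:5672"]
picker = cycle(backends)

# Six successive connections rotate through the nodes twice
chosen = [next(picker) for _ in range(6)]
for i, addr in enumerate(chosen):
    print(f"connection {i} -> {addr}")
```
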
7. Start HAProxy:
% service haproxy start
8. Point the OpenStack services at HAProxy:
Neutron:
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host 10.15.85.141
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_port 5670
Nova:
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host 10.15.85.141
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_port 5670
Glance:
openstack-config --set /etc/glance/glance-api.conf DEFAULT rabbit_host 10.15.85.141
openstack-config --set /etc/glance/glance-api.conf DEFAULT rabbit_port 5670
Cinder:
openstack-config --set /etc/cinder/cinder.conf DEFAULT rabbit_host 10.15.85.141
openstack-config --set /etc/cinder/cinder.conf DEFAULT rabbit_port 5670
Ceilometer:
openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT rabbit_host 10.15.85.141
openstack-config --set /etc/ceilometer/ceilometer.conf DEFAULT rabbit_port 5670
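Each openstack-config call above writes one key into the [DEFAULT] section of the target file; for example, after the two Neutron commands, /etc/neutron/neutron.conf contains:

```ini
[DEFAULT]
rabbit_host = 10.15.85.141
rabbit_port = 5670
```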
9. Finally, write the cluster into the configuration file /etc/rabbitmq/rabbitmq.config, so that the cluster is formed automatically when a node starts:
[{rabbit,
  [{cluster_nodes, {['rabbit@node-01', 'rabbit@node-02', 'rabbit@node-03'], ram}}]}].
Note: Many older documents still recommend Pacemaker and DRBD as the RabbitMQ HA solution; with queue mirroring available, that approach is obsolete.
See:
http://www.rabbitmq.com/pacemaker.html
(High availability with Pacemaker and DRBD)
Reference:
https://www.rdoproject.org/RabbitMQ
http://blog.flux7.com/blogs/tutorials/how-to-creating-highly-available-message-queues-using-rabbitmq
(RabbitMQ High Availability)