RabbitMQ Learning: Cluster Deployment

Source: Internet
Author: User
Tags: message queue, reset, node, server, rabbitmq

Production environment:

CentOS 6.3 x86_64

Server host name and IP list:

mq136 172.28.2.136
mq137 172.28.2.137
mq164 172.28.2.164
mq165 172.28.2.165

Add host entries on every node so the names resolve (note that /etc/hosts expects the IP address first):

cat >> /etc/hosts <<EOF
172.28.2.136 mq136
172.28.2.137 mq137
172.28.2.164 mq164
172.28.2.165 mq165
EOF

I. Introduction
RabbitMQ is a popular open-source message queuing system, developed in Erlang. Erlang's distributed communication security model is all-or-nothing: a node either fully trusts another node or not at all. RabbitMQ is a standard implementation of AMQP (the Advanced Message Queuing Protocol).

Several concept notes:

Broker: the message queuing server entity itself.
Exchange: a message exchange; it decides, according to routing rules, which queue(s) each message is routed to.
Queue: the message carrier; each message is put into one or more queues.
Binding: binds an exchange to a queue according to a routing rule.
Routing key: the keyword on which the exchange routes messages.
Vhost: virtual host; one broker can host multiple vhosts, each acting as a separate scope for user permissions.
Producer: the program that publishes messages.
Consumer: the program that receives messages.
Channel: a message channel; a client can open multiple channels within one connection, and each channel represents one session.

Using a message queue generally goes like this:

(1) The client connects to the message queue server and opens a channel.
(2) The client declares an exchange and sets its properties.
(3) The client declares a queue and sets its properties.
(4) The client binds the exchange to the queue with a routing key.
(5) The client publishes messages to the exchange.

When the exchange receives a message, it routes it to one or more queues based on the message's routing key and the bindings that have been set.
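The flow above can be sketched with a tiny in-memory model of a direct exchange. This is purely illustrative Python; the class and method names are hypothetical and are not part of any RabbitMQ client library.

```python
class DirectExchange:
    """Toy model of a direct exchange: routes by exact routing-key match."""

    def __init__(self, name):
        self.name = name
        self.bindings = {}  # routing key -> list of bound queues

    def bind(self, queue, routing_key):
        # Step (4): bind a queue to this exchange under a routing key.
        self.bindings.setdefault(routing_key, []).append(queue)

    def publish(self, routing_key, message):
        # Step (5) plus routing: deliver to every queue bound with this key.
        for queue in self.bindings.get(routing_key, []):
            queue.append(message)


# Steps (2)-(4): declare an exchange, a queue, and a binding.
exchange = DirectExchange("logs")
queue = []
exchange.bind(queue, "error")

exchange.publish("error", "disk full")  # routed: key matches the binding
exchange.publish("info", "heartbeat")   # dropped: no binding for "info"
print(queue)  # ['disk full']
```

A real client would perform steps (1)-(5) over an AMQP connection, but the routing decision the broker makes is the same as the one modeled here.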

There are several types of exchange. A direct exchange delivers strictly by exact key: if a queue is bound with the routing key "abc", only messages published with the key "abc" are delivered to it. A topic exchange matches the key against a pattern: the symbol "#" matches zero or more words, and the symbol "*" matches exactly one word. For example, "abc.#" matches "abc.def.ghi", while "abc.*" matches only "abc.def". A fanout exchange needs no key at all: it works in broadcast mode, delivering every incoming message to all queues bound to the exchange.
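The "#" and "*" rules can be made concrete with a short re-implementation of topic matching. This is an illustrative sketch, not RabbitMQ's actual routing code; it follows the AMQP convention that "#" may also match zero words.

```python
def topic_match(pattern, key):
    """Return True if a topic-exchange pattern matches a routing key.
    '*' matches exactly one dot-separated word; '#' matches zero or more."""
    def match(p, k):
        if not p:
            return not k          # pattern exhausted: key must be too
        if p[0] == "#":
            # '#' may swallow any number of remaining words, including none.
            return any(match(p[1:], k[i:]) for i in range(len(k) + 1))
        if not k:
            return False          # words left in pattern but key is empty
        if p[0] == "*" or p[0] == k[0]:
            return match(p[1:], k[1:])
        return False
    return match(pattern.split("."), key.split("."))


print(topic_match("abc.#", "abc.def.ghi"))  # True
print(topic_match("abc.*", "abc.def"))      # True
print(topic_match("abc.*", "abc.def.ghi"))  # False
```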

RabbitMQ supports message persistence, i.e. writing data to disk; for data safety, most users will choose to persist. Message queue persistence consists of three parts:
(1) Exchange persistence: specify durable = 1 when declaring the exchange.
(2) Queue persistence: specify durable = 1 when declaring the queue.
(3) Message persistence: specify delivery_mode = 2 when publishing (1 means non-persistent).

If both the exchange and the queue are persistent, then the binding between them is persistent as well. A binding between a persistent endpoint and a non-persistent one is not allowed.
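The binding rule above can be stated as a one-line predicate (illustrative only; this is not a RabbitMQ API, just a restatement of the rule):

```python
def binding_survives_restart(exchange_durable: bool, queue_durable: bool) -> bool:
    """Per the rule above: a binding is persistent only when both the
    exchange and the queue it connects are durable."""
    return exchange_durable and queue_durable


print(binding_survives_restart(True, True))   # True
print(binding_survives_restart(True, False))  # False
```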

Next, let's look at RabbitMQ clustering. Because RabbitMQ is written in Erlang, it relies entirely on Erlang's clustering, and Erlang clusters are very convenient to set up, so configuring a RabbitMQ cluster is simple.

RabbitMQ cluster nodes come in two kinds: RAM nodes and disk nodes. As the names imply, a RAM node keeps all its data in memory, while a disk node keeps its data on disk. However, as mentioned earlier, if a message is published as persistent, even a RAM node writes it safely to disk.

A sound architecture can look like this: in a cluster of three or more machines, run one in disk mode and the others in RAM mode. The RAM-mode nodes are undoubtedly faster, so clients (consumers and producers) connect to them; the disk-mode node, whose disk I/O is comparatively slow, is used only for data backup.

II. Installing RabbitMQ on each node

The installation is very simple and takes only a few steps:

1. Installing the EPEL source

rpm -ivh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-7.noarch.rpm

wget -O /etc/yum.repos.d/epel-erlang.repo http://repos.fedorapeople.org/repos/peter/erlang/epel-erlang.repo

2. Installing Erlang

yum install erlang xmlto git -y

rpm --import http://www.rabbitmq.com/rabbitmq-signing-key-public.asc

3. Installing RabbitMQ

You can choose to install with Yum, or you can choose to download the RPM package installation, or you can use the source code to compile the installation.

Download Address: http://www.rabbitmq.com/download.html

This article selects RPM Package installation:

wget http://www.rabbitmq.com/releases/rabbitmq-server/v2.8.6/rabbitmq-server-2.8.6.noarch.rpm

rpm -ivh rabbitmq-server-2.8.6.noarch.rpm

4. Start RabbitMQ on each node and verify that it started

[root@mq136 ~]# rabbitmq-server -detached

[root@mq136 ~]# ps aux | grep rabbitmq

rabbitmq  1394  0.0  0.0  10828   540 ?     S   Oct08   0:11 /usr/lib64/erlang/erts-5.8.5/bin/epmd -daemon

root      2483  0.0  0.0 103244   836 pts/1 S+  17:40   0:00 grep rabbitmq

rabbitmq  5657  6.3  1.9 2224044 157200 ?   Sl  Oct08 959:17 /usr/lib64/erlang/erts-5.8.5/bin/beam.smp -W w -K true -A30 -P 1048576 -- -root /usr/lib64/erlang -progname erl -- -home /var/lib/rabbitmq -- -noshell -noinput -sname rabbit@mq136 -boot /var/lib/rabbitmq/mnesia/rabbit@mq136-plugins-expand/rabbit -kernel inet_default_connect_options [{nodelay,true}] -sasl errlog_type error -sasl sasl_error_logger false -rabbit error_logger {file,"/var/log/rabbitmq/rabbit@mq136.log"} -rabbit sasl_error_logger {file,"/var/log/rabbitmq/rabbit@mq136-sasl.log"} -os_mon start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup false -mnesia dir "/var/lib/rabbitmq/mnesia/rabbit@mq136" -noshell -noinput

rabbitmq  5698  0.0  0.0  10788   520 ?     Ss  Oct08   0:00 inet_gethost 4

rabbitmq  5699  0.0  0.0  12892   692 ?     S   Oct08   0:00 inet_gethost 4

rabbitmq 11446  0.0  0.0  12892   680 ?     S   Oct13   0:00 inet_gethost 4

[root@mq136 ~]# lsof -i :5672

COMMAND  PID  USER     FD  TYPE DEVICE  SIZE/OFF NODE NAME

beam.smp 5657 rabbitmq 18u IPv4 5879364 0t0      TCP  *:amqp (LISTEN)


III. Cluster configuration

Cluster Environment Description:

mq136 serves as the disk node; all other nodes are RAM nodes.

1. Create a join cluster script on each node

mq136:

cat >> /home/zjqui/scripts/cluster.sh <<EOF
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl cluster
rabbitmqctl start_app
EOF

mq137:

cat >> /home/zjqui/scripts/cluster.sh <<EOF
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl cluster rabbit@mq136
rabbitmqctl start_app
EOF

mq164:

cat >> /home/zjqui/scripts/cluster.sh <<EOF
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl cluster rabbit@mq136
rabbitmqctl start_app
EOF

mq165:

cat >> /home/zjqui/scripts/cluster.sh <<EOF
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl cluster rabbit@mq136
rabbitmqctl start_app
EOF


2. Join each node to the cluster

[root@mq136 ~]# chmod +x /home/zjqui/scripts/cluster.sh

Script execution order matters: run the cluster script on mq136 first, then on the other nodes:

[root@mq136 ~]# /home/zjqui/scripts/cluster.sh

Once the scripts have run successfully on every node, check the overall cluster status:

[root@mq136 ~]# rabbitmqctl cluster_status

Cluster status of node rabbit@mq136 ...

[{nodes,[{disc,[rabbit@mq136]},

{ram,[rabbit@mq165,rabbit@mq164,rabbit@mq137]}]},

{running_nodes,[rabbit@mq164,rabbit@mq165,rabbit@mq137,rabbit@mq136]}]

... done.

You can see that mq136 is the disc node and the other three are RAM nodes. That completes the basic cluster configuration.



Deploying Nova with highly available queues involves two main steps:

1. Configure the RabbitMQ cluster. Make sure none of the RabbitMQ services are running, then synchronize the Erlang cookie across all RabbitMQ servers in the cluster:

sudo service rabbitmq-server stop
scp /var/lib/rabbitmq/.erlang.cookie root@rabbit2:/var/lib/rabbitmq/.erlang.cookie
sudo service rabbitmq-server start

Set up the cluster:

root@rabbit2: rabbitmqctl stop_app
root@rabbit2: rabbitmqctl cluster rabbit@rabbit1
root@rabbit2: rabbitmqctl start_app


To build a disc cluster instead, list the joining node itself as well:

root@rabbit2: rabbitmqctl stop_app
root@rabbit2: rabbitmqctl cluster rabbit@rabbit1 rabbit@rabbit2
root@rabbit2: rabbitmqctl start_app


After completion, the cluster status can be checked on any RabbitMQ node in the cluster:

$ sudo rabbitmqctl cluster_status
Cluster status of node rabbit@rabbit1 ...
[{nodes,[{disc,[rabbit@rabbit1]},{ram,[rabbit@rabbit2]}]},
 {running_nodes,[rabbit@rabbit2,rabbit@rabbit1]}]
...done.

2. Configure Nova. Edit nova.conf:

rabbit_hosts = rabbit1:5672,rabbit2:5672
rabbit_host = rabbit1
rabbit_ha_queues = True
If rabbit_hosts is configured, Nova connects to the RabbitMQ services in order; if the MQ service currently in use is disconnected, it tries the next one in the list. Because all MQ messages are synchronized across the cluster, no messages are lost.
If rabbit_host is configured instead, HAProxy must be set up in front of the cluster so that the cluster VIP service stays available.
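The rabbit_hosts failover behaviour amounts to trying each broker address in order until one accepts the connection. A minimal sketch under that assumption (the connect callable and host names are hypothetical; this is not Nova's actual implementation):

```python
def connect_with_failover(hosts, connect):
    """Try each host in order; return the first live connection."""
    last_error = None
    for host in hosts:
        try:
            return connect(host)
        except ConnectionError as err:
            last_error = err  # broker down: fall through to the next one
    raise last_error


# Fake connector for illustration: pretend rabbit1 is down and rabbit2 is up.
def fake_connect(host):
    if host == "rabbit1:5672":
        raise ConnectionError("connection refused")
    return f"connected to {host}"


print(connect_with_failover(["rabbit1:5672", "rabbit2:5672"], fake_connect))
# connected to rabbit2:5672
```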

Then restart all of the Nova services.


You can check whether the queues are highly available with the following command:

sudo rabbitmqctl list_queues name arguments
Listing queues ...
compute [{"x-ha-policy","all"}]
compute.u2 [{"x-ha-policy","all"}]
compute_fanout_e35fae767bf645afa37649ece0fbc20f [{"x-ha-policy","all"}]
conductor [{"x-ha-policy","all"}]
conductor.u2 [{"x-ha-policy","all"}]
conductor_fanout_e36e99cf03214e45be95aed2028b998a [{"x-ha-policy","all"}]
...done.
