RabbitMQ Cluster Configuration

Tags: rabbitmq, haproxy


This article describes the basic installation of RabbitMQ and a basic cluster configuration.

I. Environment

1. Operating System

CentOS-7-x86_64-Everything-1511

2. Version

HAProxy version: 1.7.7

Erlang version: 20.0

RabbitMQ version: rabbitmq-server-3.6.10

Download: https://bintray.com/rabbitmq/rabbitmq-server-rpm/download_file?file_path=rabbitmq-server-3.6.10-1.el7.noarch.rpm

3. Topology

Three RabbitMQ nodes (rmq-node1/2/3 at 172.16.3.231/232/233) form the cluster, with an HAProxy instance (haproxy-1) load-balancing client traffic in front of them.

II. RabbitMQ installation and configuration (single node)

Node rmq-node1 is used as the example; adjust accordingly for rmq-node2/3.

1. Install Erlang

RabbitMQ is built on Erlang, so Erlang must be installed first; yum is used here.

1) Update the EPEL source
# The official yum repos do not include erlang;
# install the EPEL repo first (see http://fedoraproject.org/wiki/epel/faq#howtouse)
[root@rmq-node1 ~]# rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
2) Add the Erlang Solutions repository
# Without the Erlang Solutions repo, the erlang version that yum installs is old;
# for details on adding the repo, see https://www.erlang-solutions.com/resources/download.html
[root@rmq-node1 ~]# wget https://packages.erlang-solutions.com/erlang-solutions-1.0-1.noarch.rpm
[root@rmq-node1 ~]# rpm -Uvh erlang-solutions-1.0-1.noarch.rpm
# Import the public key used to verify package signatures
[root@rmq-node1 ~]# rpm --import https://packages.erlang-solutions.com/rpm/erlang_solutions.asc
3) Install Erlang
# The download may be slow
[root@rmq-node1 ~]# yum install erlang -y
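To confirm the installation, printing the OTP release is a quick sanity check (this one-liner is an illustrative addition, not part of the original article; it should print 20 for this setup):
[root@rmq-node1 ~]# erl -noshell -eval 'io:format("~s~n",[erlang:system_info(otp_release)]),halt().'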
2. Install RabbitMQ

1) Download RabbitMQ
[root@rmq-node1 ~]# wget https://bintray.com/rabbitmq/rabbitmq-server-rpm/download_file?file_path=rabbitmq-server-3.6.10-1.el7.noarch.rpm
[root@rmq-node1 ~]# mv download_file\?file_path\=rabbitmq-server-3.6.10-1.el7.noarch.rpm rabbitmq-server-3.6.10-1.el7.noarch.rpm
2) Import the package signing key
[root@rmq-node1 ~]# rpm --import https://www.rabbitmq.com/rabbitmq-release-signing-key.asc
3) Install
# Installing the downloaded rpm package with yum also resolves its dependencies;
# the signing key imported above is used to verify the package.
# For download and installation details, see http://www.rabbitmq.com/install-rpm.html
[root@rmq-node1 ~]# yum install rabbitmq-server-3.6.10-1.el7.noarch.rpm -y
3. Start

1) Enable the service at boot
[root@rmq-node1 ~]# systemctl enable rabbitmq-server
2) Start
[root@rmq-node1 ~]# systemctl start rabbitmq-server
4. Verify

1) View the service status
[root@rmq-node1 ~]# systemctl status rabbitmq-server
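Besides systemd, rabbitmqctl can query the broker directly; this confirms that the node itself is responding, not just the service unit:
[root@rmq-node1 ~]# rabbitmqctl status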

2) View the logs
# The log records important startup information: the node name, $HOME directory, cookie hash, log files, and the data storage directory;
# it also notes that no configuration file was found (the default installation does not create one)
[root@rmq-node1 ~]# cat /var/log/rabbitmq/rabbit@rmq-node1.log

5. rabbitmq.config
# Manually create the directory, then copy the sample configuration file into place and rename it
[root@rmq-node1 ~]# mkdir -p /etc/rabbitmq
[root@rmq-node1 ~]# cp /usr/share/doc/rabbitmq-server-3.6.10/rabbitmq.config.example /etc/rabbitmq/rabbitmq.config
# Restart the service for the configuration to take effect
[root@rmq-node1 ~]# systemctl restart rabbitmq-server
# An environment configuration file can also be created: /etc/rabbitmq/rabbitmq-env.conf
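As an illustration (the variable name is real, the value is just this host's), a minimal rabbitmq-env.conf could pin the node name rather than relying on the hostname:
[root@rmq-node1 ~]# vim /etc/rabbitmq/rabbitmq-env.conf
# Node name; the default is rabbit@<hostname>
NODENAME=rabbit@rmq-node1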
6. Install the web management plugin
# The management plugin ships with the RabbitMQ release and only needs to be enabled;
# restart the service for the change to take effect
[root@rmq-node1 ~]# rabbitmq-plugins enable rabbitmq_management
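To confirm, list the plugins; rabbitmq_management and its dependencies should be marked as enabled:
[root@rmq-node1 ~]# rabbitmq-plugins list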

7. Set iptables
# tcp/4369: cluster peer discovery (epmd);
# tcp/5671 and 5672: AMQP 0-9-1 and 1.0 clients (5671 with TLS);
# tcp/15672: HTTP API and rabbitmqadmin, available only when the management plugin is enabled;
# tcp/25672: Erlang distribution, used for inter-node and CLI tool communication
[root@rmq-node1 ~]# vim /etc/sysconfig/iptables
-A INPUT -p tcp -m state --state NEW -m tcp --dport 4369 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5671 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5672 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 15672 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 25672 -j ACCEPT
[root@rmq-node1 ~]# service iptables restart
8. Management plugin login accounts

1) guest account
# By default RabbitMQ has only the guest account, which for security reasons may log in only from localhost.
# To allow guest to log in remotely, edit /etc/rabbitmq/rabbitmq.config: following the instructions in the file,
# uncomment the loopback_users line and remove the trailing comma; enabling remote login for guest is not recommended.
# Restart the service for the configuration to take effect.
[root@rmq-node1 ~]# vim /etc/rabbitmq/rabbitmq.config
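For reference, this is the relevant entry from the sample config once uncommented (an empty loopback_users list means no account is restricted to loopback):
{loopback_users, []}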

2) Create a login account via the CLI
# "rabbitmqctl add_user": add an account and set its password
[root@rmq-node1 ~]# rabbitmqctl add_user admin admin@123
# "rabbitmqctl set_user_tags": set the account's tags
[root@rmq-node1 ~]# rabbitmqctl set_user_tags admin administrator
# "rabbitmqctl set_permissions": set the account's permissions (configure, write, read)
[root@rmq-node1 ~]# rabbitmqctl set_permissions -p "/" admin ".*" ".*" ".*"
# "rabbitmqctl list_users": list accounts
[root@rmq-node1 ~]# rabbitmqctl list_users

9. Management plugin login verification

Browser access: http://172.16.3.231:15672

1) Log on with the guest account

2) Log on with the account created via the CLI

III. Cluster configuration

RabbitMQ builds its cluster on Erlang's distribution features. An Erlang cluster is held together by a magic cookie stored in $HOME/.erlang.cookie, which here is /var/lib/rabbitmq/.erlang.cookie. The cookie must be identical on every node in the cluster, so pick one node's cookie and synchronize it to the other nodes with scp.

1. Synchronize the cookie
# Mind the permissions on .erlang.cookie: it must be owned by the rabbitmq account with mode 400 or 600;
# granting group or other permissions on the file causes an error
[root@rmq-node1 ~]# scp /var/lib/rabbitmq/.erlang.cookie root@172.16.3.232:/var/lib/rabbitmq/
[root@rmq-node1 ~]# scp /var/lib/rabbitmq/.erlang.cookie root@172.16.3.233:/var/lib/rabbitmq/
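If scp leaves the wrong owner or mode on the target nodes, reset them before starting the service (shown for rmq-node2; repeat on rmq-node3):
[root@rmq-node2 ~]# chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
[root@rmq-node2 ~]# chmod 400 /var/lib/rabbitmq/.erlang.cookie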
2. Configure hosts
# Cluster members are addressed in "rabbit@node" form, so the hosts file must be set up in advance;
# the configuration on rmq-node2/3 is identical
[root@rmq-node1 ~]# echo -e "172.16.3.231 rmq-node1\n172.16.3.232 rmq-node2\n172.16.3.233 rmq-node3" >> /etc/hosts
3. Start the node with the -detached parameter
# On rmq-node2/3 the cookie was replaced while the node was running, so "rabbitmqctl stop" fails with an error there;
# stop the service with "systemctl stop rabbitmq-server" and start it again with "systemctl start rabbitmq-server"
# so the node picks up the new cookie, after which "rabbitmqctl stop" works
[root@rmq-node1 ~]# rabbitmqctl stop
[root@rmq-node1 ~]# rabbitmq-server -detached

4. Build the cluster (rmq-node2 & rmq-node3)

1) Join the cluster (disk node)
# In "rabbitmqctl join_cluster rabbit@rmq-node1", rabbit@rmq-node1 is the node name of rmq-node1
# (by default rabbit@<hostname>); rmq-node2 and rmq-node3 join through rmq-node1,
# and the connections between them are established automatically.
# To join as a RAM node, add the "--ram" parameter, e.g. "rabbitmqctl join_cluster --ram rabbit@rmq-node1";
# a cluster needs at least one disk node.
[root@rmq-node2 ~]# rabbitmqctl stop_app
[root@rmq-node2 ~]# rabbitmqctl join_cluster rabbit@rmq-node1
[root@rmq-node2 ~]# rabbitmqctl start_app
[root@rmq-node3 ~]# rabbitmqctl stop_app
[root@rmq-node3 ~]# rabbitmqctl join_cluster rabbit@rmq-node1
[root@rmq-node3 ~]# rabbitmqctl start_app

2) Change a disk node to a RAM node
# A node that is already a disk node can be converted to a RAM node
[root@rmq-node3 ~]# rabbitmqctl stop_app
[root@rmq-node3 ~]# rabbitmqctl change_cluster_node_type ram
[root@rmq-node3 ~]# rabbitmqctl start_app
5. View the cluster status
[root@rmq-node1 ~]# rabbitmqctl cluster_status
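On a healthy cluster built as above (with rmq-node3 converted to a RAM node), the output should resemble the following; the exact fields vary by version, so treat this as an abridged illustration:
Cluster status of node rabbit@rmq-node1
[{nodes,[{disc,[rabbit@rmq-node1,rabbit@rmq-node2]},{ram,[rabbit@rmq-node3]}]},
 {running_nodes,[rabbit@rmq-node3,rabbit@rmq-node2,rabbit@rmq-node1]},
 {cluster_name,<<"rabbit@rmq-node1">>},
 {partitions,[]}]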

6. Configure mirrored queues for high availability

So far the cluster is up, but it is only the default, normal cluster: metadata such as exchanges and bindings is replicated to every node.

Queues are different: every node holds the same queue metadata (the queue structure), but the queue contents live only on the node where the queue was declared; the contents are not replicated (clients reading from another node are served over a temporary connection to the owning node).

If the owning node goes down, the other nodes cannot retrieve its unconsumed messages. If the queue was persistent, consumers must wait for the failed node to recover, and in the meantime the remaining nodes cannot re-declare the durable queues that lived on it. If it was not persistent, the messages are lost.

# The following command can be run on any node. It declares every queue a mirrored queue,
# i.e. each queue is replicated to every node and the nodes stay in a consistent state;
# verify with: rabbitmqctl list_policies;
# for more on mirrored queue settings and operations, see: http://www.ywnds.com/?p=4741
[root@rmq-node1 ~]# rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
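Mirroring every queue to every node is the heaviest option. As a hypothetical alternative using the same policy mechanism, queues whose names start with "ha." could be mirrored to exactly two nodes with automatic synchronization:
[root@rmq-node1 ~]# rabbitmqctl set_policy ha-two "^ha\." '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'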

 

IV. Set up HAProxy

1. haproxy.cfg
# Only the listener for the RabbitMQ cluster is shown here; for the global and defaults sections,
# see http://www.cnblogs.com/netonline/p/7593762.html
[root@haproxy-1 ~]# vim /usr/local/haproxy/etc/haproxy.cfg
listen RabbitMQ_Cluster
    # proxy mode
    mode tcp
    # load-balancing mode
    balance roundrobin
    # listening port
    bind 0.0.0.0:5672
    # backend server health checks (AMQP port)
    server rmq-node1 172.16.3.231:5672 check inter 2000 rise 2 fall 3
    server rmq-node2 172.16.3.232:5672 check inter 2000 rise 2 fall 3
    server rmq-node3 172.16.3.233:5672 check inter 2000 rise 2 fall 3
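The listener above balances only AMQP traffic. If the management UI should also be reachable through HAProxy, a second listener along these lines (an illustrative addition, not part of the original config) can proxy the management port in http mode:
listen RabbitMQ_Management
    # the management UI speaks HTTP
    mode http
    balance roundrobin
    bind 0.0.0.0:15672
    server rmq-node1 172.16.3.231:15672 check inter 2000 rise 2 fall 3
    server rmq-node2 172.16.3.232:15672 check inter 2000 rise 2 fall 3
    server rmq-node3 172.16.3.233:15672 check inter 2000 rise 2 fall 3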
2. Verify the effect
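A quick check (the HAProxy address below is a placeholder, since the article does not give it) is to confirm that the proxied AMQP port accepts TCP connections; repeated client connections should then rotate across the three backends:
[root@haproxy-1 ~]# nc -zv <haproxy_ip> 5672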

V. Notes and problems

1. Notes

2. The network partition problem

After the cluster was built, a "partitioned network" error was encountered.

For more information about network partitions, see: https://www.rabbitmq.com/partitions.html

https://my.oschina.net/moooofly/blog/424660

1) Symptom

rmq-node1 & rmq-node3 and rmq-node2 split into two network partitions.

2) View the logs

The logs on rmq-node1 and rmq-node2 confirm the time at which the network partition formed.

3) Cause

Mnesia considers another node down when it cannot be reached for longer than a minute (governed by net_ticktime). When the two disconnected nodes later regain contact, each believes the other went down; Mnesia then concludes that a network partition has occurred and records this in the RabbitMQ log file.
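For reference, net_ticktime is an Erlang kernel-level setting; the sketch below (using the default value, shown purely as an illustration) is how it would be tuned in /etc/rabbitmq/rabbitmq.config:
[root@rmq-node1 ~]# vim /etc/rabbitmq/rabbitmq.config
[
  {kernel, [
    %% time in seconds after which an unreachable node is considered down
    {net_ticktime, 60}
  ]}
].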

There are many possible causes of a network partition (the references above discuss them); whatever the trigger, the consequences are the same:

When a network partition occurs, each partition keeps running independently and regards the nodes in the other partition as unavailable. Queues, bindings, and exchanges can be created and deleted separately in each partition. A mirrored queue split by a partition ends up with a master in each partition, each partition working on its own, and other undefined, strange behavior may also occur.

4) Recovery

Manual handling

# The simplest and most direct method is to restart all nodes in the cluster, making sure that the first node brought back up is one from the trusted partition.

# You can also examine the logs of the three nodes to decide which partition to trust.

Automatic handling

RabbitMQ offers three automatic handling modes: pause-minority, pause-if-all-down, and autoheal. The default is ignore, which does nothing and is suitable only for extremely stable networks.

Automatic handling is configured in /etc/rabbitmq/rabbitmq.config via the "cluster_partition_handling" parameter, which accepts the following values (an example file follows the list):

pause_minority

{pause_if_all_down, [nodes], ignore | autoheal}

autoheal
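A minimal sketch enabling pause_minority (assuming the standard Erlang-term config format of RabbitMQ 3.6):
[root@rmq-node1 ~]# vim /etc/rabbitmq/rabbitmq.config
[
  {rabbit, [
    %% pause any node that finds itself in the minority partition
    {cluster_partition_handling, pause_minority}
  ]}
].
# Restart the service for the configuration to take effect
[root@rmq-node1 ~]# systemctl restart rabbitmq-server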

For the configuration file format, see: https://www.rabbitmq.com/configure.html#configuration-file
