RabbitMQ Problem Summary

Log Plugin

Turn on the tracing plugin (it is then available through the admin interface):

rabbitmq-plugins enable rabbitmq_tracing

rabbitmqctl trace_on

Enabling trace affects message write performance, so turn it off once you have finished tracing.

After installing the plugin above and turning on trace_on, you will find two more exchanges: amq.rabbitmq.trace and amq.rabbitmq.log, both of type topic.

By subscribing to these two topic exchanges you can receive detailed information such as client connections and message deliveries.
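For example, here is a minimal Java sketch (RabbitMQ Java client) that binds a temporary queue to amq.rabbitmq.trace and prints every traced publish and delivery; the localhost broker address is an assumption:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.QueueingConsumer;

public class TraceListener {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                 // assumed broker address
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();

        // Bind a throw-away queue to the built-in trace exchange:
        // "publish.#" matches messages entering exchanges,
        // "deliver.#" matches messages delivered to consumers.
        String queue = channel.queueDeclare().getQueue();
        channel.queueBind(queue, "amq.rabbitmq.trace", "publish.#");
        channel.queueBind(queue, "amq.rabbitmq.trace", "deliver.#");

        QueueingConsumer consumer = new QueueingConsumer(channel);
        channel.basicConsume(queue, true, consumer);  // auto-ack is fine for tracing

        while (true) {
            QueueingConsumer.Delivery d = consumer.nextDelivery();
            System.out.println(d.getEnvelope().getRoutingKey()
                    + ": " + new String(d.getBody(), "UTF-8"));
        }
    }
}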

Setting the RabbitMQ log level and path

vi /etc/rabbitmq/rabbitmq.config

[
  {rabbit, [{log_levels, [{connection, warning}]}]}
].

It can also be written like this:

[
  {rabbit, [{log_levels, [{connection, error}]}]}
].

PS: the final ] is followed by a period; please do not leave it out.

Log file path settings

vi /etc/rabbitmq/rabbitmq-env.conf

RABBITMQ_LOG_BASE=/vol/pic/log/rabbitmq

PS: the default path is /var/log/rabbitmq

Network Partition Recovery

http://www.rabbitmq.com/partitions.html

/etc/rabbitmq.config

[
  {rabbit,
    [{tcp_listeners, [5672]},
     {cluster_partition_handling, ignore}]
  }
].

Besides ignore, the documented cluster_partition_handling modes are pause_minority and autoheal; see the link above for how to choose between them.

RabbitMQ node capacity configuration

http://www.rabbitmq.com/memory.html

rabbitmqctl set_vm_memory_high_watermark 0.5    (changes the fraction of the machine's memory that RabbitMQ may use to 0.5)

This can be verified with rabbitmqctl status: look for {vm_memory_high_watermark,0.5} in the output.


Memory Management

The RabbitMQ server calculates the total amount of system memory when it starts, and then limits its usage according to the fraction specified by the vm_memory_high_watermark parameter. The value can also be changed dynamically with the command rabbitmqctl set_vm_memory_high_watermark <fraction>.

By default vm_memory_high_watermark is 0.4: when RabbitMQ uses more than 40% of the machine's memory, it blocks all publishing connections. Once the alarm clears (messages are consumed, or are written out to disk), the system resumes normal work.

A 32-bit system can use at most 2 GB per process, so even if the machine has 10 GB of RAM, the memory alarm line is 2 GB * 40% = 0.8 GB.

Information about the memory limit can be found in the log:

Example (log directory /home/data/rabbitmq/log):

=INFO REPORT==== 28-Apr-2015::14:11:16 ===

Memory limit set to 3804MB of 7609MB total.

When vm_memory_high_watermark is set to 0, the memory alarm is disabled and all message publishing is stopped. This can be used if you want to block publishing globally.
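For example (the value 0.4 below is simply the default mentioned above; restore whatever fraction you had configured):

rabbitmqctl set_vm_memory_high_watermark 0      (stops all publishing)
rabbitmqctl set_vm_memory_high_watermark 0.4    (back to the default; publishing resumes)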


Disk Management

Information about the disk free limit can likewise be found in the log:

Example (log directory /home/data/rabbitmq/log):

=INFO REPORT==== 28-Apr-2015::14:11:16 ===

Disk free limit set to 50MB

Before RabbitMQ's memory usage reaches the preset value and publishing is blocked, it tries to page messages from memory out to disk. Both persistent and transient messages are paged out (persistent messages are written to disk as soon as they enter the queue anyway).

If the paging ratio is at its preset value of 50% (0.5) and the memory high watermark is at its default of 0.4, then queue contents start being written to disk when memory usage reaches 0.4 * 0.5 = 20%.

You can change when paging to disk starts by setting vm_memory_high_watermark_paging_ratio (the default value is 0.5), for example:

[{rabbit, [{vm_memory_high_watermark_paging_ratio, 0.75}, {vm_memory_high_watermark, 0.4}]}].

The above configuration starts paging queue contents to disk when 30% of memory is used (0.4 * 0.75) and blocks publishers at 40%.

It is recommended that vm_memory_high_watermark not exceed 0.5 (50%).

Overly long RabbitMQ queue causes QueueingConsumer JVM memory overflow

This section resolves an issue in which JVM memory overflows in a QueueingConsumer because a RabbitMQ queue has grown too long.

Our servers use RabbitMQ as the message relay container. One day I wondered whether the RabbitMQ queues were being consumed in time, so I ran the following query: rabbitmqctl list_vhosts | grep -P ".*\.host" | xargs -i rabbitmqctl list_queues -p {} | grep "Queue". The result was startling: most of the servers' queues were essentially empty, but on some servers a single queue had accumulated more than 5,000,000 messages. A normal RabbitMQ process takes only around 100 MB to 200 MB of memory, but on the machines with these overly long queues the RabbitMQ process was consuming more than 2 GB.

Clearly something was wrong with the queue's consumer. The developers checked the logs: the Java service consuming this queue was stuck as well, and after being restarted (which was the wrong move; jstat and jstack should have been used to troubleshoot instead of simply restarting) it quickly got stuck again. Only then did we remember jstat. It showed that the JVM's memory was exhausted and the process had fallen into endless full GCs, so of course neither queue messages nor log output were being processed. The jstat output was as follows:

--------------------------------------------------------------------------

[root@mail ~]# jstat -gcutil 29389

S0     S1    E      O      P     YGC   YGCT   FGC     FGCT      GCT

100.00 0.00 100.00 100.00 59.59 1639 2.963 219078 99272.246 99275.209

--------------------------------------------------------------------------

At that point, use jmap to dump the Java heap with the command jmap -dump:format=b,file=29389.hprof 29389. Analyzing the resulting dump file in MAT (Eclipse Memory Analyzer) made it clear that QueueingConsumer was holding a huge number of objects, causing the JVM memory overflow.

A search of the Internet showed that others had run into a similar problem: "RabbitMQ QueueingConsumer possible memory leak". The workaround is to call the channel's basicQos method to limit the number of messages buffered in the client's memory. When choosing the value, it is recommended to read "Some queuing theory: throughput, latency and bandwidth" and weigh the trade-offs.

We somewhat arbitrarily set basicQos to 16, and after restarting the service the queue finally began to drain. Watching JVM memory usage with jstat showed no further surges or overflows.

Summary: when using RabbitMQ, be sure to set the QoS parameters sensibly. RabbitMQ's default behavior is actually quite fragile and prone to avalanches: "You have a queue in Rabbit. You have some clients consuming from that queue. If you don't set a QoS setting at all (basic.qos), then Rabbit will push all the queue's messages to the clients as fast as the network and the clients will allow." In other words, if for some reason the queue accumulates too many messages, the consumer can run out of memory and hang, and then a vicious circle begins: the messages in the queue keep piling up and are never consumed. A complete tragedy.
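A minimal sketch of the fix, using the older RabbitMQ Java client API referenced in this article (QueueingConsumer); the broker address, the queue name "work.queue" and the prefetch value of 16 are only illustrative:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.QueueingConsumer;

public class QosConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");               // assumed broker address
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();

        // Without basicQos the broker pushes the whole backlog to the client
        // as fast as the network allows; 16 caps the number of unacknowledged
        // messages buffered in the consumer's memory at any one time.
        channel.basicQos(16);

        QueueingConsumer consumer = new QueueingConsumer(channel);
        channel.basicConsume("work.queue", false, consumer);   // manual ack

        while (true) {
            QueueingConsumer.Delivery delivery = consumer.nextDelivery();
            // ... process delivery.getBody() here ...
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        }
    }
}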

