Message Queue Performance Comparison: ActiveMQ, RabbitMQ, ZeroMQ, and More


Dissecting Message Queues

Overview:

I spent some time profiling various message queue libraries for distributed messaging. In this analysis, I looked at several different aspects, including API features, ease of deployment and maintenance, and performance. The message queues fall into two groups: brokerless and brokered.

Brokerless message queues are peer-to-peer: no middleman sits between the endpoints exchanging messages. Brokered queues route messages through a server process, the broker.

 

Systems included in the performance analysis:

Brokerless:
nanomsg
ZeroMQ

Brokered:
ActiveMQ
NATS
Kafka
Kestrel
NSQ
RabbitMQ
Redis
ruby-nats
 

Test environment:

First, let's look at the performance numbers, since that is what people care about most. I measured two key indicators: throughput and latency.

All tests were run on a MacBook Pro with a 2.6 GHz Core i7 and 16 GB of RAM. The tests use a publish/subscribe topology with a single producer and a single consumer, which provides a good baseline. Scaling the topology out would be interesting, but it requires more hardware.

 

  

 

1. Throughput benchmarks:

Throughput is the number of messages a system can process per second. Note that there is no single "throughput" figure for a queue: we are sending messages between two different endpoints, so we observe a sender throughput and a receiver throughput, that is, how many messages can be sent per second and how many can be received per second.

In this test, 1,000,000 messages of 1 KB each are sent, and the elapsed time is measured on both the sending and receiving sides. 1 KB was chosen because it is closer to the message sizes we encounter in day-to-day development, whereas many benchmarks use smaller messages in the 100 to 500 byte range. Each system is different, so all we can do is use comparable test data across them. For the brokered systems, only a single broker instance is used.
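To make the methodology concrete, here is a minimal sketch of how sender-side and receiver-side throughput can be timed with pyzmq. It is illustrative only, not the harness behind the numbers below: it uses a PUSH/PULL pair instead of the pub/sub topology, and the endpoint address is arbitrary.

```python
# Illustrative throughput sketch (pyzmq), not the article's actual benchmark harness.
# Run receiver() in one process and sender() in another; each prints its own
# messages-per-second figure, mirroring the sender/receiver split described above.
import time
import zmq

MSG_COUNT = 1_000_000
PAYLOAD = b"x" * 1024              # 1 KB messages, as in the test described above
ENDPOINT = "tcp://127.0.0.1:5555"  # arbitrary local endpoint
ctx = zmq.Context()

def sender():
    sock = ctx.socket(zmq.PUSH)
    sock.connect(ENDPOINT)
    start = time.time()
    for _ in range(MSG_COUNT):
        sock.send(PAYLOAD)
    elapsed = time.time() - start
    print(f"sender:   {MSG_COUNT / elapsed:,.0f} msgs/sec")

def receiver():
    sock = ctx.socket(zmq.PULL)
    sock.bind(ENDPOINT)
    start = time.time()
    for _ in range(MSG_COUNT):
        sock.recv()
    elapsed = time.time() - start
    print(f"receiver: {MSG_COUNT / elapsed:,.0f} msgs/sec")
```

The sender figure measures how fast messages can be handed off to the messaging layer, while the receiver figure measures how fast they actually arrive; in the article's pub/sub tests those two rates diverge sharply for the brokerless libraries.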

Brokerless:

(Chart: sender and receiver throughput for the brokerless systems.)

We can see from the chart that send-side throughput is very high, but what is interesting is the gap between the sender and receiver rates.

ZeroMQ can send more than 5,000,000 messages per second but can only receive about 600,000 per second.

nanomsg, by contrast, receives nearly 3,000,000 messages per second.

 

 

Brokered:

(Chart: sender and receiver throughput for the brokered systems.)

 

 

It is immediately obvious that the brokered message queues have throughput at least two orders of magnitude lower than the brokerless ones; half of them handle fewer than 25,000 messages per second. Redis's throughput may be misleading: although Redis offers publish/subscribe, it is not really designed to be a robust message queue. In a similar fashion to ZeroMQ, Redis cuts off slow consumers, and it is simply not a reliable way to process this volume of messages, so we should treat it as an outlier. Kafka and ruby-nats show characteristics similar to Redis but can handle intermittent failures reliably. NATS shows superior throughput among the brokered systems here.

 

The charts also show that the brokered queues have roughly the same throughput on the sending and receiving sides, unlike the brokerless systems, where sender and receiver throughput differ significantly.

 

2. Latency benchmarks:

The second key performance indicator is message latency: how long it takes for a message to travel between endpoints. Intuition might suggest that this is simply the inverse of throughput, that is, if throughput is messages per second, then latency is seconds per message. However, this chart from the ZeroMQ white paper shows that this is not the case.

(Figure from the ZeroMQ white paper: latency versus throughput.)

The reality is that latency is not uniform across messages; it can be different for each one. In fact, the relationship between latency and throughput is rather more involved.

Unlike throughput, latency is not measured separately for the sender and receiver; it is measured for the path as a whole. Since every message has its own latency, we look at the average. Beyond that, we can look at how the average latency varies with the number of messages sent; intuition says that more messages mean more queuing, which means higher latency.
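As a rough sketch of how per-message latency can be measured, the example below embeds a wall-clock send timestamp in each message and averages the send-to-receive deltas on the receiving side. Again this is purely illustrative (pyzmq, arbitrary endpoint); both ends run on the same machine, so a single clock can be shared.

```python
# Illustrative latency sketch (pyzmq): each message carries its send time, and
# the receiver averages the send-to-receive deltas. Run receiver() and sender()
# in separate processes on the same machine so they share a wall clock.
import struct
import time
import zmq

ENDPOINT = "tcp://127.0.0.1:5556"  # arbitrary local endpoint
ctx = zmq.Context()

def sender(count):
    sock = ctx.socket(zmq.PUSH)
    sock.connect(ENDPOINT)
    for _ in range(count):
        sock.send(struct.pack("d", time.time()))  # 8-byte send timestamp

def receiver(count):
    sock = ctx.socket(zmq.PULL)
    sock.bind(ENDPOINT)
    total = 0.0
    for _ in range(count):
        (sent_at,) = struct.unpack("d", sock.recv())
        total += time.time() - sent_at
    print(f"average latency over {count} messages: {total / count * 1000:.3f} ms")
```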

In the charts below:

Blue: nanomsg

Red: ZeroMQ

(Charts: average message latency versus number of messages sent for the brokerless systems.)

 

On the whole, our assumption holds: as more messages are sent through the system, the latency of each message increases. Interestingly, the growth levels off somewhere between the 500,000 and 1,000,000 message marks. Another interesting observation is the initial spike in latency in the 1,000 to 5,000 message range, which is more pronounced for ZeroMQ than for nanomsg. Causality is hard to pin down here, but these changes may reflect how each library implements message batching and other optimizations around traversing the network stack. More data points would give better visibility.

      

Let's now look at the brokered queues, where some interesting new patterns appear.

ActiveMQ

Kafka

RabbitMQ

(Chart: average message latency versus number of messages sent for the brokered systems.)

Their latencies are markedly higher than those of the other brokered systems, which puts ActiveMQ and RabbitMQ in an AMQP category of their own.

 

 

Now that we have seen some empirical data on how these different libraries perform, I will look at them from a pragmatic perspective. Throughput and latency are important, but a library is not much use if it is difficult to use, deploy, or maintain.

 

 

ZeroMQ and Nanomsg

Technically speaking, nanomsg is not a message queue but a library that performs socket-style distributed messaging in a variety of convenient patterns. There is nothing to deploy beyond embedding the library in your application, which makes deployment a non-issue.

nanomsg was written by one of ZeroMQ's original authors and works as a library in a very similar way. From a development perspective, nanomsg provides a cleaner API: unlike ZeroMQ, there is no notion of a context to which sockets are bound. nanomsg also offers pluggable transports and messaging protocols, which makes it more open to extension, and its additional built-in scalability protocols make it quite attractive.
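For a feel of the socket-style API these libraries expose, here is a minimal ZeroMQ publish/subscribe sketch using pyzmq (the endpoint and topic are made up for the example). The zmq.Context object is the part nanomsg does away with: its C API creates a socket directly with nn_socket and binds it with nn_bind, no context required.

```python
# Minimal ZeroMQ pub/sub sketch using pyzmq (illustrative only; endpoint and
# topic are arbitrary). ZeroMQ sockets are created from a context object;
# nanomsg's equivalent (nn_socket + nn_bind) needs no context at all.
import time
import zmq

ctx = zmq.Context()                      # ZeroMQ-specific: sockets live in a context

pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5557")

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5557")
sub.setsockopt(zmq.SUBSCRIBE, b"bench")  # subscribe to the "bench" topic prefix

time.sleep(0.1)                          # give the subscription time to propagate

pub.send(b"bench hello")
print(sub.recv())                        # b'bench hello'
```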

Like ZeroMQ, nanomsg guarantees that messages are delivered atomically, intact, and in order, but it does not guarantee that they are delivered at all: a partial message will never arrive, but it is possible for a message not to arrive at all.

ZeroMQ developer Martin Sustrik states this plainly:

"Guaranteed delivery is a myth. Nothing is 100% guaranteed. That's the nature of the world we live in. What we should do instead is to build an internet-like system that is resilient in face of failures and routes around damage."

 

ActiveMQ and RabbitMQ

ActiveMQ and RabbitMQ are both implementations of AMQP. As brokers, their role is to deliver messages carefully and reliably. Both ActiveMQ and RabbitMQ support persistent and non-persistent delivery. By default, messages are written to disk so that they survive a broker restart and are not lost. Both also support synchronous and asynchronous publishing; the former has a substantial impact on latency. To guarantee delivery, these brokers use message acknowledgements, which also incur a significant latency cost.
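As an illustration of what persistent delivery and acknowledgements look like in practice, here is a small RabbitMQ sketch using the pika Python client. The queue name is invented for the example, and this is only one possible setup, not the configuration used in the benchmarks.

```python
# RabbitMQ sketch with pika: durable queue, persistent message, publisher
# confirms, and an explicit consumer acknowledgement. Each of these guarantees
# adds latency, which is the trade-off discussed above.
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

ch.queue_declare(queue="bench", durable=True)  # queue definition survives a broker restart
ch.confirm_delivery()                          # broker confirms every publish (synchronous, slower)

ch.basic_publish(
    exchange="",
    routing_key="bench",
    body=b"x" * 1024,
    properties=pika.BasicProperties(delivery_mode=2),  # 2 = persist the message to disk
)

def on_message(channel, method, properties, body):
    # Explicit ack: the broker only forgets the message once we confirm it.
    channel.basic_ack(delivery_tag=method.delivery_tag)

ch.basic_consume(queue="bench", on_message_callback=on_message)
ch.start_consuming()  # blocks, handling deliveries until interrupted
```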

In terms of availability and fault tolerance, these brokers support clustering, either with shared storage or shared-nothing. Queues can be replicated across cluster nodes so that there is no single point of failure and no message loss.

AMQP is a non-trivial protocol, one its own creators describe as over-engineered. The extra guarantees come at the cost of considerable complexity and a performance trade-off, and the protocol is harder for clients to implement and use.

Because they are message brokers, ActiveMQ and RabbitMQ are additional moving parts that must be managed in your distributed system, which adds deployment and maintenance costs.

Redis

Last up is Redis. Although Redis works well for lightweight messaging and transient storage, I can't advocate using it as the backbone of a distributed messaging system. Its pub/sub is fast but limited in features, so building a robust system on top of it takes a lot of work; the solutions described above are better suited to the problem, and Redis also has some scaling concerns.

On the other hand, Redis is easy to use, easy to deploy and manage, and has a relatively small footprint. Depending on the use case, it can be a great choice for real-time messaging.
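For context, here is what Redis pub/sub looks like with the redis-py client (the channel name is arbitrary). Note that publish is fire-and-forget: it only reports how many subscribers were connected at that moment, and anything missed is simply gone, which is the limitation discussed above.

```python
# Redis pub/sub sketch with redis-py: fast and simple, but fire-and-forget.
# A message published while no subscriber is listening is lost; there is no
# persistence, acknowledgement, or replay.
import redis

r = redis.Redis(host="localhost", port=6379)

sub = r.pubsub()
sub.subscribe("bench")
confirmation = sub.get_message(timeout=1)  # first message is the subscribe confirmation

receivers = r.publish("bench", b"x" * 1024)  # returns how many subscribers got the message
print(f"delivered to {receivers} subscriber(s); missed copies are never retried")

msg = sub.get_message(timeout=1)
if msg and msg["type"] == "message":
    print(f"received {len(msg['data'])} bytes on channel {msg['channel']!r}")
```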

 

 

Original blog post: http://bravenewgeek.com/dissecting-message-queues/
