Advanced Message Queuing Protocol (AMQP) over InfiniBand


"Design and Evaluation of Benchmarks for Financial Applications Using Advanced Message Queuing Protocol (AMQP) over InfiniBand"
Hari Subramoni, Gregory Marsh, Sundeep Narravula, Ping Lai, and Dhabaleswar K. Panda
Department of Computer Science and Engineering, The Ohio State University

InfiniBand is a network interconnect technology that has emerged in recent years, offering high bandwidth and low latency. It provides a unified interconnect, built from point-to-point switched-fabric links, that can carry storage I/O, network I/O, and inter-process communication (IPC), eliminating bottlenecks that currently constrain servers and storage systems. It is a high-performance I/O technology aimed at servers rather than PCs and is used mainly in data centers.
InfiniBand supports two transfer models: channel semantics and memory semantics. Channel semantics use discrete send and receive operations, while memory semantics use RDMA to let a process access a remote process's memory without involving the remote CPU. InfiniBand also supports socket-based TCP/IP applications through IP over InfiniBand (IPoIB) and the Sockets Direct Protocol (SDP).
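To make the two semantics concrete, here is a minimal sketch against the Linux ibverbs API (libibverbs). It assumes a connected queue pair (qp) and a registered memory region (mr) already exist; connection setup is omitted, and the helper names are illustrative rather than taken from the paper.

    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    /* Channel semantics: post a SEND; the receiver must have posted
     * a matching RECV work request to consume the message. */
    static int post_send(struct ibv_qp *qp, struct ibv_mr *mr,
                         void *buf, size_t len)
    {
        struct ibv_sge sge = {
            .addr = (uintptr_t)buf, .length = (uint32_t)len, .lkey = mr->lkey
        };
        struct ibv_send_wr wr, *bad;
        memset(&wr, 0, sizeof(wr));
        wr.opcode     = IBV_WR_SEND;
        wr.sg_list    = &sge;
        wr.num_sge    = 1;
        wr.send_flags = IBV_SEND_SIGNALED;
        return ibv_post_send(qp, &wr, &bad);
    }

    /* Memory semantics: RDMA-write directly into the remote buffer.
     * The remote CPU is not involved; we only need the buffer's
     * address and rkey, exchanged out of band at connection setup. */
    static int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                               void *buf, size_t len,
                               uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge = {
            .addr = (uintptr_t)buf, .length = (uint32_t)len, .lkey = mr->lkey
        };
        struct ibv_send_wr wr, *bad;
        memset(&wr, 0, sizeof(wr));
        wr.opcode              = IBV_WR_RDMA_WRITE;
        wr.sg_list             = &sge;
        wr.num_sge             = 1;
        wr.send_flags          = IBV_SEND_SIGNALED;
        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = rkey;
        return ibv_post_send(qp, &wr, &bad);
    }

The AMQP traffic in this study runs over IPoIB and SDP rather than raw verbs; the sketch only illustrates why RDMA (memory semantics) avoids the remote CPU and protocol-stack costs discussed in the conclusion.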

AMQP performance test


Test environment:
Nodes:
CPU: quad dual-core Intel Xeon processors, 6 GB RAM
Operating system: Red Hat Enterprise Linux 4 Update 4
Network: 1 GigE network interface controller (NIC) plus an InfiniBand host channel adapter (HCA)

Broker:
Apache Qpid, version M3 alpha
Runtime parameter: TCP_NODELAY enabled
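TCP_NODELAY disables Nagle's algorithm, so small AMQP frames go out immediately instead of waiting to be coalesced, which matters for the small-message latency results below. A minimal sketch of setting the option on a connected socket with the standard POSIX API (this is not Qpid's actual code):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Disable Nagle's algorithm on a connected TCP socket so small
     * AMQP frames are sent immediately rather than coalesced. */
    int enable_tcp_nodelay(int sockfd)
    {
        int one = 1;
        return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                          &one, sizeof(one));
    }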

Baseline environment test:
Maximum network throughput (measured with a plain socket benchmark):
IPoIB: 550 Mbps
SDP: 650 Mbps
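As an illustration of this kind of socket-level throughput test, here is a minimal sender sketch: it streams a fixed volume of data over a connected TCP socket and reports the rate. Host, port, buffer size, and total volume are placeholders, and the matching receiver (which just accepts and reads into a scratch buffer) is not shown. To run it over IPoIB, connect to the IPoIB interface's IP address; to run it over SDP, transparently redirect the socket with an LD_PRELOAD of the libsdp library (or use the AF_INET_SDP address family). The paper's exact harness is not given in the source.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    #define BUF_SIZE    (64 * 1024)   /* per-send buffer (placeholder) */
    #define TOTAL_BYTES (1L << 30)    /* stream 1 GiB in total         */

    int main(int argc, char **argv)
    {
        const char *host = argc > 1 ? argv[1] : "192.168.1.2"; /* placeholder */
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(5001);            /* placeholder port */
        inet_pton(AF_INET, host, &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            perror("connect");
            return 1;
        }

        char buf[BUF_SIZE];
        memset(buf, 'x', sizeof(buf));

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        long sent = 0;
        while (sent < TOTAL_BYTES) {
            ssize_t n = send(fd, buf, sizeof(buf), 0);
            if (n <= 0) { perror("send"); return 1; }
            sent += n;
        }

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec)
                    + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.1f MB/s\n", sent / secs / 1e6);

        close(fd);
        return 0;
    }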

 

Direct Exchange - single publisher, single consumer (DE-SPSC)

[Figure: (a) DE-SPSC small-message latency; (b) DE-SPSC large-message latency; (c) DE-SPSC message rate; (d) MPI-level message rate]
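In the DE-SPSC benchmark, one publisher sends messages through AMQP's direct exchange and one consumer drains a queue bound to it. The source does not include the paper's benchmark code (which used the Qpid client); as an illustrative sketch of the access pattern only, here is a publisher loop written with the rabbitmq-c client library against the predeclared amq.direct exchange. Host, port, credentials, routing key, and message count are all placeholders.

    #include <amqp.h>
    #include <amqp_tcp_socket.h>
    #include <stdio.h>

    int main(void)
    {
        /* Broker address and credentials are placeholders. */
        amqp_connection_state_t conn = amqp_new_connection();
        amqp_socket_t *sock = amqp_tcp_socket_new(conn);
        if (amqp_socket_open(sock, "localhost", 5672) != 0) {
            fprintf(stderr, "cannot reach broker\n");
            return 1;
        }
        amqp_login(conn, "/", 0, 131072, 0, AMQP_SASL_METHOD_PLAIN,
                   "guest", "guest");
        amqp_channel_open(conn, 1);

        /* Single publisher: send N messages to the predeclared direct
         * exchange; a consumer whose queue is bound with the same
         * routing key receives them in order. */
        amqp_bytes_t body = amqp_cstring_bytes("payload");
        for (int i = 0; i < 100000; i++)
            amqp_basic_publish(conn, 1,
                               amqp_cstring_bytes("amq.direct"),
                               amqp_cstring_bytes("bench.key"),
                               0, 0, NULL, body);

        amqp_channel_close(conn, 1, AMQP_REPLY_SUCCESS);
        amqp_connection_close(conn, AMQP_REPLY_SUCCESS);
        amqp_destroy_connection(conn);
        return 0;
    }

A benchmark of this shape typically derives latency from timestamps taken at the publisher and consumer, and message rate from messages delivered per unit time.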

 

Direct Exchange - multiple publishers, multiple consumers (DE-MPMC)

[Figure: (a) DE-MPMC bandwidth; (b) DE-MPMC CPU utilization over IPoIB; (c) DE-PP small-message latency; (d) DE-PP large-message latency]

 

Fanout Exchange - single publisher, multiple consumers (FE-SPMC)

[Figure: (a) bandwidth; (b) IPoIB message rate; (c) 1 GigE message rate; (d) SDP message rate]
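In the fanout pattern, every queue bound to the exchange receives a copy of each message, so one publisher drives all consumers at once. A sketch of the consumer-side setup, reusing the connection from the DE-SPSC example above and again using rabbitmq-c as an illustrative stand-in for the paper's Qpid client; the predeclared amq.fanout exchange is assumed.

    /* Each consumer: declare a private, server-named queue, bind it
     * to the predeclared fanout exchange (the routing key is ignored
     * by fanout), and start consuming without acknowledgements. */
    static void setup_fanout_consumer(amqp_connection_state_t conn)
    {
        amqp_queue_declare_ok_t *q =
            amqp_queue_declare(conn, 1, amqp_empty_bytes,
                               0 /* passive */, 0 /* durable */,
                               1 /* exclusive */, 1 /* auto_delete */,
                               amqp_empty_table);
        amqp_queue_bind(conn, 1, q->queue,
                        amqp_cstring_bytes("amq.fanout"),
                        amqp_empty_bytes, amqp_empty_table);
        amqp_basic_consume(conn, 1, q->queue, amqp_empty_bytes,
                           0 /* no_local */, 1 /* no_ack */,
                           0 /* exclusive */, amqp_empty_table);
    }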

 

Topic Exchange - single publisher, single consumer (TE-SPSC) benchmark

[Figure: (a) bandwidth; (b) IPoIB message rate; (c) 1 GigE message rate; (d) SDP message rate]
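A topic exchange routes on pattern matches against a dotted routing key ('*' matches exactly one word, '#' matches zero or more), so TE-SPSC resembles DE-SPSC but with pattern matching on the broker's critical path. A small binding sketch under the same assumptions as the previous examples; the quote-style routing keys are invented here to echo the paper's financial-application setting, not taken from it.

    /* Bind an existing queue to the predeclared topic exchange with a
     * wildcard pattern: "quote.*.*" matches "quote.NYSE.IBM" but not
     * "trade.NYSE.IBM". */
    static void bind_topic(amqp_connection_state_t conn, amqp_bytes_t queue)
    {
        amqp_queue_bind(conn, 1, queue,
                        amqp_cstring_bytes("amq.topic"),
                        amqp_cstring_bytes("quote.*.*"),
                        amqp_empty_table);
    }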

Conclusion:
IPoIB handles small messages better, while SDP performs better for large messages.
The broker's CPU usage grows in proportion to the number of producers and consumers.
IPoIB is very sensitive to message rate, which reflects the protocol-stack overhead of running TCP/IP over InfiniBand; using RDMA directly would improve performance further.
The test data suggests that the best approach is to mix these transports and to add multiple brokers to relieve the single-broker bottleneck.
