NIO Series 6: Evaluation and analysis of the performance of the popular NIO frameworks Netty and Mina

Test method: implement an echo server on top of NIO with both Mina and Netty, and measure the performance for network messages of different sizes.
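For readers unfamiliar with the setup, here is a minimal sketch of what the Mina side of such an echo server could look like (the class names and the port are illustrative assumptions; the article does not include this code):

    import java.net.InetSocketAddress;
    import org.apache.mina.core.service.IoHandlerAdapter;
    import org.apache.mina.core.session.IoSession;
    import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

    public class MinaEchoServer {
        // Echo handler: write every received message straight back to the client.
        static class EchoHandler extends IoHandlerAdapter {
            @Override
            public void messageReceived(IoSession session, Object message) {
                session.write(message);
            }
        }

        public static void main(String[] args) throws Exception {
            NioSocketAcceptor acceptor = new NioSocketAcceptor();
            acceptor.setHandler(new EchoHandler());
            acceptor.bind(new InetSocketAddress(8080));   // port is an assumption
        }
    }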




Test environment (client and server):
CPU: Intel(R) Core(TM) i5-2320 @ 3.00GHz, cache size 6144 KB, 4 cores
JDK: 1.6.0_30-b12
Network: 1000Mb (gigabit NIC)
JVM memory: -Xms256m -Xmx256m
OS: CentOS 5.7, kernel 2.6.18-274.el5
Test tools: JMeter v2.4
Versions: Mina 2.0.7, Netty 3.6.2.Final

Configuration:
Mina: io-processor count = number of CPU cores; executor threads = number of CPU cores; initial buffer size set to 2048 (2k).
Netty: boss threads = Netty default (1); worker count = number of CPU cores; executor threads = number of CPU cores.

In principle, an echo-style application performs better and consumes less without a business executor thread pool. In real applications, however, business processing usually involves complex logic, caches, databases, external interface calls, and blocking IO. To prevent business execution from delaying the IO threads and dragging down throughput, the IO threads and the business threads are usually separated, so the test case also configures a business executor thread pool in order to examine the scheduling effectiveness of each framework's thread pools.
Mina thread pool setup:
    io processor: IoAcceptor acceptor = new NioSocketAcceptor(Integer.parseInt(ioPool));
    executor:     acceptor.getFilterChain().addLast("threadPool", new ExecutorFilter(Integer.parseInt(executorPool)));

Netty thread pool setup:
    io worker:    new NioWorkerPool(Executors.newCachedThreadPool(), Integer.parseInt(ioPool))
    executor:     new OrderedMemoryAwareThreadPoolExecutor(Integer.parseInt(executorPool), 0, 0)
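To show how these pieces fit together, here is a hedged sketch of the Netty 3.x wiring. The class names, the port, sizing the pools via availableProcessors(), and the use of an ExecutionHandler to place the business executor in the pipeline are assumptions made for illustration, not code taken from the article:

    import java.net.InetSocketAddress;
    import java.util.concurrent.Executors;
    import org.jboss.netty.bootstrap.ServerBootstrap;
    import org.jboss.netty.channel.*;
    import org.jboss.netty.channel.socket.nio.*;
    import org.jboss.netty.handler.execution.ExecutionHandler;
    import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

    public class NettyEchoServer {
        // Echo handler: write the received buffer back to the channel unchanged.
        static class EchoHandler extends SimpleChannelUpstreamHandler {
            @Override
            public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
                e.getChannel().write(e.getMessage());
            }
        }

        public static void main(String[] args) {
            int ioPool = Runtime.getRuntime().availableProcessors();        // worker threads = CPU cores
            int executorPool = Runtime.getRuntime().availableProcessors();  // business threads = CPU cores

            // Worker pool sized explicitly, as in the configuration above.
            WorkerPool<NioWorker> workerPool =
                    new NioWorkerPool(Executors.newCachedThreadPool(), ioPool);
            ServerBootstrap bootstrap = new ServerBootstrap(
                    new NioServerSocketChannelFactory(Executors.newCachedThreadPool(), workerPool));

            // Business thread pool, attached to the pipeline through an ExecutionHandler.
            final ExecutionHandler executionHandler = new ExecutionHandler(
                    new OrderedMemoryAwareThreadPoolExecutor(executorPool, 0, 0));

            bootstrap.setPipelineFactory(new ChannelPipelineFactory() {
                public ChannelPipeline getPipeline() {
                    return Channels.pipeline(executionHandler, new EchoHandler());
                }
            });
            bootstrap.bind(new InetSocketAddress(8080));   // port is an assumption
        }
    }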






Test results
Mina
Message size  TPS        CPU   Network IO  ART (average response time)  90% RT (90% response time)
1k            45024/sec  150%  50MB/sec    < 1ms                        1ms
2k            35548/sec  170%  81MB/sec    < 1ms                        1ms
5k            10155/sec  90%   55MB/sec    3ms                          1ms
10k           8740/sec   137%  98MB/sec    3ms                          4ms
50k           1873/sec   128%  100MB/sec   16ms                         19ms
100k          949/sec    128%  100MB/sec   33ms                         43ms

Netty
Message size  TPS        CPU   Network IO  ART (average response time)  90% RT (90% response time)
1k            44653/sec  155%  50MB/sec    < 1ms                        1ms
2k            35580/sec  175%  81MB/sec    < 1ms                        1ms
5k            17971/sec  195%  98MB/sec    3ms                          1ms
10k           8806/sec   195%  98MB/sec    3ms                          4ms
50k           1909/sec   197%  100MB/sec   16ms                         18ms
100k          964/sec    197%  100MB/sec   32ms                         45ms





Test review
Mina and Netty achieve nearly identical TPS at the 1k, 2k, 10k, 50k and 100k message sizes. Mina shows an obvious anomaly at the 5k message size (marked in red in the original results): its TPS and network IO throughput are both noticeably lower, while Netty's network IO throughput at 5k reaches 98MB/sec, essentially the limit of the gigabit network card. At message sizes of 5k and above the network IO is basically saturated, so the bottleneck is IO and the TPS and response times of the two frameworks show little difference.
The question, then, is why Mina's IO throughput drops so markedly at the 5k message size.




Test analysis
Analysis of the Mina and Netty source code shows that the two frameworks differ in their buffer allocation strategy when handling IO read events. In network IO processing, the number of bytes that each socket read call pulls out of the TCP buffer is always changing: it depends on packet size, the operating system's TCP buffer size, the TCP protocol implementation, the network link bandwidth, and other factors. For each read event, the NIO framework therefore has to dynamically allocate a buffer to temporarily hold the bytes that were read, and the efficiency of this buffer allocation is a key factor in the performance of a network IO framework.
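To illustrate the point about per-read buffer allocation, the following is a generic, hypothetical sketch of an adaptive allocation strategy. It is not the actual Mina or Netty implementation, and every name in it is made up for the example:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;

    // Hypothetical read-event handler: a fresh buffer is allocated for every read,
    // and its size for the next read is adapted to how many bytes actually arrived.
    public class AdaptiveReadExample {
        private int readBufferSize = 2048;   // initial guess (2k), matching the Mina configuration above

        void onReadable(SocketChannel channel) throws IOException {
            ByteBuffer buf = ByteBuffer.allocate(readBufferSize);   // temporary buffer for this read event
            int n = channel.read(buf);       // bytes returned vary with packet size, the OS TCP buffer,
                                             // the TCP implementation, and link bandwidth
            if (n < 0) {
                channel.close();
                return;
            }
            buf.flip();
            dispatch(buf);                   // hand the bytes to the protocol/business layer

            // Adapt the next allocation: grow when the buffer filled up, shrink when it was mostly empty.
            if (n == readBufferSize) {
                readBufferSize <<= 1;
            } else if (n < readBufferSize / 2 && readBufferSize > 64) {
                readBufferSize >>= 1;
            }
        }

        void dispatch(ByteBuffer data) {
            // placeholder: in a real framework this would fire a messageReceived event
        }
    }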
