Node.js performance on a multi-core single server: stress tests under various configurations and business loads

Tags: nginx, server, nginx reverse proxy

This is an original article from http://cnodejs.org; when reprinting, please credit the source and the author.
Author: Snoopy
http://cnodejs.org/blog?p=2334

In the previous article, I mentioned that binding Node.js processes to the cores of a multi-core CPU with taskset can improve stability and performance. Claims like that need data, so today we spent a day running stress tests. The results are for reference only, but they still tell us something.

Disclaimer: the performance test data is genuine; the results are for reference only.

Network environment: intranet

Stress-tested server:
OS: Linux 2.6.18
CPU: Intel(R) Xeon(TM) 3.40 GHz, 4 CPUs
Memory: 6 GB

Load-generating server:
Load tool: ab, bundled with Apache 2.2.19
OS: Linux 2.6.18
CPU: Pentium(R) Dual-Core E5800 @ 3.20 GHz, 2 CPUs
Memory: 1 GB

First round of testing: empty-framework test

Server-side node.js code:

var http = require('http');
var server = http.createServer(function (request, response) {
    response.writeHead(200, {'Content-Type': 'text/plain'});
    response.end('Hello World\n');
});
server.listen(8880); // Note: ports 8880-8883 are assigned as needed.
console.log('Server running at http://10.1.10.150:8880/');

This very simple code just returns "Hello World"; it is the official sample. We first test the bare performance of an empty node.js framework; performance under business load is tested later.

Nginx configuration:

upstream node_server_pool {
    server 10.1.10.150:8880;
    server 10.1.10.150:8881;
    server 10.1.10.150:8882;
    server 10.1.10.150:8883;
}
server {
    listen 8888;
    server_name 10.1.10.150;
    location / {
        proxy_pass http://node_server_pool;
    }
}
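With no other directives in the upstream block, nginx distributes requests with its default round-robin algorithm. As a rough illustration (a toy model only; real nginx also honors server weights and temporarily skips failed backends, and the function name pickBackend is mine), round-robin selection over the four backends looks like this:

```javascript
// Toy model of nginx's default round-robin upstream selection.
var backends = [
  '10.1.10.150:8880',
  '10.1.10.150:8881',
  '10.1.10.150:8882',
  '10.1.10.150:8883'
];

var next = 0;
function pickBackend() {
  var chosen = backends[next];
  next = (next + 1) % backends.length;
  return chosen;
}

// Eight consecutive requests cycle through the pool twice.
for (var i = 0; i < 8; i++) {
  console.log(pickBackend());
}
```

Each node.js process therefore sees roughly a quarter of the load arriving on port 8888.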

First test: start only one node.js service on the server, on port 8880, and run ab directly against port 8880.
Command   1000/10  1000/30  3000/10  3000/30  5000/30  7000/30  8000/30
RPS       1801     1613     1995     1993     2403     2233     1963
TPQ (ms)  0.63     0.62     0.49     0.46     0.42     0.45     0.52
Fail      0        0        0        0        0        0        170

Notes:
1000/10: denotes the command ./ab -c 1000 -t 10 http://10.1.10.150:8880/
RPS: requests processed per second, the main concurrency indicator.
TPQ: processing time per request, in milliseconds.
Fail: number of failed requests.
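The TPQ column is consistent with ab's "time per request (mean, across all concurrent requests)" figure, which is simply 1000 ms divided by RPS. A quick sanity check against the table (the helper name meanTimePerRequest is mine, not ab's):

```javascript
// Mean time per request across all concurrent connections, in ms.
// ab reports this as "Time per request ... (mean, across all
// concurrent requests)": 1000 ms divided by requests per second.
function meanTimePerRequest(rps) {
  return 1000 / rps;
}

console.log(meanTimePerRequest(2403).toFixed(2)); // ≈ 0.42, the 5000/30 column
console.log(meanTimePerRequest(1613).toFixed(2)); // ≈ 0.62, the 1000/30 column
```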

Second test:

node.js is started on two ports (8882-8883), each bound to a different CPU via taskset. nginx, running two worker processes bound to the other two CPUs, reverse-proxies and load-balances them while listening on port 8888, so ab sends its load to port 8888.

Tips: I originally ran four node.js processes bound to all 4 CPUs, with nginx also running four processes bound to the same 4 CPUs. The stress test kept stalling, and it turned out node.js and nginx were competing for CPU. Once they were given a clear dividing line, with nginx on the first two CPUs and node.js on the last two, the problem disappeared. Also, nginx is like a car: it needs to warm up first. Run a few stress passes after a restart, and once it is warm you can step on the accelerator. Haha.

Step 1: verify nginx load balancing by accessing port 8888 and shutting down the node.js processes one by one; the page only becomes unreachable when all of them are down, so load balancing is set up correctly. Step 2: verify the CPU binding by running the ab stress test against ports 8881-8882 separately; only the CPU bound to the tested port shows high load, so the binding works.
Command   1000/30  3000/30  4000/30  5000/30  7000/30
RPS       2526     2471     2059     2217     2016
TPQ (ms)  0.43     0.41     0.47     0.44     0.48
Fail      0        0        0        0        50

Third test:

node.js is started on three ports (8881-8883), each bound to a different CPU via taskset. nginx runs a single worker process bound to the first CPU, listens on port 8888, and reverse-proxies and load-balances the three, so ab sends its load to port 8888.
Command   1000/30  3000/30  5000/30  7000/30
RPS       2269     2305     2164     2149
TPQ (ms)  0.43     0.43     0.45     0.48
Fail      0        0        0        0

Summary of the first round:

Processes  Single   Dual     Three
Command    5000/30  3000/30  3000/30
RPS        2403     2471     2305
TPQ (ms)   0.42     0.41     0.43
Fail       0        0        0

Note: only figures at similar peak loads are compared here.

Why does the third configuration perform worse than the second, even though it runs one more node.js process? The bottleneck is nginx's forwarding speed with only one worker. It is like making an STI drive on a narrow country lane: no matter how much horsepower it has, the road conditions keep it slow, while an ordinary car on an elevated highway runs faster.

With an empty framework and no business processing, there is little difference between a single node.js process and multiple node.js processes behind nginx; after all, nginx forwarding itself costs some performance. Throughput improves by only about 5%-10%, though stability improves by 100%.

Next, the second round of performance testing looks at the difference between the two setups when each request carries business-processing load.

Second round of testing: with business-processing load

The nginx configuration is unchanged; the node.js code becomes:

var http = require('http');
var server = http.createServer(function (request, response) {
    for (var i = 0; i < 180000; i++) {
        var j = i / 3;
    }
    response.writeHead(200, {'Content-Type': 'text/plain'});
    response.end('Hello World\n');
});
server.listen(8882);
console.log('Server running at http://10.1.10.150:8882/');

The only difference here is that a loop is added to simulate business processing.
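Because Node.js runs JavaScript on a single thread, this loop executes synchronously on the event loop: while it spins, the process can neither accept nor answer any other request, which is why RPS drops by roughly an order of magnitude in the results below. A sketch isolating the busy-work (the function name is mine; the measured duration depends on hardware):

```javascript
// The simulated "business processing" from the handler above, isolated.
// While this loop runs, the event loop is blocked: no other request
// can be served until it finishes.
function simulateBusinessWork() {
  var j = 0;
  for (var i = 0; i < 180000; i++) {
    j = i / 3;
  }
  return j; // last value computed: 179999 / 3
}

var start = Date.now();
simulateBusinessWork();
console.log('blocked the event loop for ' + (Date.now() - start) + ' ms');
```

Spreading requests over several such processes, each pinned to its own CPU, lets one process crunch its loop while the others keep accepting connections.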

Test results in the second round:

Command   Processes  RPS  TPQ (ms)  50% req  Fail
1000/30   Single     203  4.93      4500 ms  0
1000/30   Dual       311  3.2       1500 ms  0
1000/30   Three      432  2.37      750 ms   0
3000/30   Single     198  5.03      5000 ms  0
3000/30   Dual       300  3.33      2000 ms  0
3000/30   Three      451  2.2       1500 ms  0
5000/30   Single     202  4.94      7000 ms  235
5000/30   Dual       294  3.42      2000 ms  0
5000/30   Three      412  2.44      2000 ms  0

Notes:
1000/30: denotes the command ./ab -c 1000 -t 30 http://10.1.10.150:8888/
RPS: requests processed per second, the main concurrency indicator.
TPQ: processing time per request, in milliseconds.
Fail: number of failed requests.
50% req: the time in milliseconds within which 50% of requests are returned.

Summary of both rounds:

var type1 = bare node.js service;
var type2 = nginx reverse proxy + load balancing + node.js bound to multiple CPUs;

When business processing is light, the performance difference between type1 and type2 is small. But once business-processing pressure rises, type2 handles about 100% more requests per second, responds about 200% faster, and is about 200% more stable. In short, on a 4-core server the best way to run a node.js service is to bind the nginx process to the first CPU and the node.js processes to the remaining three CPUs.
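That recommendation can be written down as a small planning helper (entirely illustrative; the function name is mine, and the actual pinning is done from the shell with taskset, not from JavaScript):

```javascript
// Split an n-core box per the article's conclusion: first CPU for
// nginx, remaining CPUs for node.js workers. Purely illustrative;
// the real binding is done with taskset when launching each process.
function planCpuBinding(totalCpus) {
  if (totalCpus < 2) {
    throw new Error('need at least 2 CPUs to separate nginx from node.js');
  }
  var nodeCpus = [];
  for (var cpu = 1; cpu < totalCpus; cpu++) {
    nodeCpus.push(cpu);
  }
  return { nginx: [0], node: nodeCpus };
}

console.log(planCpuBinding(4)); // { nginx: [ 0 ], node: [ 1, 2, 3 ] }
```

On the 4-CPU test server this yields one CPU for nginx and three node.js workers, the layout that performed best in the second round.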

My blog, with screenshots as proof: http://snoopyxdy.blog.163.com/
