Network optimization: tuning Nginx and Node.js


Nginx and Node.js are a natural pairing for building high-throughput web applications. Both are built on an event-driven model and are designed to break through the C10K bottleneck that limits traditional web servers such as Apache. Their default configurations already allow high concurrency, but there is still work to do if you want to serve more than a few thousand requests per second on inexpensive hardware.

This article assumes that you use Nginx's HttpProxyModule as a reverse proxy in front of upstream Node.js servers. We will cover sysctl tuning on Ubuntu 10.04, as well as tuning the Node.js application and Nginx itself. You can achieve the same goals on a Debian system, but the individual steps differ.

Network tuning

Fine-tuning Nginx and Node.js individually may be futile unless we first understand the underlying transport mechanism between them and optimize it specifically. Typically, Nginx connects clients to the upstream application over TCP sockets.

The system imposes many thresholds and limits on TCP, set through kernel parameters. The default values of these parameters are chosen for general-purpose use and do not suit the high-traffic, short-lived connection pattern that a web server produces.

Here are some candidate parameters for TCP tuning. To make them take effect, put them in /etc/sysctl.conf, or in a new configuration file such as /etc/sysctl.d/99-tuning.conf, and then run sysctl -p to have the kernel load them. We use a sysctl cookbook to do this grunt work.

Note that the values listed here are safe to use, but we still recommend that you look up what each parameter means and choose a value suited to your own load, hardware, and usage.

net.ipv4.ip_local_port_range='1024 65000'
net.ipv4.tcp_tw_reuse='1'
net.ipv4.tcp_fin_timeout='15'
net.core.netdev_max_backlog='4096'
net.core.rmem_max='16777216'
net.core.somaxconn='4096'
net.core.wmem_max='16777216'
net.ipv4.tcp_max_syn_backlog='20480'
net.ipv4.tcp_max_tw_buckets='400000'
net.ipv4.tcp_no_metrics_save='1'
net.ipv4.tcp_rmem='4096 87380 16777216'
net.ipv4.tcp_syn_retries='2'
net.ipv4.tcp_synack_retries='2'
net.ipv4.tcp_wmem='4096 65536 16777216'
vm.min_free_kbytes='65536'
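
To load the settings immediately without a reboot, you can point sysctl at the file (assuming the /etc/sysctl.d/99-tuning.conf path suggested above):

sudo sysctl -p /etc/sysctl.d/99-tuning.conf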
Let's highlight several of the important ones.

net.ipv4.ip_local_port_range

To serve a downstream client on behalf of an upstream application, Nginx must open two TCP connections: one to the client and one to the application. When a server receives many connections, the system's available ports are quickly depleted. Modifying the net.ipv4.ip_local_port_range parameter widens the range of available ports. If you find the error "possible SYN flooding on port 80. Sending cookies." in /var/log/syslog, it means the system cannot find an available port; increasing net.ipv4.ip_local_port_range can reduce this error.
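
You can check the range currently in effect before widening it:

sysctl net.ipv4.ip_local_port_range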

net.ipv4.tcp_tw_reuse

When the server has to cycle through a large number of TCP connections, it accumulates many connections in the TIME_WAIT state. TIME_WAIT means the connection itself is closed, but its resources have not yet been released. Setting net.ipv4.tcp_tw_reuse to 1 lets the kernel reuse these connections when it is safe to do so, which is much cheaper than establishing a brand-new connection.
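
To flip this single parameter at runtime, sysctl -w works as well:

sudo sysctl -w net.ipv4.tcp_tw_reuse=1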

net.ipv4.tcp_fin_timeout

This is the minimum time a connection in the TIME_WAIT state must wait before being recycled. Lowering it speeds up recycling.

How to check connection status

Use netstat:

netstat -tan | awk '{print $6}' | sort | uniq -c
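
On a busy server, the output might look roughly like this (illustrative counts only, echoing the ss summary below; the first two lines are noise from the netstat header):

      1 established)
      1 Foreign
    311 ESTABLISHED
  47135 TIME_WAIT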
Or use ss:

ss -s
Nginx
As the load on our web servers grew, we began to hit some strange limits in Nginx. Connections were being dropped, and the kernel kept reporting SYN floods. Meanwhile, load average and CPU usage were low; the servers were clearly capable of handling more connections, which was frustrating.

After some investigation, we found a very large number of connections in the TIME_WAIT state. This is the output from one of the servers:

ss -s

Total: 388 (kernel 541)
TCP: 47461 (estab 311, closed 47135, orphaned 4, synrecv 0, timewait 47135/0), ports 33938

Transport Total     IP        IPv6
*         541       -         -
RAW       0         0         0
UDP       13        10        3
TCP       326       325       1
INET      339       335       4
FRAG      0         0         0
There are 47,135 TIME_WAIT connections! And, as ss shows, they are all closed connections. That means the server has consumed most of its available ports, and it also implies that the server is allocating a new port for every connection. Tuning the network helped a little with this problem, but there still were not enough ports.

Continuing the investigation, I found the documentation for the upstream keepalive directive, which reads:

Sets the maximum number of idle keepalive connections to upstream servers that are kept in the cache of each worker process.

Interesting. In theory, this setting minimizes connection waste by sending requests over connections that have already been cached, which is much cheaper than opening new ones. The documentation also mentions setting proxy_http_version to "1.1" and clearing the "Connection" header. On further research, this turns out to be a good idea, because HTTP/1.1 greatly improves TCP connection reuse compared to HTTP/1.0, and Nginx uses HTTP/1.0 for upstream connections by default.

As the documentation recommends, our upstream configuration becomes this:

upstream backend_nodejs {
    server nodejs-3:5016 max_fails=0 fail_timeout=10s;
    server nodejs-4:5016 max_fails=0 fail_timeout=10s;
    server nodejs-5:5016 max_fails=0 fail_timeout=10s;
    server nodejs-6:5016 max_fails=0 fail_timeout=10s;
    keepalive 512;
}
I also modified the proxy settings in the server section as recommended. At the same time, I added a proxy_next_upstream directive to skip failed servers, adjusted the client's keepalive_timeout, and turned off access logging. The configuration becomes this:

server {
    listen 80;               # assumed: standard HTTP port (value lost in transcription)
    server_name fast.gosquared.com;

    client_max_body_size 16M;
    keepalive_timeout 10;    # assumed value (lost in transcription)

    location / {
        proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
        proxy_set_header Connection "";
        proxy_http_version 1.1;
        proxy_pass http://backend_nodejs;
    }

    access_log off;
    error_log /dev/null crit;
}
With the new configuration, the number of sockets used by the servers dropped by 90%. Requests can now be transferred over far fewer connections. The new output is as follows:

ss -s

Total: 558 (kernel 604)
TCP: 4675 (estab 485, closed 4183, orphaned 0, synrecv 0, timewait 4183/0), ports 2768

Transport Total     IP        IPv6
*         604       -         -
RAW       0         0         0
UDP       13        10        3
TCP       492       491       1
INET      505       501       4
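
As a spot check that upstream keepalive is working, you can filter ss for established connections to the backend port (5016, from the upstream block above); a small, stable set of connections suggests they are being reused rather than constantly reopened:

ss -tn state established '( dport = :5016 )'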
Node.js

Thanks to its event-driven design, which handles I/O asynchronously, Node.js can process large numbers of connections and requests out of the box. Although there are other tuning approaches, this article will focus on the Node.js process itself.

Node is single-threaded and does not automatically use multiple cores. In other words, the application will not automatically harness the full capacity of the server.

Implementing Node Process Clustering

We can modify the application so that it forks multiple processes, all accepting connections on the same port, allowing the load to span multiple cores. Node has a cluster module that provides all the tools needed to achieve this, but getting them into an application still takes some grunt work. If you are using Express, eBay has a module called cluster2 that you can use.
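
Here is a minimal sketch of such a cluster setup; the hello-world handler is a placeholder, and port 5016 is borrowed from the upstream block above rather than taken from the original application:

var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster) {
    // Spawn N-1 workers, leaving one core for the kernel scheduler
    // (see the next section for why N-1).
    var workers = os.cpus().length - 1;
    for (var i = 0; i < workers; i++) {
        cluster.fork();
    }
} else {
    // Workers all accept connections on the same port.
    http.createServer(function (req, res) {
        res.end('hello\n');
    }).listen(5016);
}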

Prevent context switching

When running multiple processes, you should ensure that each CPU core is kept busy by only one process at a time. In general, if the CPU has N cores, we should spawn N-1 application processes. That gives each process a full time slice while leaving one core for the kernel scheduler and its other tasks. We should also make sure the server runs essentially nothing other than Node.js, to avoid CPU contention.

We learned this the hard way by deploying two Node.js applications on the same server, each spawning N-1 processes. The result was that they fought each other for CPU, driving system load up. Even though our servers are 8-core machines, the performance overhead of context switching was clearly noticeable. A context switch is the CPU suspending its current task in order to execute another; on each switch, the kernel must save all the state of the current process and then load and run another one. To solve the problem, we reduced the number of processes each application spawned so that they shared the CPU fairly, and system load dropped:

[Figure: system load before and after reducing the process count]

Note in the figure above how the system load (blue line) falls below the number of CPU cores (red line). We observed the same thing on the other servers. Since the overall workload remained unchanged, the performance improvement in the figure can only be attributed to the reduction in context switching.
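
If you want to verify this on your own servers, the context-switch rate is visible in the cs column of vmstat:

vmstat 1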
