Performance optimizations for new features in Node.js v0.12


v0.12's long development cycle (nine months so far and counting, the longest ever) has given the core team and contributors ample opportunity to optimize performance. This article describes some of the most notable optimizations.

Writable streams support corked mode

Writable streams now support a corked mode, similar to the TCP_CORK and TCP_NOPUSH socket options you may know from tcp(7) ("man tcp").

When corked, data written to the stream is queued up until the stream is uncorked again. Node.js can then combine the smaller writes into larger ones, reducing the number of system calls and TCP round trips.

The HTTP module has been upgraded to transparently use corked mode when sending chunked request or response bodies. If you look at strace(1) output often, you will now see more writev(2) system calls and fewer write(2) calls.
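As an illustration (not from the original article), here is a minimal sketch of calling cork() and uncork() by hand on a net socket; the HTTP module does the equivalent for you when it sends a chunked body, and the host and request shown are placeholders only:

    var net = require('net');

    var socket = net.connect(80, 'example.com');  // placeholder host

    socket.on('connect', function () {
      socket.cork();                       // queue up the following writes
      socket.write('GET / HTTP/1.1\r\n');
      socket.write('Host: example.com\r\n');
      socket.write('Connection: close\r\n');
      socket.write('\r\n');
      socket.uncork();                     // flush the queue, ideally as one writev(2)
    });

    socket.on('data', function (chunk) {
      process.stdout.write(chunk);
    });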

Improved TLS performance

The TLS module received a considerable amount of refactoring in Node.js v0.12.

In Node.js v0.10, the TLS module sits on top of the net module as a transport stream that transparently encrypts and decrypts network traffic. Such layering makes sense from an engineering standpoint, but it introduces overhead (extra memory copies and extra calls in and out of the V8 VM) and hinders optimizations.

So in Node.js v0.12, the TLS module has been rewritten to sit directly on top of libuv. It now pulls incoming network traffic straight into the decryption layer, without going through an intermediate stream.

Although the evaluation is not very scientific, benchmarks with a null cipher suggest that TLS is now generally about 10% faster and uses less memory. (The reduced memory footprint is probably due in part to refactored memory management, another v0.12 optimization.)

(In case you are wondering: a null cipher does not encrypt the payload; it is useful for measuring the overhead of the framework and the protocol.)

For end users, most of the changes to the TLS module are transparent. The most conspicuous one is that TLS connections are now obtained as tls.TLSSocket instances rather than tls.CryptoStream instances.
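A minimal sketch of what that looks like in practice, assuming a server key and certificate in hypothetical server-key.pem and server-cert.pem files:

    var tls = require('tls');
    var fs = require('fs');

    var options = {
      key: fs.readFileSync('server-key.pem'),    // placeholder file names
      cert: fs.readFileSync('server-cert.pem')
    };

    var server = tls.createServer(options, function (socket) {
      // In v0.10 this callback received a tls.CryptoStream;
      // in v0.12 it receives a tls.TLSSocket.
      console.log(socket instanceof tls.TLSSocket);  // true on v0.12 and later
      socket.end('hello over TLS\n');
    });

    server.listen(4433);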

Improved crypto performance

Several encryption and decryption algorithms should now be faster, sometimes much faster. First, some background:

Encryption and decryption in Node.js are implemented with the OpenSSL library. The algorithms in OpenSSL are written in C, with hand-written assembly versions for various platforms and architectures.

Node.js v0.10 already used the assembly versions for some of them, and v0.12 expands that coverage considerably. In addition, AES-NI is now used when the CPU supports it, which is the case for most x86 processors produced in the last three or four years.

On Linux, if running grep ^flags /proc/cpuinfo | grep -w aes prints any matches, your system supports AES-NI. Be aware, however, that hypervisors such as VMware or VirtualBox may hide real CPU capabilities, including AES-NI, from the guest operating system.

An interesting consequence of enabling AES-NI is that an industrial-strength cipher suite such as AES128-GCM-SHA256 is now considerably faster than a non-encrypting cipher such as NULL-MD5!
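As a rough illustration (not a benchmark from the original article), a micro-benchmark along the following lines can show the effect; the chunk size, iteration count, and cipher choice are arbitrary, and the result depends heavily on whether AES-NI is actually available:

    var crypto = require('crypto');

    var key = crypto.randomBytes(16);      // 128-bit key
    var iv = crypto.randomBytes(12);       // 96-bit IV, typical for GCM
    var chunk = new Buffer(64 * 1024);     // Buffer.alloc() did not exist yet in v0.12
    chunk.fill(120);                       // fill with 'x'

    var iterations = 10000;
    var start = process.hrtime();

    var cipher = crypto.createCipheriv('aes-128-gcm', key, iv);
    for (var i = 0; i < iterations; i++) cipher.update(chunk);
    cipher.final();

    var diff = process.hrtime(start);
    var seconds = diff[0] + diff[1] / 1e9;
    var megabytes = chunk.length * iterations / (1024 * 1024);
    console.log('aes-128-gcm: %s MB/s', (megabytes / seconds).toFixed(1));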

Reduced garbage collector pressure

One side effect of the multi-context refactoring is that it has drastically reduced the number of persistent handles in the Node.js core.

A persistent handle is a strong reference to an object on the V8 heap; it prevents the object from being reclaimed by the garbage collector until the reference is removed again. (In garbage collector terms, it is an artificial GC root.)

Node.js uses persistent handles to cache frequently used values, such as strings or object prototypes. However, persistent handles require a special post-processing step in the garbage collector, with overhead that grows linearly with the number of handles.

As a result of the multi-context cleanup, most persistent handles have either been removed or switched to a lighter-weight mechanism (known as "eternal handles"; you can probably guess what that name implies).

The end result is that your program spends less time inside the garbage collector and more time doing actual work. Occurrences of v8::internal::GlobalHandles::PostGarbageCollectionProcessing() in node --prof output should now be much rarer.

Better cluster performance

In Node.js v0.10, the cluster module leaves it to the operating system to distribute incoming connections across the worker processes.

It turned out that on Solaris and Linux, some workloads were distributed very unevenly between workers. To mitigate this, Node.js v0.12 switched the default to a round-robin approach, which is described in more detail in a separate article. A minimal sketch follows below.
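For reference, a minimal sketch of a clustered HTTP server; in v0.12, round-robin (cluster.SCHED_RR) is the default on non-Windows platforms, so setting the scheduling policy explicitly here is only for illustration, and SCHED_NONE restores the old behaviour of letting the operating system distribute connections:

    var cluster = require('cluster');
    var http = require('http');
    var os = require('os');

    // Explicit for illustration; this is already the v0.12 default on non-Windows.
    cluster.schedulingPolicy = cluster.SCHED_RR;

    if (cluster.isMaster) {
      for (var i = 0; i < os.cpus().length; i++) cluster.fork();
    } else {
      http.createServer(function (req, res) {
        res.end('handled by worker ' + cluster.worker.id + '\n');
      }).listen(8000);
    }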

Faster timers, faster setImmediate(), faster process.nextTick()

setTimeout() and its friends now use a time source that is not only faster but also immune to clock drift. This optimization is enabled on all platforms, but on Linux we go one step further and read the current time directly from the vDSO, greatly reducing the number of gettimeofday(2) and clock_gettime(2) system calls.

setImmediate() and process.nextTick() have also gained fast paths for the common case, and it shows in the benchmarks. In other words, although these functions were already fairly fast, they are now even faster.
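As a simple illustration (again, not a benchmark from the original article), a sketch like the following measures raw setImmediate() scheduling throughput:

    var iterations = 1e6;
    var count = 0;
    var start = process.hrtime();

    function tick() {
      if (++count < iterations) {
        setImmediate(tick);
      } else {
        var diff = process.hrtime(start);
        var ms = diff[0] * 1e3 + diff[1] / 1e6;
        console.log('%d setImmediate() callbacks in %s ms', iterations, ms.toFixed(1));
      }
    }

    setImmediate(tick);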

About the author

This article was originally published by Ben Noordhuis on StrongLoop. Ben Noordhuis has worked on the Node.js core since 2010, joining shortly after Ryan Dahl started the project. He spends his time coding, debugging, and benchmarking to improve the Node core. As one of the most prolific Node core developers, Ben has written much of the code in Node.js and libuv. StrongLoop makes it easier to develop APIs in Node, and adds DevOps capabilities such as monitoring, clustering, and support for private registries.
