CPU-intensive tasks: the soft spot of Node.js


The official Node.js site describes it as "a platform built on Chrome's JavaScript runtime for building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices." Web sites have long since stopped being limited to rendering content; more and more interactive and collaborative environments have moved onto the Web, and the demand keeps growing. These are the so-called data-intensive real-time applications, such as online collaborative whiteboards and multiplayer online games. They need a platform that can respond to a large number of concurrent user requests in real time, and that is exactly the area Node.js is good at.

Handling I/O-intensive tasks with Node.js is fairly straightforward: just call the asynchronous, non-blocking functions it provides. But a data-intensive real-time application is not made up of I/O-intensive tasks alone. What do you do when you run into CPU-intensive tasks, such as encrypting or decrypting data (node.bcrypt.js), compressing and decompressing data (node-tar), or doing some personalized image processing based on the user's identity? Let's first look at Node.js's own programming model.
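To make the problem concrete, here is a minimal sketch (the parameter values below are illustrative assumptions, not taken from the article) that contrasts a synchronous CPU-heavy call with its asynchronous counterpart using Node's built-in crypto module. The synchronous pbkdf2Sync call occupies the single JavaScript thread for its whole duration, while the asynchronous pbkdf2 hands the hashing work to libuv's thread pool and leaves the main thread free:

const crypto = require('crypto');

// Synchronous version: blocks the event loop until hashing finishes.
// While this runs, no other request can be served.
const key1 = crypto.pbkdf2Sync('secret', 'salt', 100000, 64, 'sha512');
console.log('sync done:', key1.length, 'bytes');

// Asynchronous version: the hashing runs on libuv's thread pool;
// the main thread keeps looping and the callback fires when it is done.
crypto.pbkdf2('secret', 'salt', 100000, 64, 'sha512', (err, key2) => {
  if (err) throw err;
  console.log('async done:', key2.length, 'bytes');
});

console.log('main thread is free to handle other events');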

The innate characteristics of Node.js

Network programming strategies

The famous C10K problem was raised in the 1990s: once the number of concurrent users exceeds 10,000, many poorly designed web servers see their performance drop dramatically or even collapse. Upgrading the hardware does not help, because the root of the problem is the strategy the system uses to handle requests; with the wrong strategy, no amount of hardware resources can actually be put to use. People later summarized four typical network programming strategies:

1. The server assigns a thread (or process) to each client request and uses blocking I/O. This is the strategy Java uses, as does Apache, and it is the first choice for many interactive applications. Because of the blocking, this strategy has trouble reaching high performance, but it is very simple and can implement complex interactive logic.

2. The server handles all client requests with a single thread, using non-blocking I/O and an event mechanism. This is the strategy Node.js uses. It is simple to implement and easy to port, and it provides sufficient performance, but it cannot make full use of multi-core CPU resources.

3. The server uses multiple threads, but each thread handles the requests of only one group of clients, again with non-blocking I/O and an event mechanism. This is a simple improvement on the second strategy, but it is prone to bugs from multithreaded concurrency.

4. The server uses multiple threads, each handling the requests of one group of clients, with asynchronous I/O. This strategy performs very well on operating systems that support asynchronous I/O, but it is hard to implement and is used mainly on Windows.

Most web sites do not do much computation on the server side: they receive a request, hand it off to other services (such as the file system or a database), wait for the result, and return it to the client. So Node.js sensibly chose the second strategy: it does not spawn a thread for each incoming request, but uses one main thread to handle all requests, avoiding the overhead and complexity of creating, destroying, and switching between threads. That main thread runs a very fast event loop that receives requests, hands off operations that need lengthy processing, and then goes back to receiving new requests to serve other users. The request processing flow of a Node.js program works as follows.

After the event loop on the main thread receives a client request, it hands the request object, the response object, and a callback function to the function registered for that request. That function can submit long-running I/O or native API calls to the internal thread pool; once a worker thread in the pool finishes, it returns the result to the main thread through the callback, and the main thread sends the response back to the client. So how does the event loop implement this process? Thanks to the V8 engine and libuv, on which the Node.js platform is built.
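As a rough illustration of that flow (the file path and port below are made-up values for this sketch, not from the article), here is a minimal HTTP server: the main thread registers a handler, the file read is carried out by libuv's thread pool, and the callback that writes the response runs back on the main thread.

const http = require('http');
const fs = require('fs');

const server = http.createServer((req, res) => {
  // The event loop hands the read to libuv; a thread-pool worker does the I/O.
  fs.readFile('./data.json', (err, contents) => {
    // This callback runs back on the main thread once the worker is done.
    if (err) {
      res.writeHead(500);
      res.end('read failed');
      return;
    }
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(contents);
  });
  // Meanwhile the main thread has already returned to the event loop
  // and can accept other connections.
});

server.listen(3000);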

Event Loop and tick

The main thread of every Node program runs an event loop, and all the JavaScript code runs on this single thread. All I/O operations and native API calls are either asynchronous (with the help of the operating system) or run on other threads; libuv takes care of all of this. So when data arrives on a socket, or a native API function returns, there has to be some synchronized way of invoking the JavaScript function that is interested in the particular event that just happened.

It is not safe to call the JS function directly on the thread where the event occurred, because that would run into the usual problems of multithreaded programs: race conditions, non-atomic memory access, and so on. So the event has to be put into a queue in a thread-safe way; written as code, it looks roughly like this:

lock (queue) {
    queue.push(event);
}

Then, on the main thread that executes JavaScript (the C code of the event loop):

while (true) {
    // tick start

    lock (queue) {
        // copy the entries in the current queue into this thread's own memory
        var tickEvents = copy(queue);
        queue.empty();  // empty the shared queue
    }

    for (var i = 0; i < tickEvents.length; i++) {
        invokeJSFunction(tickEvents[i]);
    }

    // tick end
}
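Each tick therefore drains the queue and runs the queued JavaScript callbacks one after another on the single main thread. That is exactly why CPU-intensive work hurts: a callback that computes for a long time stretches its tick and delays every callback queued behind it. The following sketch (the fib function and the timer interval are illustrative assumptions) makes that delay visible:

// A deliberately slow, purely synchronous computation.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

const start = Date.now();

// This timer should fire after roughly 10 ms...
setTimeout(() => {
  console.log('timer fired after', Date.now() - start, 'ms');
}, 10);

// ...but the current tick keeps the main thread busy for much longer,
// so the timer callback cannot run until the synchronous work finishes.
console.log('fib(40) =', fib(40));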
