Preface
We all know that the defining features of Node.js are its single-process, non-blocking operation and its asynchronous, event-driven model. These features solve certain problems well. In server development, for example, handling concurrent requests is a major challenge, and blocking functions waste resources and add latency. By registering events and using asynchronous functions, developers can improve resource utilization and performance. Since Node.js adopts a single-process, single-thread model, how can it, with its excellent single-core performance, make use of the multi-core CPUs that are now standard? Founder Ryan Dahl suggested running multiple Node.js processes and using some communication mechanism to coordinate work among them. Many third-party modules for Node.js multi-process support have been released, and Node.js 0.6.x and later provide the cluster module, which lets you create a group of processes that share the same socket and thereby share the load. This article describes Node.js programming on multi-core CPUs based on the cluster module.
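To make the non-blocking, event-driven style concrete, here is a minimal sketch of my own (not taken from the article): an HTTP server that reads a file asynchronously, so the single process stays free to accept other requests while the read is in flight. The file path (__filename) and port 8000 are arbitrary choices for illustration.

var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  // Non-blocking: the callback runs when the read completes,
  // instead of the whole process waiting on the disk.
  fs.readFile(__filename, function (err, data) {
    if (err) {
      res.writeHead(500);
      return res.end('read error\n');
    }
    res.writeHead(200);
    res.end(data);
  });
}).listen(8000);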
Cluster module Introduction
The cluster module provided by Node.js is currently experimental. The stability note in the official v0.10.7 documentation reads:
Stability: 1 - Experimental
The official documentation describes the module's purpose as follows: "A single instance of Node runs in a single thread. To take advantage of multi-core systems the user will sometimes want to launch a cluster of Node processes to handle the load." In other words, a Node instance runs as a single process; to fully utilize the resources of a multi-core system, the user sometimes needs to run a group of Node processes to share the load.
Cluster usage
First, here is a sample application using this module; a detailed analysis follows. The code is as follows:
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork one worker per CPU core.
  require('os').cpus().forEach(function () {
    cluster.fork();
  });

  cluster.on('exit', function (worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });

  cluster.on('listening', function (worker, address) {
    console.log('A worker with #' + worker.id + ' is now connected to ' +
      address.address + ':' + address.port);
  });
} else {
  // Worker processes share the same listening port.
  http.createServer(function (req, res) {
    res.writeHead(200);
    res.end('hello world\n');
    console.log('Worker #' + cluster.worker.id + ' make a response');
  }).listen(8000);
}
This code is very simple. The master process is the currently running js file, and it forks one worker process per CPU core on the local machine. All workers share the listening port 8000. When a request arrives, it is handled by whichever worker accepts the connection on the shared socket (as discussed in the next section, this distribution is left to the operating system). The line console.log('Worker #' + cluster.worker.id + ' make a response'); prints which worker handled the request.
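To watch this distribution in practice, one rough approach of mine (not part of the original article) is to fire a few requests at the shared port and observe which workers print the log line; the host, port 8000, and request count are assumptions matching the example above.

var http = require('http');

// Fire a handful of requests at the shared port; the console.log line on the
// server side shows which worker handled each one.
for (var i = 0; i < 10; i++) {
  http.get({ host: '127.0.0.1', port: 8000, path: '/' }, function (res) {
    res.resume(); // drain the response; we only care which worker served it
  });
}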
Problem Analysis
When a connection arrives, the operating system decides which process will handle it. Relying entirely on the system for load balancing has an important defect: on Windows, Linux, and Solaris, as long as a sub-process's accept queue is empty (which is usually the case for a newly created sub-process), the system tends to assign multiple connections to that same sub-process, causing a severe load imbalance among processes. This is especially true with persistent (keep-alive) connections: the number of new connections per unit of time is low, the accept queues of the sub-processes are often empty, and new connections keep being assigned to the same process. A load-balancing scheme that depends entirely on how idle the accept queues are can only balance the load when short connections are used and concurrency is very high; at that point, however, the system load is already very high and the system becomes unstable.
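One way to observe this imbalance is to have each worker report back to the master on every request it serves, and have the master print per-worker counts. The following is a rough sketch of mine, not code from the article; the message format ({ cmd: 'served' }) and the 5-second reporting interval are arbitrary choices.

var cluster = require('cluster');
var http = require('http');

if (cluster.isMaster) {
  var counts = {};
  require('os').cpus().forEach(function () {
    var worker = cluster.fork();
    counts[worker.id] = 0;
    // Each worker reports back once per request it serves.
    worker.on('message', function (msg) {
      if (msg && msg.cmd === 'served') {
        counts[worker.id]++;
      }
    });
  });
  // Print the per-worker distribution every few seconds.
  setInterval(function () {
    console.log('requests per worker:', JSON.stringify(counts));
  }, 5000);
} else {
  http.createServer(function (req, res) {
    process.send({ cmd: 'served' });
    res.writeHead(200);
    res.end('hello world\n');
  }).listen(8000);
}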
Postscript
Going forward, I will continue to study multi-process development under Node.js and share what I learn.