This article gives a detailed, comprehensive walkthrough of implementing multi-process Node.js with the cluster module. If you need it, read on for reference.
First, a solemn declaration:
Node.js is single-threaded! Asynchronous! Non-blocking!
Node.js is single-threaded! Asynchronous! Non-blocking!
Node.js is single-threaded! Asynchronous! Non-blocking!
The important thing is said three times. Because Node.js was born with these buffs, it has been chased by legions of fans since day one (I'm a loyal one too). Yet the PHP crowd laughs at my dear Node.js: unstable, unreliable, and only able to use a single CPU core.
But the big brother is still the big brother: Node.js added the cluster module back in v0.8, squaring up to PHP directly. Sure, PHP then started copying Node.js and rolled out PHP 7, but a copy only gets you the dregs...
Hahaha ~
Sorry, I was just amusing myself ~ everything above is purely a joke; any resemblance is pure coincidence.
OK ~ now let's properly introduce Node.js multi-process ~
The past and present of cluster
Early on, the cluster module itself was immature and could perform poorly for a number of reasons, which led to the rise of the pm2 package. With pm2 you can start multiple processes and get load balancing:
pm2 start app.js
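(A side note of mine, not in the original: a bare pm2 start launches a single instance; to actually fork multiple processes and load-balance across them, pm2's cluster mode takes an instance count, along these lines:)

pm2 start app.js -i 4    # 4 instances in cluster mode
pm2 start app.js -i max  # one instance per CPU core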
The internals of pm2 and of cluster boil down to the same idea: both wrap a layer around child_process.fork, and child_process.fork in turn wraps the fork() of Unix systems. Since we've come this far, let's look at the official explanation (from the fork(2) man page):
fork() creates a new process by duplicating the calling process. The new process is referred to as the child process. The calling process is referred to as the parent process. The child process and the parent process run in separate memory spaces. At the time of fork() both memory spaces have the same content. Memory writes, file mappings (mmap(2)), and unmappings (munmap(2)) performed by one of the processes do not affect the other.
In other words, fork creates a child process by duplicating the caller: the new process is the child, and the process that calls fork is the parent. The two run in separate memory spaces; at the moment of fork both spaces hold the same content, but after that, memory writes and file mappings performed by one do not affect the other.
What this means for us is that the processes you create can communicate with each other and be managed by the master process.
(Figure: the fork model — a master process duplicating itself into child processes that share a channel with it.)
OK ~ that is only the operating system's model for creating child processes. How does Node.js implement interaction between processes?
Easy — just listen on ports...
Actually, implementing communication is not that hard. The crux is how requests get allocated, and that is a big pitfall for Node.js.
The dark history of Node.js request allocation
A long time ago...
Node.js's master was no god at first; it was just a little eunuch. On every request it would only watch silently while the worker emperors fought each other; whichever worker won would handle the request, and the rest went back to waiting for the next one. So every request could set off a storm. The best-known consequence is the thundering herd problem: CPU usage blows through the roof.
Borrowing a picture to explain it: (figure: master binds the port; all workers receive the same socket fd and fight over connections.)
Here the master only binds the port; it does not process incoming requests, and just passes the socket fd to every forked process. The result: four guys (workers) grabbing at a single request. No need to describe how bloody the scene was.
As mentioned above, cluster is really a layer of encapsulation over child_process, so let's drop down to that layer and implement a multi-process cluster ourselves. First we need the basic usage of two modules: net and child_process.
child_process
This is the core module for Node.js processes. It has several basic methods, but I will only introduce the core ones here: spawn, fork, exec. If you're interested, see the child_process documentation.
child_process.spawn(command[, args][, options])
This method runs the specified program, for example: node app.js.
It is asynchronous and does not take a callback, but we can listen for the result through events on the returned child process. It takes three parameters:
command: the command to execute
args [Array]: arguments passed to the command
options [Object]: options object (environment variables and so on)
OK ~ a simple demo: a trial run of touch spawn.js.
const spawn = require('child_process').spawn;
const touch = spawn('touch', ['spawn.js']);

touch.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

touch.stderr.on('data', (data) => {
  console.log(`stderr: ${data}`);
});

touch.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});
If everything is correct, it should output child process exited with code 0, and a spawn.js file appears in the working directory. Of course, running commands with many parameters this way gets to be a bit of a headache.
So Node.js also wraps this up nicely as exec, which supports a callback function and is easier to follow.
child_process.exec(command, callback(error[, stdout, stderr]))
command: the command to run, for example: rm spawn.js
callback: the callback invoked after the command finishes
const childProcess = require('child_process');
const ls = childProcess.exec('rm spawn.js', function (error, stdout, stderr) {
  if (error) {
    console.log(error.stack);
    console.log('Error code: ' + error.code);
  }
  console.log('Child Process STDOUT: ' + stdout);
});
If all goes well, the spawn.js file gets deleted.
The two methods above simply run commands as processes. Finally (the boss always shows up last), let's look at the fork method.
fork also runs a process: spawn('node', ['app.js']) has the same effect as fork('app.js'). The difference is that when fork starts a child process, it also establishes an IPC channel (a duplex stream) between the two. They exchange information via worker.on('message', fn) and worker.send(...) on the parent side, and process.on('message', fn) and process.send(...) in the child.
child_process.fork(modulePath) // create a child process
worker.on('message', cb) // listen for message events
worker.send(mes) // send a message
Like spawn, it communicates through the returned channel. Here is a demo with two files, master.js and worker.js:
// master.js
const childProcess = require('child_process');
const worker = childProcess.fork('worker.js');
worker.on('message', function (mes) {
  console.log(`from worker, message: ${mes}`);
});
worker.send('this is master');

// worker.js
process.on('message', function (mes) {
  console.log(`from master, message: ${mes}`);
});
process.send('this is worker');
Run node master.js and the following is output:
from master, message: this is master
from worker, message: this is worker
Now we know how to create basic processes with child_process.
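Incidentally, fork is more or less spawn plus that IPC channel. Here is a minimal sketch of my own (not from the original) showing spawn acquiring the same channel by requesting an 'ipc' slot in stdio; it reuses the worker.js above:

const spawn = require('child_process').spawn;

// roughly what fork('worker.js') does: a node child plus an IPC channel
const child = spawn('node', ['worker.js'], {
  stdio: ['inherit', 'inherit', 'inherit', 'ipc'] // the 4th entry opens the channel
});

child.on('message', function (mes) {
  console.log(`from worker, message: ${mes}`);
});
child.send('this is master');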
For details on the net module, refer to the net module documentation.
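Since the simulations below lean on net, here is a tiny refresher of my own (assuming port 8124 is free): net.createServer hands you one socket per connection, and net.connect gives you the client side:

const net = require('net');

// server: called once per incoming connection
const server = net.createServer(function (socket) {
  socket.end('hello from server\r\n');
});

server.listen(8124, function () {
  // client: connect, print the reply, then shut everything down
  const client = net.connect(8124, 'localhost');
  client.on('data', function (data) {
    console.log(data.toString());
    client.end();
    server.close();
  });
});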
OK, now let's officially start simulating the way the Node.js cluster module wires processes together.
The outdated cluster
First, the mechanism the earlier cluster implemented — the same picture as before (all workers sharing the listening fd).
We use net and child_process to simulate it.
// master.js
const net = require('net');
const fork = require('child_process').fork;

var handle = net._createServerHandle('0.0.0.0', 3000);

for (var i = 0; i < 4; i++) {
  fork('./worker').send({}, handle);
}

// worker.js
const net = require('net');

// listen for the handle the master sends over
process.on('message', function (m, handle) {
  start(handle);
});

var buf = 'hello nodejs';
// canned HTTP response
var res = ['HTTP/1.1 200 OK', 'content-length: ' + buf.length].join('\r\n') + '\r\n\r\n' + buf;

function start(server) {
  server.listen();
  var num = 0;
  // connection listener
  server.onconnection = function (err, handle) {
    num++;
    console.log(`worker[${process.pid}]: ${num}`);
    // wrap the raw handle in a socket
    var socket = new net.Socket({ handle: handle });
    socket.readable = socket.writable = true;
    socket.end(res);
  };
}
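One caveat from me: net._createServerHandle is an internal, undocumented API (the leading underscore is the hint), so it's fine for simulating cluster's plumbing here, but don't rely on it in production code.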
OK ~ run it first: node master.js.
Then use the test tool siege.
siege -c 100 -r 2 http://localhost:3000
OK. Let's see if the load is balanced at this time.
worker[1182]: 52
worker[1183]: 42
worker[1184]: 90
worker[1181]: 16
You can see that letting workers compete for requests is really inefficient: every incoming request can trigger a thundering-herd event. So cluster changed its model and let the master control the allocation of requests. The official algorithm is round-robin.
Cluster
This is the actual implementation model.
The master controls the handing out of requests: it listens on the port, and for each connection it obtains, passes the socket on to a child process.
Code demo borrowed from tj:
// master.js
const net = require('net');
const fork = require('child_process').fork;

var workers = [];
for (var i = 0; i < 4; i++) {
  workers.push(fork('./worker'));
}

var handle = net._createServerHandle('0.0.0.0', 3000);
handle.listen();

// the connection listener now lives in the master
handle.onconnection = function (err, handle) {
  var worker = workers.pop(); // take a worker from the back
  worker.send({}, handle);
  workers.unshift(worker); // put it back at the front
};

// worker.js
const net = require('net');

process.on('message', function (m, handle) {
  start(handle);
});

var buf = 'hello Node.js';
var res = ['HTTP/1.1 200 OK', 'content-length: ' + buf.length].join('\r\n') + '\r\n\r\n' + buf;

function start(handle) {
  console.log('got a connection on worker, pid = %d', process.pid);
  var socket = new net.Socket({ handle: handle });
  socket.readable = socket.writable = true;
  socket.end(res);
}
Now the master controls the whole show: when a connection arrives, it pops a worker off the back of the queue, hands it the connection, and unshifts it back onto the front — that pop/unshift pair cycles through the workers, which is exactly round-robin. In a real app, the handler in the middle would run your concrete business logic, e.g. app.js.
OK ~ now let's look at how to implement multi-process with the cluster module itself.
Implementing multi-process with the cluster module
Today's cluster achieves load balancing out of the box — the mechanism is the one elaborated above. Let's look at the concrete usage.
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  console.log('[master] ' + 'start master...');
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  cluster.on('listening', function (worker, address) {
    console.log('[master] ' + 'listening: worker' + worker.id + ', pid: ' + worker.process.pid + ', Address: ' + address.address + ':' + address.port);
  });
} else if (cluster.isWorker) {
  console.log('[worker] ' + 'start worker ...' + cluster.worker.id);
  var num = 0;
  http.createServer(function (req, res) {
    num++;
    console.log('worker' + cluster.worker.id + ': ' + num);
    res.end('worker' + cluster.worker.id + ', PID: ' + process.pid);
  }).listen(3000);
}
The http module is used here; it could just as well be replaced with a raw socket via net. But writing the cluster bootstrap and the server in one file muddles the two together, so it's recommended to split out the concrete business logic:
// master.js
var cluster = require('cluster');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  console.log('[master] ' + 'start master...');
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  cluster.on('listening', function (worker, address) {
    console.log('[master] ' + 'listening: worker' + worker.id + ', pid: ' + worker.process.pid + ', Address: ' + address.address + ':' + address.port);
  });
} else if (cluster.isWorker) {
  require('./app.js'); // app.js carries the concrete business logic
}

// app.js — the concrete business logic
const net = require('net');

// create the server; one callback per connection
const server = net.createServer(function (socket) {
  // 'connection' listener
  socket.on('end', function () {
    console.log('server disconnected');
  });
  socket.on('data', function () {
    socket.end('hello\r\n');
  });
});

// open the port
server.listen(8124, function () {
  // 'listening' listener
  console.log('working');
});
Next we start the service: node master.js
Then perform the test
siege -c 100 -r 2 http://localhost:8124
Note that persistent (keep-alive) connections are in play here, and each worker can only hold a limited number of them; when an extra connection arrives, the worker drops a connection it is not currently responding to and lets it retry.
In practice, though, we usually serve HTTP with short connections so that highly concurrent requests can be turned over quickly.
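For reference, here is one way app.js might look after the switch — my own sketch, not the article's exact code: sending Connection: close disables keep-alive so each socket serves a single request:

// app.js — HTTP short-connection variant (hypothetical)
const http = require('http');

http.createServer(function (req, res) {
  res.setHeader('Connection', 'close'); // no keep-alive: one request per socket
  res.end('hello\r\n');
}).listen(8124, function () {
  console.log('working');
});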
Here is the result after I switched to HTTP short connections:
Transactions:              200 hits
Availability:              100.00 %
Elapsed time:              2.09 secs
Data transferred:          0.00 MB
Response time:             0.02 secs
Transaction rate:          95.69 trans/sec
Throughput:                0.00 MB/sec
Concurrency:               1.74
Successful transactions:   200
Failed transactions:       0
Longest transaction:       0.05
Shortest transaction:      0.02
So then, how do we simulate heavy concurrency?
Er...
Work it out yourself ~
Kidding ~ otherwise why would I write a blog? Knowledge is for spreading.
Before introducing the tool, let me go over a few basic performance concepts:
QPS (TPS), concurrency, response time, throughput rate, throughput volume
Your damn performance-testing theory
Ever since we front-enders got entangled with servers, there has been a lot more performance testing in our lives, and it's knowledge we now have to own. Originally a front-end baby only had to glance at the console to see whether the page ran smoothly, maybe check Timeline and Profile. But for kids with ambition, who want to change the world...
Damn ~ I want to learn more...
OK ~ before entering the subject, here is another test result, this time from a live site:
Transactions:              200 hits
Availability:              100.00 %
Elapsed time:              13.46 secs
Data transferred:          0.15 MB
Response time:             3.64 secs
Transaction rate:          14.86 trans/sec
Throughput:                0.01 MB/sec
Concurrency:               54.15
Successful transactions:   200
Failed transactions:       0
Longest transaction:       11.27
Shortest transaction:      0.01
From data like the above, we can work out roughly how a page performs.
Mm ~ let's begin.
Throughput rate
Throughput rate has more than one interpretation. One describes the web server's ability to process requests per unit time, in which case the unit is req/sec. The other is the volume of data transferred over the network per unit time, in which case the unit is MB/sec.
This metric is the Throughput line in the data above. Naturally, the bigger the better.
Throughput volume
This is related to the throughput rate above: throughput volume is the total amount of data transferred during one test, with no time component attached. Which is why any test quoted without a time window is hooliganism.
It corresponds to the Data transferred line in the data above.
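Quick sanity check on the live-site numbers above: 0.15 MB transferred over 13.46 secs gives 0.15 / 13.46 ≈ 0.011 MB/sec, which matches the reported Throughput of 0.01 MB/sec.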
Transactions & TPS
Anyone familiar with databases knows the concept of a transaction comes up constantly there. In a database, a transaction stands for a specific, complete process and its result — say, I want the ranking of final scores for a math exam: that single transaction maps to pulling the scores from the database, taking the averages needed, and returning the final ordering. As you can see, a transaction is rarely one bare operation; it is one or more operations combined into something meaningful. So how does this map onto front-end testing? Remember that front-end network communication follows the request-response model, so every request can be understood as one transaction (trans).
So TPS (transactions per second) is the number of requests the system can process within one second. Its unit is trans/sec, which you can equally read as req/sec.
TPS, then, is the metric that gauges a system's maximum processing capacity.
The TPS formula is: Transactions / Elapsed time.
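Checking it against the two runs above: the short-connection run gives 200 / 2.09 ≈ 95.7 trans/sec (reported: 95.69), and the live-site run gives 200 / 13.46 ≈ 14.86 trans/sec (reported: 14.86). The formula holds.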
That said, nothing is absolute; keep the caveats in mind as we go.
Concurrency
That is, the number of connections the server is handling simultaneously. The official siege explanation:
Concurrency is average number of simultaneous connections, a number which rises as server performance decreases.
So we can read Concurrency as a gauge of how loaded the system is: the higher it climbs, the more the system is carrying, and the lower the performance.
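A handy cross-check of mine (essentially Little's law): Concurrency ≈ Transaction rate × Response time. For the live-site run, 14.86 trans/sec × 3.64 secs ≈ 54.1, right next to the reported 54.15; for the short-connection run, 95.69 × 0.02 ≈ 1.9, near the reported 1.74 (the response time is rounded).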
OK ~ but how do we use this data to settle on our concurrency strategy? Er...
Of course, one or two test runs are close to useless; in practice we run many tests and plot the results. Some large companies have long had complete systems that locate your web server's bottleneck and produce the optimal concurrency strategy.
Enough chatter — let's see how to analyze our way to a better concurrency strategy.
Exploring a concurrency strategy
First we need to distinguish two kinds of concurrency: the number of concurrent requests and the number of concurrent users. The two place completely different demands on a server.
Suppose 100 users each make 10 simultaneous requests to the server, versus 1 user making 1000 consecutive requests. Is the result the same?
When one user makes 1000 consecutive requests, at any moment the server's NIC buffer holds only that user's single request; when 100 users each fire 10 simultaneous requests, the NIC buffer holds up to 100 requests waiting to be processed. Obviously the server is under far more pressure in the second case.
So the number of concurrent users and throughput mentioned above are completely different.
In general, though, we pay more attention to Concurrency (the number of concurrent users), because it better reflects what the system can take. We therefore usually cap the number of concurrent users, e.g. via Apache's MaxClients parameter.
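Node has a similar knob, for what it's worth — a net/http server's maxConnections property caps simultaneous connections (a small sketch of mine, not from the original):

const http = require('http');

const server = http.createServer(function (req, res) {
  res.end('ok');
});

server.maxConnections = 150; // connections beyond this count are rejected
server.listen(8124);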
OK ~ Let's analyze the instance.
First we collect a batch of test data, then we analyze it. The figure below is drawn from the relationship between concurrency and throughput rate. (Figure: throughput rate vs. number of concurrent users.)
We find that from roughly 130 concurrent users onward, the throughput rate starts to fall, and the harder we push, the steeper the drop. Before that point, throughput climbs as users increase, because idle system resources are being put to full use; but like any such curve there is always a peak, and once we reach it we have found the system's bottleneck.
Next we refine the analysis with the relationship between response time and the number of concurrent users. (Figure: response time vs. concurrent users.)
In the same vein, once concurrency reaches about 130, the response time of each request starts to climb and jitters harder, echoing the throughput curve. So we can conclude the number of concurrent connections is best kept in the 100–150 range. This analysis is admittedly superficial, but it's plenty for us front-end babies.
Next, we use tools to arm our minds.
Here we mainly introduce a testing tool, siege.
Concurrent testing tools
There are really three main concurrency-testing tools: siege, ab, and webbench. I'm not covering webbench here because when I tried to install it, the damn thing nearly wrecked my computer (my MacBook Pro)... though clever me managed to recover it ~ so if some great god has installed it successfully on macOS (X11), please take this younger brother under your wing.
OK ~ rant over. Let's talk about siege.
Siege
To install siege, use that Mac artifact Homebrew, which is to macOS what npm is to the front-end JS world.
Installing:
brew install siege
Successfully installed — bingo.
Next, let's look at the syntax.
-c NUM: the number of concurrent users, e.g. -c 100
-r NUM: the number of rounds of requests, so the total number of requests is -c NUM × -r NUM, e.g. -r 20 (note that -r cannot be combined with -t — guess why); see the worked example after this list
-t NUM: the duration of the test; when the time is up, the run finishes
-f FILE: read the URLs to test from a file, e.g. -f girls.txt
-b: run as a benchmark (no delay between requests); not a crucial flag — dig deeper if you're interested
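As promised above, the worked example: siege -c 100 -r 2 issues 100 × 2 = 200 requests in total — exactly the Transactions: 200 hits in the reports earlier.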
I will not introduce-c-r. If you are interested, refer to the previous article about network upgrade. Here we will mainly introduce the-f parameter.
Generally, if you want to test multiple pages, you create a file listing all the URLs you want to hit.
For example:
// file name: urls.txt
www.example.com
www.example.org
123.45.67.89
Then run the test
siege -f your/file/path.txt -c 100 -t 10s
OK ~ that wraps up the process and the test content.