Express development and deployment best practices

Source: Internet
Author: User
Tags: haproxy

This article is translated from the Express official website (source: Express best practices). It discusses, from both a development and an operations perspective, how to improve the performance of your Express applications and the best ways to deploy them.

Issues developers should be aware of

For an Express application, the following practices generally improve its efficiency and response times:

    1. Use gzip compression
    2. Do not use synchronous functions in your code
    3. Serve static files through middleware
    4. Handle logging sensibly
    5. Handle exceptions correctly

Below we discuss each item in turn.

1. Using gzip compression

Using gzip compression can significantly reduce the size of the response body and therefore speed things up for the client; we can enable it with the compression middleware. For a high-traffic site, however, the best approach is to enable compression on the reverse proxy instead. You can refer to another article of mine that describes how to configure Nginx to handle compression and static files. In that case we do not need to call the compression middleware in our code at all; Nginx will do this work for us.

    var compression = require('compression');
    var express = require('express');
    var app = express();
    app.use(compression());

2. Do not use synchronous functions

We know that the Node main process runs JavaScript on a single thread (asynchronous I/O is handled behind the scenes by a thread pool). If we call synchronous functions on the main thread and they take a long time, every subsequent request has to wait, which shows up on the web side as increased latency for other users. So in a production environment, even a call that returns within a few microseconds creates a cumulative effect under heavy traffic. Write your code asynchronously wherever possible.

If you use Node.js 4.0+ or io.js 2.1.0+, you can pass the --trace-sync-io flag to print a warning whenever a synchronous API is used.

3. Use middleware to serve static files

We sometimes call res.sendFile() to serve static files, but this should not be done in a production environment: it reads the file from disk for every request, which is not only inefficient but also hurts overall performance. Use the serve-static middleware instead. Better still, as recommended above, let a reverse proxy such as Nginx serve static files.

4. Handle logging sensibly

During development we often add markers or debug output via console.log or console.error. But these functions are synchronous when the destination is a terminal or a file, so do not use them in production unless you are piping the output to another program. We can use the debug module for debug output instead: it checks the DEBUG environment variable and only emits output when debugging is enabled, keeping your program asynchronous. For application logging, see the comparison article on the Winston and Bunyan logging libraries.
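
The debug module itself is not shown here; the following stdlib-only sketch illustrates the same idea of gating output on the DEBUG environment variable (the namespace name myapp is illustrative):

```javascript
// Returns true when the given namespace is listed in DEBUG,
// e.g. DEBUG=myapp or DEBUG=myapp,db.
function isDebugEnabled(namespace) {
  return (process.env.DEBUG || '').split(',').indexOf(namespace) !== -1;
}

// Minimal debug-style logger: writes to stderr only when enabled.
function makeDebug(namespace) {
  return function (msg) {
    if (isDebugEnabled(namespace)) {
      process.stderr.write(namespace + ' ' + msg + '\n');
    }
  };
}

var debug = makeDebug('myapp');
debug('server starting'); // printed only when DEBUG includes "myapp"
```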

5. Correct handling of exceptions

First of all, when a Node program hits an unhandled exception, the whole process goes down. If we have configured a process management tool such as PM2 or forever, it will restart the program automatically after such a failure.

For handling exceptions in code we generally use two methods:

    1. Using try-catch
    2. Using promises

There is an article that goes into more detail on how to build a robust program that handles errors and exceptions; see the linked address for reference.

Do not use uncaughtException to handle all exceptions. Although it can keep your program from being interrupted to a certain extent, the program continues running with code in an unstable state, which can cause more serious consequences in production. It has even been suggested that this error-handling mechanism be removed from the Node core.
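
If you do attach a last-resort handler at all, the conventional safe pattern is to log the error and exit, so that the process manager restarts a clean instance; a sketch:

```javascript
// Last-resort handler: log and exit rather than keep running in an
// unknown state; the process manager (PM2, forever, ...) restarts us.
process.on('uncaughtException', function (err) {
  console.error('fatal uncaught exception:', err);
  process.exit(1);
});
```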

Likewise, do not use domains to handle errors; that module has been deprecated and marked for removal.

Using try-catch is a relatively simple way to handle errors, as in the following code:

    app.get('/search', function (req, res) {
      // simulating async operation
      setImmediate(function () {
        var jsonStr = req.query.params;
        try {
          var jsonObj = JSON.parse(jsonStr);
          res.send('Success');
        } catch (e) {
          res.status(400).send('Invalid JSON string');
        }
      });
    });

But we know that try-catch only works for synchronous code; it cannot catch errors thrown in asynchronous callbacks. For exception handling in asynchronous code we can use promises instead: a single .catch() at the end of the chain captures any exception thrown anywhere in the chain.

    app.get('/', function (req, res, next) {
      // do some sync stuff
      queryDb()
        .then(function (data) {
          // handle data
          return makeCsv(data);
        })
        .then(function (csv) {
          // handle csv
        })
        .catch(next);
    });

    app.use(function (err, req, res, next) {
      // handle error
    });

Of course, every block of asynchronous code must return a promise for this to work. For more information, see the linked article "Asynchronous Error Handling in Express with Promises, Generators and ES7".

Production installation and deployment

The following discusses the issues to note when installing and deploying an Express application in a production environment.

    1. Set NODE_ENV to "production"
    2. Ensure your app restarts automatically
    3. Run your app in a cluster
    4. Cache request results
    5. Use load balancing
    6. Use a reverse proxy

1. Set the run environment variable

In general we use two values for the Node environment variable: development and production. Setting the environment variable to production makes an Express application:

    1. Cache view templates
    2. Cache CSS files generated from CSS extensions
    3. Generate less verbose error messages

In addition, if you are interested, see the linked article on testing this environment variable, in which the author did some very detailed performance comparisons before and after setting it.
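
In application code, the value can be read from process.env; a minimal sketch (the fallback to development mirrors Express's own default when NODE_ENV is unset):

```javascript
// Express defaults to 'development' when NODE_ENV is not set.
function currentEnv() {
  return process.env.NODE_ENV || 'development';
}

var isProduction = currentEnv() === 'production';
```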

If we use Upstart to manage the application, we add the environment variable to its configuration file:

    # /etc/init/env.conf
    env NODE_ENV=production

If you are using systemd, modify the configuration file as follows:

    # /etc/systemd/system/myservice.service
    Environment=NODE_ENV=production

2. Ensure automatic restart

Automatic restart here means not only restarting the program after it terminates abnormally, but also making sure it starts again after the operating system reboots. We describe both situations below.

    1. Using a Process Manager

A process manager generally helps us monitor the running process's performance and resource consumption, modify configuration dynamically to improve performance, and control clusters. Commonly used options are the StrongLoop Process Manager, PM2, and forever; for details see the process manager comparison linked below. From that comparison we can see that the StrongLoop Process Manager supports the richest set of features, in particular viewing CPU consumption, integrating with operating-system startup scripts, managing clusters remotely, and more.

    2. Starting the program when the system boots

To start the program with the system, we can use the process managers above (forever does not appear to support this). They can generate the corresponding startup script, so that when the operating system boots, the process manager starts and in turn starts the program. Alternatively, we can configure systemd (or a similar facility) directly to manage the process at boot. Here we briefly introduce how to start a program at boot using systemd. systemd is a service manager for Linux systems; a systemd configuration file is called a unit file and uses the .service suffix.

    [Unit]
    Description=Awesome Express App

    [Service]
    Type=simple
    ExecStart=/usr/local/bin/node /projects/myapp/index.js
    WorkingDirectory=/projects/myapp
    User=nobody
    Group=nogroup
    # Environment variables:
    Environment=NODE_ENV=production
    # Allow many incoming connections
    LimitNOFILE=infinity
    # Allow core dumps for debugging
    LimitCORE=infinity
    StandardInput=null
    StandardOutput=syslog
    StandardError=syslog
    Restart=always

    [Install]
    WantedBy=multi-user.target

3. Run the app in a cluster

On a multi-core machine, an application can run on different cores by starting multiple instances with the cluster module, which also load-balances requests across those instances. However, because the instances have isolated memory spaces, all program objects are local to each instance and cannot be shared; we can use tools such as Redis to share state between them. The termination of one process does not affect the handling done by the others; we only need to write code so that when a worker dies, a new instance is spawned.

We can use Node's cluster module (which requires code in the application) or the StrongLoop Process Manager, which needs no code changes: StrongLoop PM automatically spawns one process per CPU, and you can adjust this number manually.

4. Caching requests

With caching, you can greatly increase response speed for repeated requests without redoing the work each time. We can configure this with Nginx's caching configuration.

5. Using Load Balancing

A single Express process, no matter how well optimized, cannot meet high performance requirements, especially for a web application with many users. We can scale the application horizontally with a load balancer, for example Nginx or HAProxy. When using load balancing, we may need to make sure that every request associated with a given session ID lands on the same process (sticky sessions). See the linked article on configuring load balancing for Socket.io.

In addition, StrongLoop PM integrates well with Nginx for load balancing.

6. Reverse Proxy Service

A reverse proxy server generally sits at the request entry point and handles error pages, compression, caching, static file serving, load balancing, and so on. See the Nginx or HAProxy configuration documentation for how to set up a reverse proxy.

