Configure Nginx as a static resource server.

Source: Internet
Author: User
Tags: sendfile, website performance

1. root directory and index file

The root directive specifies the directory in which files are searched for. To obtain the path of the requested file, Nginx appends the request URI to the path specified by the root directive. The directive can be placed in the http, server, or location context.

In the following example, the root directive is defined in the server context. All location blocks that do not redefine root inherit the directory specified at the server level:

server {
    root /www/data;

    location / {
    }

    location /images/ {
    }

    location ~ \.(mp3|mp4)$ {
        root /www/media;
    }
}

If a URI ends with the .mp3 or .mp4 suffix, Nginx searches for the file in the /www/media directory; otherwise, it searches in the /www/data directory.

If a request ends with a slash (/), Nginx treats it as a request for a directory and tries to find an index file in that directory. The index directive defines the name of the index file (index.html is used by default). For example, with the configuration above, for the request /images/some/path/ Nginx tries to find and return the file /www/data/images/some/path/index.html; if the file does not exist, Nginx returns 404.
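The index lookup described above can be made explicit in the configuration. A minimal sketch (index.html is already the default, so the index line here is purely illustrative):

```nginx
server {
    root /www/data;

    location /images/ {
        # For a request /images/some/path/, Nginx looks for
        # /www/data/images/some/path/index.html and returns
        # 404 if it does not exist.
        index index.html;
    }
}
```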

If the autoindex directive is set to on, Nginx instead returns an automatically generated directory listing:

location /images/ {
    autoindex on;
}

The index directive can list multiple files. Nginx searches for the files in the specified order and returns the first one it finds.

location / {
    index index.$geo.html index.htm index.html;
}

The $geo variable used here is a custom variable set through the geo directive. Its value is determined by the client's IP address.
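The geo directive (placed in the http context) maps client address ranges to variable values. A minimal sketch of how $geo might be defined; the address ranges and values below are hypothetical:

```nginx
# Hypothetical mapping from client IP ranges to a value used in the
# index file name; e.g. $geo = "office" makes the index directive
# above look for index.office.html first.
geo $geo {
    default        default;
    192.168.1.0/24 office;
    10.0.0.0/8     vpn;
}
```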

To return the index file, Nginx first checks whether it exists, then appends the index file name to the request URI to form a new URI, and finally performs an internal redirect to the new URI. The internal redirect causes a new location search and may end up in a different location, as in the following example:

location / {
    root /data;
    index index.html index.php;
}

location ~ \.php {
    fastcgi_pass localhost:8000;
    ...
}

If the request URI is /path/, and the file /data/path/index.html does not exist but /data/path/index.php does, the internal redirect to /path/index.php is matched by the second location, and the request is processed by the FastCGI server specified by fastcgi_pass.

2. Check whether a file exists (the try_files directive)

The try_files directive checks whether a specified file or directory exists; if it does not, Nginx performs an internal redirect or returns a specified HTTP status code.

For example, use the try_files directive with the $uri variable to check whether a file corresponding to the request URI exists:

server {
    root /www/data;

    location /images/ {
        try_files $uri /images/default.gif;
    }
}

The file is specified as a URI, which is processed with the root or alias directive set in the context of the current location or server. If the file named by the original URI does not exist, Nginx internally redirects to the URI given by the last parameter and returns /www/data/images/default.gif.

The last parameter can also be a status code (prefixed with an equals sign) or a named location. In the following example, a 404 error is returned if none of the files or directories specified in the try_files directive exist:

location / {
    try_files $uri $uri/ $uri.html =404;
}

In the following example, if neither the original URI nor the URI with a trailing slash resolves to an existing file or directory, the request is redirected to the named location:

location / {
    try_files $uri $uri/ @backend;
}

location @backend {
    proxy_pass https://backend.example.com;
}

For more information about how Nginx caching can improve website performance, see Content Caching.

3. Optimize Nginx (Optimizing NGINX Speed for Serving Content)

Loading speed is a key metric for any server. Small optimizations to the Nginx configuration can noticeably improve throughput and help achieve optimal performance.

Enable the sendfile directive

By default, Nginx handles file transmission itself, copying the file into a buffer before sending it. Enabling the sendfile directive eliminates this copy step and allows data to be copied directly from one file descriptor to another. To prevent a single fast connection from monopolizing a worker process, use the sendfile_max_chunk directive to limit the amount of data transferred in one sendfile call:

location /mp3 {
    sendfile           on;
    sendfile_max_chunk 1m;
    ...
}
Enable the tcp_nopush directive

The tcp_nopush directive should be used together with the sendfile directive.

When tcp_nopush is enabled together with sendfile, Nginx sends the HTTP response header in one packet immediately after obtaining a data block through sendfile.

location /mp3 {
    sendfile   on;
    tcp_nopush on;
    ...
}
Enable the tcp_nodelay directive

The tcp_nodelay directive allows Nginx to override Nagle's algorithm, which was originally designed to solve the problem of small packets on slow networks. The algorithm combines many small packets into larger ones and can delay sending by up to 200 milliseconds. Today, when serving large static files, data can be sent immediately regardless of packet size, and the delay also hurts interactive applications (SSH, online games, online trading). By default the tcp_nodelay directive is set to on, which disables Nagle's algorithm. The directive applies only to keepalive connections:

location /mp3 {
    tcp_nodelay       on;
    keepalive_timeout 65;
    ...
}
Optimize the Backlog Queue

An important metric is how fast Nginx can process incoming connections. When a connection is established, it is placed in the "listen" queue of the listening socket. Under normal load, the queue is short or empty. Under high load, however, the queue can grow dramatically, which may result in degraded performance, dropped connections, and increased latency.

Measure the Listen Queue (Measuring the Listen Queue)

Run the following command to measure the listen queue (on Linux, netstat does not support the -L option; use the ss -l command instead):

netstat -Lan

The output is as follows:

Current listen queue sizes (qlen/incqlen/maxqlen)
Listen          Local Address
0/0/128         *.12345
10/0/128        *.80
0/0/128         *.8080

The output above shows 10 unaccepted connections in the listen queue on port 80, against a configured maximum of 128 connections. This is normal.

However, if the output is as follows:

Current listen queue sizes (qlen/incqlen/maxqlen)
Listen          Local Address
0/0/128         *.12345
192/0/128       *.80
0/0/128         *.8080

This output shows 192 unaccepted connections, exceeding the limit of 128. This is common when a website experiences heavy traffic. To achieve optimal performance, modify both the operating system and the Nginx configuration to increase the maximum number of connections that Nginx can queue.
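On Linux, where netstat lacks the -L option, the same information can be read with ss; a sketch (the exact column layout varies between versions):

```shell
# List listening TCP sockets without resolving names.
# For listening sockets, Recv-Q shows the current number of queued
# (not yet accepted) connections and Send-Q shows the backlog limit.
ss -lnt
```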

Adjust the Operating System (Linux, FreeBSD)

You can increase the value of the net.core.somaxconn kernel parameter (128 by default) to cope with high traffic:

For FreeBSD, run sudo sysctl kern.ipc.somaxconn=4096. For Linux, run sudo sysctl -w net.core.somaxconn=4096.

To make the setting persistent across reboots, open the file /etc/sysctl.conf and add this line: net.core.somaxconn = 4096
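The current value can be checked at any time; after editing /etc/sysctl.conf, `sudo sysctl -p` reloads it without a reboot. A minimal check:

```shell
# Print the current kernel accept-queue limit (no root required).
sysctl -n net.core.somaxconn
```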

Adjust Nginx

If you set somaxconn to a value greater than 512, change the backlog parameter of the listen directive in the Nginx configuration to match:

server {
    listen 80 backlog=4096;

    # The rest of server configuration
}
