Nginx basic configuration and performance optimization guide

Tags: epoll, sendfile

Reprinted from: http://www.chinaz.com/web/2015/0424/401323.shtml

Most Nginx installation guides tell you the basics: install via apt-get or yum, tweak a few lines here and there, and you have a web server! In most cases, a stock Nginx installation will serve your site just fine. However, if you really want to squeeze more performance out of Nginx, you have to go deeper. In this guide, I'll explain which Nginx settings can be fine-tuned to optimize performance when handling a large number of clients. Note that this is not a comprehensive tuning guide; it is a quick overview of settings that can be tweaked to improve performance. Your situation may differ.

Basic (optimized) configuration

The only file we will modify is nginx.conf, which contains the settings for all of Nginx's different modules. You should find nginx.conf in the /etc/nginx directory on your server. First we'll cover some global settings, then go through the file module by module and discuss which settings give good performance under heavy client load, and why. A complete configuration file appears at the end of this article.

High-level configuration

In the nginx.conf file, Nginx has a handful of top-level settings that sit above the module sections.

    user www-data;
    pid /var/run/nginx.pid;
    worker_processes auto;
    worker_rlimit_nofile 100000;

user and pid should be left at their defaults; changing them makes no practical difference here.

worker_processes defines the number of worker processes Nginx uses to serve the web. The optimal value depends on many factors, including (but not limited to) the number of CPU cores, the number of hard disks storing the data, and the load pattern. When in doubt, setting it to the number of available CPU cores is a good start (setting it to "auto" tries to detect that automatically).

worker_rlimit_nofile changes the maximum number of open files allowed per worker process. If it is not set, the operating system's limit applies. With this set, Nginx can handle more open files than "ulimit -a" reports, so set it high and Nginx will not run into "too many open files" problems.

Events Module

The events module contains the settings for all connection processing in Nginx.

    events {
        worker_connections 2048;
        multi_accept on;
        use epoll;
    }

worker_connections sets the maximum number of connections each worker process can open simultaneously. Since worker_rlimit_nofile is raised above, we can set this value fairly high.

Keep in mind that the maximum number of clients is also limited by the number of socket connections available to the system (~64k), so setting this unrealistically high does no good.
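
As a rough, illustrative calculation (the core count here is an assumption, not from the original article), the practical ceiling on concurrent clients is approximately worker_processes multiplied by worker_connections:

    # Back-of-the-envelope capacity estimate (illustrative numbers only).
    # If "worker_processes auto;" resolves to 4 cores and the events block
    # below is used, nginx can hold roughly 4 * 2048 = 8192 simultaneous
    # client connections. When proxying, each request also consumes an
    # upstream connection, so the effective client capacity is about half.
    events {
        worker_connections 2048;
    }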

multi_accept tells Nginx to accept as many connections as possible each time it is notified about a new connection.

use selects the event polling method used to handle client connections. If you are on Linux 2.6+, you should use epoll. If you are on *BSD, you should use kqueue. Want to know more about event polling? See Wikipedia (note that if you want to understand everything, a neckbeard and an operating systems course may be required).

(It's worth noting that if you don't tell Nginx which polling method to use, it will choose the one best suited to your operating system.)

HTTP Module

The http module controls all the core features of Nginx's HTTP processing. Since there are quite a few settings in it, we only pull out a small subset below. All of these settings belong inside the http block, and you likely won't notice them individually otherwise.

    http {
        server_tokens off;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
    }

server_tokens won't make Nginx run any faster, but it turns off the Nginx version number on error pages (and in the Server response header), which is good for security.
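
A small sketch of the effect (the version shown is illustrative, not from the article):

    # With the version exposed (default):   Server: nginx/1.18.0
    # With "server_tokens off;":            Server: nginx
    # Error pages likewise drop the version from their footer.
    server_tokens off;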

sendfile enables the sendfile() system call, which copies data between a disk file and a TCP socket (or between any two file descriptors) inside the kernel. Before sendfile(), sending a file meant allocating a buffer in user space, copying the data from the file into that buffer with read(), and then writing the buffer out to the network with write(). sendfile() instead reads the data from disk straight into the OS cache and sends it from there; because the copy happens in the kernel, sendfile() is more efficient than combining read() and write() and avoids the extra user-space buffer.
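
As a hedged aside (sendfile_max_chunk is not part of the article's configuration), sendfile is often paired with a chunk limit so that a single connection to a very fast client cannot monopolize a worker process:

    http {
        sendfile on;
        # limit the amount of data transferred in one sendfile() call;
        # without a cap, one fast connection can seize the whole worker
        sendfile_max_chunk 512k;
    }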

tcp_nopush tells Nginx to send the response headers in one packet instead of sending them piece by piece.

tcp_nodelay tells Nginx not to buffer data but to send it out in small chunks as soon as it is available. This matters when data should be sent promptly, for applications that send small pieces of information and expect an immediate response.

    access_log off;
    error_log /var/log/nginx/error.log crit;

access_log sets whether Nginx stores access logs. Turning this off speeds things up by reducing disk I/O (aka, YOLO).

error_log tells Nginx to log only critical errors.
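
If you do need access logs, a middle-ground sketch (not part of the article's configuration) is to buffer writes so the disk is touched far less often:

    # write log entries from a 32k in-memory buffer, flushing at most every
    # 5 seconds, instead of performing one disk write per request
    access_log /var/log/nginx/access.log combined buffer=32k flush=5s;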

    keepalive_timeout 10;
    client_header_timeout 10;
    client_body_timeout 10;
    reset_timedout_connection on;
    send_timeout 10;

keepalive_timeout assigns the timeout for keep-alive connections with the client; the server closes the connection after this period. We set it low so that idle connections don't tie up workers for long.

client_header_timeout and client_body_timeout set the timeouts for reading the request header and the request body, respectively. We can lower these as well.

reset_timedout_connection tells Nginx to close connections from non-responding clients, which frees the memory those clients occupy.

send_timeout specifies the response timeout for the client. It applies not to the entire transfer but to the interval between two successive read operations by the client; if the client reads no data within this time, Nginx closes the connection.

    limit_conn_zone $binary_remote_addr zone=addr:5m;
    limit_conn addr 100;

limit_conn sets the maximum number of connections for a given key. Here the key zone is addr and the value is 100, meaning each IP address may hold at most 100 simultaneous connections.

limit_conn_zone sets the parameters of the shared memory zone that stores state for each key, such as the current number of connections. 5m means 5 megabytes; roughly, each megabyte can hold about 32,000 32-byte states or about 16,000 64-byte states, so this should be set large enough for your traffic.
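
As an illustrative sketch (the listen port, location path, and 429 status are assumptions, not from the article), the addr zone defined above can also be applied more strictly inside an individual server or location block:

    server {
        listen 80;

        location /download/ {
            limit_conn addr 10;        # tighter per-IP limit just for downloads
            limit_conn_status 429;     # reply 429 instead of the default 503 when the limit is hit
        }
    }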

    include /etc/nginx/mime.types;
    default_type text/html;
    charset UTF-8;

include is simply a directive that pulls the contents of another file into the current file. Here we use it to load a list of MIME types that will be used below.

default_type sets the default MIME type used for files whose type is not otherwise known.

charset sets the default character set declared in our response headers.
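
For orientation, the included mime.types file is simply a types block mapping MIME types to file extensions; a tiny excerpt looks roughly like this (abbreviated, not the full file):

    types {
        text/html    html htm shtml;
        text/css     css;
        image/png    png;
        image/jpeg   jpeg jpg;
    }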

The performance benefit of the next two options is explained in a great answer on the Webmasters Stack Exchange.

    gzip on;
    gzip_disable "msie6";

    # gzip_static on;
    gzip_proxied any;
    gzip_min_length 1000;
    gzip_comp_level 4;

    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

gzip tells Nginx to send data compressed with gzip, which reduces the amount of data we send.

gzip_disable disables gzip for the specified clients. We set it to match IE6 and earlier, keeping our configuration broadly compatible.

gzip_static tells Nginx to look for a pre-compressed copy of a resource before compressing it on the fly. This requires you to compress your files in advance (it is commented out in this example), and it lets you use the maximum compression level once without Nginx having to compress the file on every request (see the gzip_static documentation for more detail).
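
A minimal sketch of how gzip_static is typically used (the location and file name are hypothetical, and nginx must be built with the ngx_http_gzip_static_module):

    location /static/ {
        # if the client accepts gzip and a pre-built app.css.gz exists next to
        # app.css (e.g. created ahead of time with "gzip -k -9 app.css"),
        # serve the .gz file directly instead of compressing on the fly
        gzip_static on;
    }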

gzip_proxied enables or disables compression of responses to proxied requests, based on the request and response. We set it to any, meaning all proxied requests will be compressed.

gzip_min_length sets the minimum number of bytes a response must have before compression is enabled. If a response is smaller than 1000 bytes, we'd better not compress it, since compressing such small payloads slows down everything handling the request.

gzip_comp_level sets the compression level, any number from 1 to 9; 9 is the slowest but gives the highest compression ratio. We set it to 4, a reasonable middle ground.

gzip_types sets the data formats to be compressed. Several are listed in the example above, and you can add more.

    # cache information about file descriptors; for frequently accessed files
    # this can boost performance, but you do need to test these values
    open_file_cache max=100000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    ##
    # Virtual Host configs
    # aka our settings for specific servers
    ##
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

open_file_cache both enables the cache and specifies the maximum number of entries it holds, as well as how long entries are cached. We set the maximum fairly high, and entries are removed from the cache after being inactive for more than 20 seconds.

open_file_cache_valid specifies how often the information in open_file_cache is re-checked for validity.

open_file_cache_min_uses defines the minimum number of times a file must be accessed during the inactive period for its entry to stay in the cache.

open_file_cache_errors specifies whether errors encountered while looking up a file (for example, a file that does not exist) are cached as well. We also include our server modules here, which are defined in separate files; if your server modules are not at these locations, you will have to modify this line to point to the right place.
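
As an illustrative sketch (the domain and paths are made up), each file pulled in from /etc/nginx/sites-enabled/ usually holds one server block along these lines:

    # /etc/nginx/sites-enabled/example.com  (hypothetical)
    server {
        listen 80;
        server_name example.com;
        root /var/www/example.com;

        location / {
            try_files $uri $uri/ =404;
        }
    }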

A complete configuration

    user www-data;
    pid /var/run/nginx.pid;
    worker_processes auto;
    worker_rlimit_nofile 100000;

    events {
        worker_connections 2048;
        multi_accept on;
        use epoll;
    }

    http {
        server_tokens off;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;

        access_log off;
        error_log /var/log/nginx/error.log crit;

        keepalive_timeout 10;
        client_header_timeout 10;
        client_body_timeout 10;
        reset_timedout_connection on;
        send_timeout 10;

        limit_conn_zone $binary_remote_addr zone=addr:5m;
        limit_conn addr 100;

        include /etc/nginx/mime.types;
        default_type text/html;
        charset UTF-8;

        gzip on;
        gzip_disable "msie6";
        gzip_proxied any;
        gzip_min_length 1000;
        gzip_comp_level 6;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

        open_file_cache max=100000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
    }

When you are finished editing the configuration, check the syntax (for example with nginx -t) and restart Nginx so the settings take effect.

    sudo service nginx restart
