Tips for optimizing Nginx servers

Source: Internet
Author: User
Tags: sendfile

This article introduces some tips for optimizing the Nginx server, including configuration suggestions for the HTTP module and the Events module.

Most Nginx installation guides tell you the basics: install through apt-get, change a few settings here or there, and presto, you have a web server! And in most cases, a stock nginx installation works fine for your website. However, if you really want to squeeze performance out of nginx, you have to go deeper. In this guide I will explain which Nginx settings can be fine-tuned to optimize performance when handling a large number of clients. Note that this is not a comprehensive tuning guide; it is a quick overview of the settings that can be adjusted to improve performance. Your situation may be different.

Basic (optimized) Configuration

The only file to modify is nginx.conf, which contains all the settings for the different Nginx modules. You should find nginx.conf in the /etc/nginx directory on your server. First we will go over some global settings; then we will cover, module by module, the settings that let you serve a large number of clients with good performance, and why they improve it. There is a complete configuration file at the end of this article.

High-level Configuration

In the nginx.conf file, Nginx has a few top-level settings that sit outside of any module section.

The Code is as follows:

user www-data;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 100000;

The user and pid settings should be left at their defaults; we will not change them, since changing them makes no practical difference.

worker_processes defines the number of worker processes nginx uses to serve web requests. The optimal value depends on many factors, including (but not limited to) the number of CPU cores, the number of hard disks storing data, and the load pattern. When in doubt, setting it to the number of available CPU cores is a good start (setting it to "auto" will attempt to detect this automatically).

worker_rlimit_nofile changes the maximum number of files a worker process may have open. If this is not set, the operating system's limit applies. Once set, your operating system and Nginx can handle more files than "ulimit -a" reports, so set this value high and nginx will not run into "too many open files" problems.
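To see what limit your OS currently imposes (the value worker_rlimit_nofile overrides for nginx workers), you can query it from any process. A small Python sketch:

```python
import resource

# Query this process's open-file limits -- the same values that
# `ulimit -n` (soft) and `ulimit -Hn` (hard) report. nginx's
# worker_rlimit_nofile raises the soft limit for its worker processes.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")
```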

Events Module

The events module contains all the settings for handling connections in nginx.

The Code is as follows:

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

worker_connections sets the maximum number of connections each worker process can have open at the same time. Since worker_rlimit_nofile is set above, we can safely set this value high.

Remember that the maximum number of clients is also limited by the number of available socket connections (~64k), so there is no point in setting this unrealistically high.
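As a rule of thumb, the theoretical ceiling on simultaneous clients is worker_processes times worker_connections. A quick sketch of that arithmetic, assuming worker_processes is set to "auto" (which resolves to the CPU core count):

```python
import os

# Theoretical client ceiling: workers * connections per worker.
worker_processes = os.cpu_count() or 1   # what `worker_processes auto` would pick
worker_connections = 2048                # from the events block above
max_clients = worker_processes * worker_connections
print(f"{worker_processes} workers x {worker_connections} connections = {max_clients} clients")
```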

multi_accept tells nginx to accept as many connections as possible after it is notified of a new connection.

The use directive selects the event-polling method used to multiplex client connections. If you use Linux 2.6+, you should use epoll. If you use *BSD, you should use kqueue. Want to know more about event polling? Take a look at Wikipedia (note that if you want to understand everything, a neckbeard and an operating systems course may be required).

(It is worth noting that if you do not tell Nginx which polling method to use, it will pick the one best suited to your operating system.)

HTTP Module

The HTTP module controls all the core features of nginx's HTTP processing. Since there are quite a lot of settings here, we will only go through a small portion of them. All of these settings belong inside the http module.

The Code is as follows:

http {
    server_tokens off;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
}

server_tokens does not make nginx run faster, but it hides the nginx version number on error pages, which is good for security.

sendfile enables the use of sendfile(). sendfile() copies data between the disk and a TCP socket (or between any two file descriptors). Before sendfile existed, transmitting data meant allocating a buffer in user space, copying the data from the file into the buffer with read(), and then writing the buffer to the network with write(). sendfile() reads the data from disk straight into the OS cache; because the copy happens entirely in the kernel, sendfile() is more efficient than combining read() and write() and managing the intermediate buffer (more about sendfile).
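The same kernel-side copy is exposed to applications as the sendfile(2) system call. A minimal Python sketch between two ordinary file descriptors (this works on Linux; on other platforms the destination may have to be a socket):

```python
import os
import tempfile

# Copy bytes between two file descriptors entirely inside the kernel --
# the mechanism behind nginx's `sendfile on;`, which avoids the
# user-space read()/write() round trip.
src = tempfile.TemporaryFile()
dst = tempfile.TemporaryFile()
src.write(b"hello from the kernel")
src.flush()

sent = os.sendfile(dst.fileno(), src.fileno(), 0, 1024)  # from offset 0, up to 1 KiB
dst.seek(0)
print(sent, dst.read())
```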

tcp_nopush tells nginx to send the HTTP response headers in one packet, instead of sending them piece by piece.

tcp_nodelay tells nginx not to buffer small amounts of data before sending (it disables Nagle's algorithm). Set this when your application sends small pieces of data that must arrive promptly, where waiting to batch them would delay the response.
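At the socket level, tcp_nodelay corresponds to the TCP_NODELAY option. A quick Python illustration on a raw socket:

```python
import socket

# TCP_NODELAY disables Nagle's algorithm, so small writes are sent
# immediately instead of being batched -- the per-connection setting
# behind nginx's `tcp_nodelay on;`.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print("TCP_NODELAY enabled:", bool(nodelay))
sock.close()
```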

The Code is as follows:

access_log off;
error_log /var/log/nginx/error.log crit;

access_log sets whether nginx stores access logs. Turning this off makes disk I/O faster (aka, YOLO).

error_log tells nginx to log only critical errors.

The Code is as follows:

keepalive_timeout 10;
client_header_timeout 10;
client_body_timeout 10;
reset_timedout_connection on;
send_timeout 10;

keepalive_timeout assigns the timeout for keep-alive connections with the client; the server closes the connection once it expires. We set it low so that worker connections are freed up sooner and nginx can keep serving more clients.

client_header_timeout and client_body_timeout set the timeouts for the request header and request body, respectively. We can set these lower as well.

reset_timedout_connection tells nginx to close connections from unresponsive clients. This frees the memory those clients occupy.

send_timeout specifies the timeout for transmitting a response to the client. It applies not to the entire transfer but to the interval between two successive read operations by the client; if the client reads no data within this period, nginx closes the connection.

The Code is as follows:

limit_conn_zone $binary_remote_addr zone=addr:5m;
limit_conn addr 100;

limit_conn sets the maximum number of connections for a given key. Here the key is addr and the value is 100, meaning we allow each IP address at most 100 simultaneous connections.

limit_conn_zone sets up the shared-memory zone that stores the state for each key (such as the current number of connections). 5m means 5 megabytes, which should be large enough: a 5 MB zone can hold roughly 32K*5 32-byte states or 16K*5 64-byte states.
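Those capacity figures are easy to verify: divide the 5 MB zone by the per-state size. A quick sanity check in Python:

```python
# A 5 MB shared-memory zone divided by the per-connection state size
# gives the number of states it can hold.
zone_bytes = 5 * 1024 * 1024
states_32 = zone_bytes // 32   # 32-byte states
states_64 = zone_bytes // 64   # 64-byte states
print(states_32, states_64)    # 32K*5 and 16K*5 states, respectively
```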

The Code is as follows:

include /etc/nginx/mime.types;
default_type text/html;
charset UTF-8;

include is simply a directive that inlines the contents of another file into the current file. Here we use it to load the list of MIME types used later.

default_type sets the default MIME type for files.

charset sets the default character set in our response headers.

The following two groups of settings were explained in a great thread on the Webmasters StackExchange about improving performance.

The Code is as follows:

gzip on;
gzip_disable "msie6";
# gzip_static on;
gzip_proxied any;
gzip_min_length 1000;
gzip_comp_level 4;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml+rss text/javascript;

Gzip tells nginx to send data in the form of gzip compression. This will reduce the amount of data we send.

gzip_disable disables gzip for clients whose User-Agent matches the given pattern. We disable it for Internet Explorer 6, which does not handle gzip responses well, so that our setup stays broadly compatible.

gzip_static tells nginx to look for a pre-gzipped copy of a resource before compressing it on the fly. This requires you to pre-compress your files (it is commented out in this example), but it lets you use the highest compression ratio while sparing nginx from compressing those files at request time (click here for more detailed gzip_static information).

gzip_proxied allows or disallows compressing responses to proxied requests, based on the request and response. We set it to any, which means all proxied requests are compressed as well.

gzip_min_length sets the minimum number of bytes a response must have before compression kicks in. If a response is smaller than 1000 bytes, we had better not compress it: compressing such small payloads slows down the whole request for little gain.

gzip_comp_level sets the compression level, any number between 1 and 9; 9 is the slowest but has the highest compression ratio. We set it to 4, a reasonable compromise.
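The trade-off between compression level and output size is easy to observe with Python's gzip module. The payload below is just an illustrative stand-in for a repetitive HTML response:

```python
import gzip

# Repetitive HTML-like payload; real responses compress similarly.
data = b"<html><body>" + b"<p>hello nginx</p>" * 500 + b"</body></html>"

fastest = gzip.compress(data, compresslevel=1)
chosen = gzip.compress(data, compresslevel=4)   # the level used above
best = gzip.compress(data, compresslevel=9)
print(len(data), len(fastest), len(chosen), len(best))
```

Higher levels shave off a few more bytes at the cost of CPU time, which is why 4 is a sensible middle ground for dynamic responses.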

gzip_types sets the content types to compress. The list above covers the common cases, and you can add more formats.

The Code is as follows:

# Caching information about file descriptors and frequently accessed files
# can boost performance, but you need to test these values
open_file_cache max=100000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;

##
# Virtual Host Configs
# aka our settings for specific servers
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;

open_file_cache enables the cache, specifying the maximum number of entries and how long to cache them. We set a fairly high maximum, and entries are evicted after being inactive for more than 20 seconds.

open_file_cache_valid specifies how often nginx re-validates the information in open_file_cache.

open_file_cache_min_uses defines the minimum number of times a file must be accessed within the inactive period for it to remain in the cache.

open_file_cache_errors specifies whether errors encountered while looking up a file (such as the file not existing) are cached as well. The two include lines then pull in our virtual-host configurations, which are defined in separate files. If your server blocks are not in these locations, you must modify these lines to point to the correct place.

A complete configuration

The Code is as follows:

user www-data;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 100000;

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

http {
    server_tokens off;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    access_log off;
    error_log /var/log/nginx/error.log crit;

    keepalive_timeout 10;
    client_header_timeout 10;
    client_body_timeout 10;
    reset_timedout_connection on;
    send_timeout 10;

    limit_conn_zone $binary_remote_addr zone=addr:5m;
    limit_conn addr 100;

    include /etc/nginx/mime.types;
    default_type text/html;
    charset UTF-8;

    gzip on;
    gzip_disable "msie6";
    gzip_proxied any;
    gzip_min_length 1000;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml+rss text/javascript;

    open_file_cache max=100000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

After editing the configuration, restart nginx for the changes to take effect.

The Code is as follows:

sudo service nginx restart

Postscript

That's it! Your web server is now ready to handle the crowds of visitors that previously gave you trouble. This is not the only way to speed up your website; I will soon write more articles about other methods.
