O&M Science: Nginx Optimization Guide (Battle Preparation)

Original: Battle Ready Nginx - an optimization guide

Most Nginx installation guides tell you the basics: install it through apt-get, tweak a few configuration lines here and there, and you have a web server! And in most cases, a stock Nginx installation will serve your website just fine. However, if you really want to squeeze performance out of Nginx, you have to go deeper. In this guide I will explain which Nginx settings can be fine-tuned to optimize performance when handling a large number of clients. Note that this is not a comprehensive tuning guide; it is a quick overview of the settings that can be adjusted to improve performance. Your situation may differ.

Basic (optimized) configuration

The only file we will modify is nginx.conf, which contains all the settings for the various Nginx modules. You should be able to find nginx.conf in the /etc/nginx directory. First we will cover a few global settings, then walk through, module by module, the settings that let Nginx perform well when serving a large number of clients, and why they improve performance. A complete configuration file can be found at the end of this article.

 

High-level configuration

At the top of the nginx.conf file, above the module sections, Nginx keeps a few high-level settings.

user www-data;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 100000;

user and pid should stay at their defaults; we will not change them, because changing them gains nothing.

worker_processes defines the number of worker processes Nginx uses to serve your site. The optimal value depends on many factors, including (but not limited to) the number of CPU cores, the number of hard disks storing data, and the load pattern. When in doubt, setting it to the number of available CPU cores is a good start (setting it to "auto" tells Nginx to try to detect that automatically).

worker_rlimit_nofile changes the maximum number of open files per worker process. If this is not set, the operating system's limit applies. Setting it lets your operating system and Nginx handle more files than "ulimit -a" reports, so set it high enough that Nginx never runs into a "too many open files" problem.

 

Events Module

The events module contains all the settings for handling connections in nginx.

 

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}


worker_connections sets the maximum number of connections each worker process may have open at the same time. Because we raised worker_rlimit_nofile above, we can safely set this fairly high.

Remember that the maximum number of clients is also limited by the number of available sockets (roughly 64K), so there is no benefit in setting this unrealistically high.
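As a back-of-the-envelope check (the core count below is a hypothetical example, not a value from this guide), the theoretical ceiling on concurrent clients is roughly the product of the two directives above:

# Rough capacity estimate, assuming worker_processes auto resolves to 4 cores:
#   max clients = worker_processes * worker_connections = 4 * 2048 = 8192
# If Nginx also proxies to a backend, each client can consume two sockets
# (one client-side, one upstream), so the practical ceiling is roughly half.
# Keep worker_rlimit_nofile comfortably above worker_connections.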

multi_accept tells Nginx to accept as many connections as possible after it is notified of a new connection.

use sets the event-polling method used to multiplex client connections. If you are on Linux 2.6+, you should use epoll; on *BSD, use kqueue. Want to know more about event polling? Take a look at Wikipedia (fair warning: a neckbeard and an operating systems course may be needed to understand everything).

(It is worth noting that if you do not tell Nginx which polling method to use, it will pick the one best suited to your operating system.)

 

HTTP Module

The HTTP module controls all the core features of Nginx's HTTP handling. Since there are quite a few settings here, we will take them a small chunk at a time. All of these settings go inside the http block.

http {
    server_tokens off;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    ...
}
server_tokens does not make Nginx run any faster, but it lets you disable the Nginx version number on error pages and in response headers, which is good for security.

sendfile enables the use of sendfile(). sendfile() copies data between the disk and a TCP socket (or between any two file descriptors). Before sendfile(), transferring data meant allocating a buffer in user space, copying the data from the file into that buffer with read(), and then writing the buffer to the network with write(). sendfile() reads the data straight into the OS cache, and because the whole copy happens in the kernel, it is more efficient than combining read() and write() and paying for the extra buffer management (see the sendfile documentation for more).

 

tcp_nopush tells Nginx to send all the headers in a single packet, together with the start of the data, rather than sending them one after another.

tcp_nodelay tells Nginx not to buffer small writes in the hope of batching them (it disables Nagle's algorithm on keep-alive connections), but to push small pieces of data out immediately. This matters when data needs to reach the client promptly rather than waiting until a full packet has accumulated.
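To make the effect concrete, here is a minimal sketch of a static-file server block that benefits from these three directives; the server name and root path are hypothetical and not part of the original configuration:

server {
    listen 80;
    server_name static.example.com;   # hypothetical name
    root /var/www/static;             # hypothetical path

    location / {
        sendfile on;       # copy file data to the socket inside the kernel
        tcp_nopush on;     # fill packets: headers and file start go out together
        tcp_nodelay on;    # flush the final, partial packet without delay
        expires 7d;        # optional: let browsers cache static assets
    }
}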

access_log off;
error_log /var/log/nginx/error.log crit;

access_log controls whether Nginx stores access logs. Turning it off saves disk I/O (aka, YOLO).

error_log tells Nginx to log only critical errors.
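If dropping the access log entirely is too drastic, buffered logging is a middle ground; the buffer and flush values below are assumptions, not part of the original configuration:

# Instead of access_log off; write log entries in batches:
access_log /var/log/nginx/access.log combined buffer=16k flush=5s;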

 

keepalive_timeout 10;
client_header_timeout 10;
client_body_timeout 10;
reset_timedout_connection on;
send_timeout 10;

keepalive_timeout sets the keep-alive timeout for client connections; the server closes a keep-alive connection after this much idle time. We keep it low so that workers are not tied up holding idle connections.

client_header_timeout and client_body_timeout set the timeouts for reading the request header and the request body. We can keep these low as well.

 

reset_timedout_connection tells Nginx to close connections from clients that stop responding. This frees the memory those clients were occupying.

send_timeout sets the timeout for sending a response to the client. It does not apply to the whole transfer, only to the interval between two successive read operations by the client; if the client reads nothing within this time, Nginx closes the connection.
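These limits can also be relaxed where a strict global value would get in the way; a sketch for a hypothetical upload endpoint (the path and limits are assumptions):

location /upload/ {
    client_body_timeout 60;     # give slow clients more time to send the request body
    client_max_body_size 50m;   # allow bodies larger than the 1 MB default
}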

limit_conn_zone $binary_remote_addr zone=addr:5m;
limit_conn addr 100;
limit_conn_zone sets up a shared memory zone that stores state for various keys, such as the current number of connections per key. 5m means 5 megabytes, which should be enough to hold about (32K * 5) 32-byte states or (16K * 5) 64-byte states.

limit_conn sets the maximum number of connections for a given key. Here the key is addr and the value is 100, meaning each IP address may keep at most 100 connections open at the same time.
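The zone defined above can also be applied more selectively; a sketch that throttles only a hypothetical /download/ location (limit_conn_status requires Nginx 1.3.15 or newer):

# limit_conn_zone $binary_remote_addr zone=addr:5m;   # in the http block, as above

server {
    location /download/ {
        limit_conn addr 10;       # at most 10 simultaneous downloads per IP
        limit_conn_status 503;    # response code returned when the limit is exceeded
    }
}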

 

include /etc/nginx/mime.types;
default_type text/html;
charset UTF-8;
include is simply a directive that pulls the contents of another file into the current file. Here we use it to load a list of MIME types that will be used later.

default_type sets the default MIME type used for files.

charset sets the default character set declared in our response headers.

 

The performance gains from the following settings are explained in a great WebMasters StackExchange answer.
gzip on;
gzip_disable "msie6";
# gzip_static on;
gzip_proxied any;
gzip_min_length 1000;
gzip_comp_level 4;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

gzip tells Nginx to send data compressed with gzip. This reduces the amount of data we send.

gzip_disable turns gzip off for matching clients. We set it to "msie6" so that old Internet Explorer 6 clients, which handle gzip poorly, get uncompressed responses, keeping our setup widely compatible.

gzip_static tells Nginx to check for a pre-compressed copy of a resource before compressing it on the fly. This requires that you pre-compress your files (it is commented out in this example); it lets you use the highest compression level, and Nginx does not have to compress those files itself (see the gzip_static documentation for more details).

gzip_proxied allows or disallows compression of responses to proxied requests, based on the request and response. We set it to any, meaning all proxied requests will be compressed.

gzip_min_length sets the minimum number of bytes a response must have before compression kicks in. If a response is smaller than 1000 bytes, it is better not to compress it, since compressing such small payloads costs more than it saves and slows down request handling.

gzip_comp_level sets the compression level, anywhere from 1 to 9; 9 is the slowest but gives the best compression ratio. We set it to 4, a reasonable compromise.

gzip_types sets the MIME types to compress. The list above covers the usual suspects; you can add more formats as needed.
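Two related directives are worth knowing, neither of which appears in the configuration above (treat them as optional additions): gzip_vary helps intermediate caches keep compressed and uncompressed copies apart, and gzip_static, shown commented out earlier, serves a pre-built .gz file when one exists next to the original:

gzip_vary on;        # adds "Vary: Accept-Encoding" so proxies cache both variants
# gzip_static on;    # serve style.css.gz instead of compressing style.css on the fly;
                     # requires pre-compressed files on disk and the
                     # ngx_http_gzip_static_module compiled in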

 

# cache information about file descriptors; caching frequently accessed files
# can boost performance, but you need to test these values
open_file_cache max=100000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;

##
# Virtual Host Configs
# aka our settings for specific servers
##

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;

open_file_cache enables the cache and specifies the maximum number of entries and how long to keep them. We set the maximum fairly high, and entries are evicted after they have been inactive for more than 20 seconds.

open_file_cache_valid specifies how often Nginx re-validates the information held in open_file_cache.

open_file_cache_min_uses defines the minimum number of times a file must be accessed within the inactive period for it to remain in the cache.

open_file_cache_errors specifies whether file-lookup errors are cached as well. Finally, we include the virtual host configuration files, where the individual server blocks are defined. If your server blocks are not in those locations, you will need to edit these include lines to point at the right place.
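For reference, here is a minimal sketch of what one of the included virtual host files could look like; the domain, paths, and file name are hypothetical and not part of the original guide:

# /etc/nginx/sites-enabled/example.com   (hypothetical file)
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}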

 

A complete configuration
user www-data;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 100000;

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

http {
    server_tokens off;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    access_log off;
    error_log /var/log/nginx/error.log crit;

    keepalive_timeout 10;
    client_header_timeout 10;
    client_body_timeout 10;
    reset_timedout_connection on;
    send_timeout 10;

    limit_conn_zone $binary_remote_addr zone=addr:5m;
    limit_conn addr 100;

    include /etc/nginx/mime.types;
    default_type text/html;
    charset UTF-8;

    gzip on;
    gzip_disable "msie6";
    gzip_proxied any;
    gzip_min_length 1000;
    gzip_comp_level 4;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    open_file_cache max=100000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
After editing the configuration, restart Nginx so that the changes take effect (running nginx -t first is a handy way to check the syntax).

 

 

sudo service nginx restart
Postscript

That's it! Your web server is now ready to handle the crowds of visitors that have been troubling you. This is not the only way to speed up your website; I will soon write more articles about other methods.

Source: http://www.oschina.net/translate/nginx-setup
