Prepare for nginx


Nginx Preparation and Optimization


Most books on nginx only tell you the most basic part: yum install a package (I still prefer to compile and install from source), modify some lines here and there, and you have a web server! In most cases, a stock nginx installation will serve your website well. However, if you want to squeeze more performance out of nginx, you have to go further. Below I explain which nginx settings can be adjusted to handle a large number of client requests and achieve better performance. Note that this is not a complete tuning guide, just a brief overview of some settings that can be adjusted to improve performance. Be sure to adapt each change to your own situation.


Optimized basic configuration


In the nginx.conf file, a few top-level settings live outside of any module block.



user www;

pid /var/run/nginx.pid;

worker_processes auto;

worker_rlimit_nofile 8192;


user and pid should keep their default values; we won't change them because they have no impact on what we're after.

worker_processes defines the number of worker processes nginx uses to serve your website. The optimal value depends on many factors, including (but not limited to) the number of CPU cores, the number of disks storing the data, and the load pattern. If you are unsure, setting it to the number of available CPU cores is a good start (setting it to "auto" asks nginx to detect that value automatically).
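If you'd rather set an explicit number than rely on "auto", you first need the core count. A quick way to find it on Linux (a small sketch; nproc comes with GNU coreutils):

```shell
# Two equivalent ways to count logical CPUs on Linux.
nproc
grep -c ^processor /proc/cpuinfo
```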

worker_rlimit_nofile changes the limit on the maximum number of open files for worker processes. If this is not set, the operating system's limit applies. Chances are your OS and nginx can handle more files than "ulimit -n" reports, so raise this value high enough that nginx never runs into a "too many open files" error.


sudo sh -c "ulimit -HSn 100000"

To make this setting permanent (surviving a reboot), you need to modify the system configuration file. Add (or modify, if already present) the following lines:

> sudo nano /etc/security/limits.conf



* soft nofile 200000

* hard nofile 200000

Then reboot, or otherwise make the new limits take effect (for example, by logging out and back in).
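To confirm the limits actually in force for your session (a quick sanity check, not an nginx command):

```shell
# Print the soft and hard open-file limits of the current shell.
# After editing limits.conf and logging back in, both should report 200000.
ulimit -Sn
ulimit -Hn
```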

Events Module
The events module includes all the settings for handling connections in nginx.



events {
    worker_connections 2048;

    multi_accept on;

    use epoll;
}

worker_connections sets the number of connections a worker process can have open at the same time. Since we have already raised worker_rlimit_nofile, we can safely raise this value quite a bit.
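As a back-of-the-envelope estimate (not an nginx directive, and the worker count of 4 is an assumption), the theoretical client ceiling is worker_processes × worker_connections, roughly halved when nginx acts as a reverse proxy, because each client then occupies two connections:

```shell
workers=4          # assumed worker_processes (e.g. CPU core count)
connections=2048   # worker_connections from the events block
echo $(( workers * connections ))       # clients served directly: 8192
echo $(( workers * connections / 2 ))   # clients when reverse-proxying: 4096
```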

multi_accept tells nginx to accept as many connections as possible after being notified of a new connection.

use sets the event-polling method used to multiplex client connections. If you are on Linux 2.6+, use epoll. If you are on *BSD, use kqueue. Want to know more about event polling? Let Wikipedia be your guide (warning: understanding everything may require a serious beard and an operating systems course).

(Note that if you don't specify a polling method, nginx will pick the most appropriate one for your operating system.)

HTTP Module

The http module controls the core features of nginx's HTTP handling. We'll only look at a few of its settings, but a brief overview is worthwhile. Each of the following configuration snippets should be placed inside the http module; this won't be repeated below.



http {
    sendfile on;

    tcp_nopush on;
    tcp_nodelay on;

    ...
}

sendfile enables sendfile(). sendfile() copies data between a disk and a TCP socket (or any two file descriptors). Before sendfile existed, transferring such data meant allocating a buffer in user space, reading from the source file into that buffer with read(), and then writing the buffer to the network with write(). sendfile() reads the data from the disk directly into the operating system's buffer. Because this happens inside the kernel, sendfile() is more efficient than combining read() and write(), which add context switching and buffering overhead (learn more about sendfile).

tcp_nopush tells nginx to send all header files in one packet instead of sending them one by one.

tcp_nodelay tells nginx not to buffer data but to push small writes out quickly. This should only be used for applications that frequently send small fragments of information without expecting an immediate response and that need to deliver data in real time.



access_log off;

error_log /var/log/nginx/error.log crit;

access_log determines whether nginx keeps access logs. Turning it off reduces disk I/O and increases speed (in other words: you only live once).

error_log tells nginx to log only critical errors.



keepalive_timeout 20;

client_header_timeout 20;
client_body_timeout 20;

reset_timedout_connection on;

send_timeout 20;

keepalive_timeout sets how long a keep-alive connection to a client stays open; the server closes the connection after this time. We lower it to avoid keeping workers busy for too long.

client_header_timeout and client_body_timeout set the timeouts for the request header and request body, respectively. These should also be set fairly low.

reset_timedout_connection tells nginx to close the connection when a client stops responding. This frees all memory allocated to that client.

send_timeout sets the timeout for responding to the client. It does not cover the entire transfer, only the interval between two successive read operations by the client. If the client reads no data within this period, nginx shuts the connection down.



limit_conn_zone $binary_remote_addr zone=addr:5m;

limit_conn addr 100;

limit_conn_zone sets up a shared memory zone where per-key data (such as the current number of connections) is stored. 5m means five megabytes; one megabyte can hold about 32 thousand 32-byte states or about 16 thousand 64-byte states, so 5m should be plenty.

limit_conn sets the maximum number of connections for a given key. Here the key is the zone addr and the value is 100, so each IP address is allowed at most 100 concurrent connections.
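Per the nginx docs, a one-megabyte zone holds roughly 32 thousand 32-byte states (or 16 thousand 64-byte states), so the capacity of the 5m zone above works out as follows (rough figures, not exact limits):

```shell
zone_mb=5
echo $(( zone_mb * 32000 ))   # ~160000 IPs at 32 bytes per state
echo $(( zone_mb * 16000 ))   # ~80000 IPs at 64 bytes per state
```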



include /etc/nginx/mime.types;

default_type text/html;

charset UTF-8;

include pulls the contents of another file directly into the current file. Here it loads the list of MIME types for later use.

default_type sets the default MIME type for files.

charset sets the default character set included in the response header.

The following two performance options are explained in this great question on the Webmasters Stack Exchange.



gzip on;

# gzip_static on;

gzip_proxied any;

gzip_min_length 256;

gzip_comp_level 4;

gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml+rss text/javascript;

gzip tells nginx to compress the data it sends with gzip. This reduces the amount of data to transfer.

gzip_static tells nginx to look for a pre-gzipped file of the same name before compressing the content itself. This requires you to pre-compress your files (it is commented out in this example), but lets you use the highest compression ratio while sparing nginx the work of compressing those files again (learn more about gzip_static).

gzip_proxied enables or disables compression of proxied traffic based on the request and response. Setting it to any gzips all requests.

gzip_min_length sets the minimum size, in bytes, of a response that will be gzipped. Compression slows down request processing, so with the setting above, responses smaller than 256 bytes are not compressed.
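The reason for the threshold is that gzip output carries a fixed header and trailer of roughly 20 bytes, so very small or incompressible payloads can actually grow. A quick demonstration with the command-line gzip tool (assuming it is installed):

```shell
# 50 random (incompressible) bytes grow once gzipped...
head -c 50 /dev/urandom | gzip -c | wc -c    # more than 50 bytes out
# ...while 5000 highly repetitive bytes shrink dramatically.
yes | head -c 5000 | gzip -c | wc -c         # a few dozen bytes out
```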

gzip_comp_level sets the compression level, anywhere from 1 to 9, where 9 is the slowest but yields the highest compression ratio. 4 is a good compromise.

gzip_types sets which MIME types to gzip. The list above is a reasonable start, and you can add more.



# Cache information about file descriptors of frequently accessed files.
# This can boost performance, but you need to test these values.
open_file_cache max=100000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;

##
# Virtual host configs
# aka our settings for specific servers
##

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}

open_file_cache enables the cache, setting the maximum number of entries and how long to cache them. The maximum is set to a high value here; entries that go unused for more than 20 seconds are evicted.

open_file_cache_valid sets how often nginx revalidates the information held in open_file_cache.

open_file_cache_min_uses sets the minimum number of times a file must be accessed during the inactive interval for its entry to remain in open_file_cache.

open_file_cache_errors tells nginx to also cache errors encountered when looking up a file.

include once again pulls other files into the configuration, here the server (virtual host) definitions kept in separate files. If your server blocks are not in these paths, adjust these lines accordingly.


Complete configuration file:



user www-data;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 8192;

events {
    worker_connections 2048;
    multi_accept on;
    use epoll;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    access_log off;
    error_log /var/log/nginx/error.log crit;

    keepalive_timeout 20;
    client_header_timeout 20;
    client_body_timeout 20;
    reset_timedout_connection on;
    send_timeout 20;

    limit_conn_zone $binary_remote_addr zone=addr:5m;
    limit_conn addr 100;

    include /etc/nginx/mime.types;
    default_type text/html;
    charset UTF-8;

    gzip on;
    gzip_proxied any;
    gzip_min_length 256;
    gzip_comp_level 4;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml+rss text/javascript;

    open_file_cache max=100000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

After editing the configuration, be sure to reload or restart nginx so the new configuration file takes effect (running "nginx -t" first checks it for syntax errors).



service nginx reload


This article is from the shiningliliang blog. Please keep this source: http://shiningliliang.blog.51cto.com/4984800/1567499
