Nginx (pronounced "engine x") is used by more and more companies and individuals for its excellent performance, stability, simple configuration, and cross-platform support, and it has become the second most widely used web server after Apache. Forums and blogs of all sizes describe Nginx configuration from installation through optimization, but after reading many of these documents I found a common problem: they approach optimization from only two angles, modifying the Nginx configuration file and tuning the operating system's kernel parameters, without clear explanation or system-level comparison. This article covers the whole path: compiling and installing Nginx from source, modifying the configuration file, tuning kernel parameters, and adjusting the architecture.
I. Installation
(1) Streamlined modules
As Nginx keeps adding new features, more and more modules ship with it. Many operating-system vendors provide rpm, deb, or other packages in their own formats, installable locally or online, to simplify installation and management. I do not recommend this installation method. Packages do simplify installation and even resolve dependencies automatically, but the installed files end up scattered across the filesystem, which makes the software inconvenient to manage and maintain; dependencies between packages can also introduce security vulnerabilities or other problems; and when you want to upgrade to a new Nginx release, you will often find that the yum or deb repositories have not yet published it (they generally lag behind the versions released on the official website). Most importantly, a binary package enables many modules by default, such as the mail-related, uwsgi, and memcached modules. Many websites never use these modules; the resources they consume at runtime are small, but they may still be the straw that breaks the camel's back, and every non-essential module compiled in and running by default adds security risk to the web system. I therefore recommend compiling common server software from source. I usually use the following compile parameters; the PHP-related FastCGI module is kept for the optimization instructions later:
./configure \
--prefix=/App/nginx \
--with-http_stub_status_module \
--without-http_auth_basic_module \
--without-http_autoindex_module \
--without-http_browser_module \
--without-http_empty_gif_module \
--without-http_geo_module \
--without-http_limit_conn_module \
--without-http_limit_req_module \
--without-http_map_module \
--without-http_memcached_module \
--without-http_proxy_module \
--without-http_referer_module \
--without-http_scgi_module \
--without-http_split_clients_module \
--without-http_ssi_module \
--without-http_upstream_ip_hash_module \
--without-http_upstream_keepalive_module \
--without-http_upstream_least_conn_module \
--without-http_userid_module \
--without-http_uwsgi_module \
--without-mail_imap_module \
--without-mail_pop3_module \
--without-mail_smtp_module \
--without-poll_module \
--without-select_module \
--with-cc-opt='-O2'
Add or remove compile parameters according to what the website actually uses. For example, if our company needs the ssi module to serve shtml pages, we delete the --without-http_ssi_module line and Nginx builds it by default. Run ./configure --help to see the compile-time help and decide which modules to include.
(2) GCC Compilation parameter optimization [optional]
GCC provides a total of 5 compilation optimization levels:
-O0: No optimization.
-O and -O1: optimizations that reduce the size and execution time of the generated code without significantly increasing compilation time. For large programs, this level noticeably increases memory usage during compilation.
-O2: includes all -O1 optimizations plus those that do not involve a trade-off between object-file size and execution speed. The compiler does not perform loop unrolling or function inlining. This option increases compilation time and the performance of the generated code.
-Os: can be viewed as "-O2.5"; it optimizes for object-file size, enabling all -O2 options that do not increase the size of the generated code plus further size-specific optimizations. It is useful when disk space is tight, but it can cause obscure problems, and with today's large disks it is rarely necessary for ordinary programs.
-O3: enables all -O2 options and adds -finline-functions, -funswitch-loops, -fgcse-after-reload, and others. Compared with -O2 the performance gain is usually insignificant, compilation takes the longest, and the generated code occupies more memory; sometimes performance actually drops, and unpredictable problems (including outright errors) can appear. It is not recommended for most software unless you are certain you need this level.
Raising the GCC optimization level applies to every program compiled with GCC, not just Nginx. To be safe, use -O2, which is also the recommended level for compiling most software. Open the Nginx source file auto/cc/gcc and search for NGX_GCC_OPT; the default is -O. You can either change the line to NGX_GCC_OPT="-O2" or pass --with-cc-opt='-O2' when running ./configure.
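As a sketch of the in-place edit, here is the sed one-liner demonstrated on a stand-in file (in the real source tree you would target auto/cc/gcc itself; the /tmp path here is only for illustration):

```shell
#!/bin/sh
# auto/cc/gcc in the Nginx source tree contains a line like NGX_GCC_OPT="-O".
# Demonstrate the edit on a stand-in copy so nothing real is modified.
printf 'NGX_GCC_OPT="-O"\n' > /tmp/ngx_gcc_demo
sed -i 's/NGX_GCC_OPT="-O"/NGX_GCC_OPT="-O2"/' /tmp/ngx_gcc_demo
cat /tmp/ngx_gcc_demo   # -> NGX_GCC_OPT="-O2"
```

Either approach (editing the file or passing --with-cc-opt) produces the same compiler flags; the configure option is easier to keep in a build script.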
II. Configuration
Application-server performance tuning mainly involves four aspects: CPU usage, memory usage, disk I/O, and network I/O. We start with the Nginx configuration file nginx.conf:
(1) Number of worker processes
Directive: worker_processes
Defines the number of worker processes Nginx uses to serve web requests. The optimal value depends on many factors, including (but not limited to) the number of CPU cores, the number of disks storing data, and the load pattern. If unsure, setting it to the number of available CPU cores is a good start (setting it to auto will attempt to autodetect it). Run ps ax | grep "nginx: worker process" | grep -v grep to check how many worker processes are running. I recommend setting this parameter to the number of logical cores on the server; cat /proc/cpuinfo | grep processor | wc -l reports that total. If you prefer, simply write auto and let Nginx adapt.
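A minimal sketch of the core count check (Linux-only, since it reads /proc/cpuinfo) that prints a matching directive line:

```shell
#!/bin/sh
# Count logical CPU cores and emit a worker_processes directive to match.
cores=$(grep -c ^processor /proc/cpuinfo)
echo "worker_processes ${cores};"
```

On a 4-core machine this prints `worker_processes 4;`, ready to paste into nginx.conf.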
(2) CPU affinity
Directive: worker_cpu_affinity
Binds worker processes to CPU cores. Nginx does not bind CPUs by default. Servers today are generally multi-core, and under heavy concurrency the per-core CPU usage can become severely unbalanced; in that case, consider CPU binding to even out the usage and give full play to the advantage of multiple cores. Programs such as top and htop show per-core usage. Binding example:
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;
(3) Open file limit
Directive: worker_rlimit_nofile
Sets the maximum number of files each Nginx worker may open, bounded by the system's per-process open-file limit; if unset, the system default applies. In theory it should be the shell session's maximum open files divided by the number of worker processes, but since files are not spread perfectly evenly across workers, you can simply set it to the shell session's maximum. Run ulimit -n to check the current limit. On Linux a user process may open at most 1024 files by default, and if the value is too small, "too many open files" errors appear as soon as traffic rises. To raise the limit persistently for all users:
echo "* - nofile 65536" >> /etc/security/limits.conf
To raise the limit for all shells and the processes they start, append this line to /etc/profile:
echo "ulimit -n 65536" >> /etc/profile
To make it take effect immediately in the current shell session:
ulimit -n 65536
(4) The thundering herd problem
Directive: accept_mutex
When accept_mutex is on, worker processes take turns being woken to accept and handle new connections while the others keep sleeping. When it is off, every worker is woken for each new connection; the I/O-multiplexing mechanism chosen by the use directive then decides which worker actually handles it, and the workers that did not get the connection go back to sleep. This wasteful mass wake-up is the so-called "thundering herd" problem, and it is especially pronounced on web servers such as Apache that commonly run hundreds or thousands of processes. For Nginx stability, conservatively set this to on. Setting it to off can improve performance and throughput, but at the cost of other resources, such as more context switches and higher load.
(5) Network I/O model
Directive: use
Defines the event-processing method (the I/O-multiplexing model used to serve many client connections). The most efficient available method is naturally preferred: epoll is recommended on Linux 2.6+ kernels and kqueue on FreeBSD, and Nginx selects the best method automatically at build time.
(6) Number of connections
Directive: worker_connections
Defines the maximum number of simultaneous connections per worker process. This is not limited to client connections; it includes, for example, connections to proxied backend servers. The official documentation also notes that this value cannot exceed worker_rlimit_nofile, so I recommend setting it equal to worker_rlimit_nofile.
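A common rule of thumb from Nginx tuning guides (an estimate, not an exact limit from this article): the ceiling on concurrent clients is roughly worker_processes × worker_connections, halved when Nginx also proxies to a backend, because each client then consumes two connections:

```shell
#!/bin/sh
# Rough capacity estimate (rule of thumb, not an exact limit).
workers=4
worker_connections=65536
echo "static-serving ceiling: $((workers * worker_connections))"    # 262144
echo "proxying ceiling:       $((workers * worker_connections / 2))" # 131072
```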
(7) Open-file cache
Directive: open_file_cache
Enables or disables the cache of open file descriptors; the default is off. I strongly recommend enabling it to avoid the system overhead of repeatedly reopening the same files and to save response time. To enable it, set max=number to the maximum number of cached elements; when the cache overflows, the least recently used (LRU) elements are evicted. The optional inactive=time parameter sets a timeout: elements not accessed within that period are removed from the cache. Example: open_file_cache max=65536 inactive=60s;
Directive: open_file_cache_valid
Sets the interval at which the elements cached by open_file_cache are re-validated.
Directive: open_file_cache_min_uses
Sets the minimum number of times a file must be accessed within the inactive period of open_file_cache for its descriptor to remain cached; below that, it is evicted.
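Putting the three directives together, a sketch of an http-level fragment (the values mirror the template later in this article and are a starting point, not tuned numbers):

```nginx
http {
    # Cache up to 65536 open file descriptors; evict entries idle for 60s.
    open_file_cache          max=65536 inactive=60s;
    # Re-validate cached entries every 80s.
    open_file_cache_valid    80s;
    # One access within "inactive" is enough to keep a descriptor cached.
    open_file_cache_min_uses 1;
}
```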
(8) Logging
Directives: access_log and error_log
Under high concurrency, writing Nginx access and error logs generates a large amount of disk I/O, which in turn affects Nginx performance; the higher the concurrency, the higher the I/O. You can disable the access log, store logs on a tmpfs filesystem, or trim the access-log fields and raise the error-log level to reduce the impact of disk I/O. To disable access logging entirely, use access_log off;. If logs must be kept, rotating them daily, hourly, or at some other interval also reduces I/O; the effect may not be large, but the much smaller files are far easier to view, archive, and analyze. In production I recommend setting the error-log level to error or crit. For customizing access-log fields and error-log levels, see the official documentation or other references.
(9) Hide the Nginx version number
Directive: server_tokens
Enables or disables emitting the Nginx version in the "Server" response header. I recommend off: suppressing the version has a minor performance benefit, but the main motive is security, so that attackers cannot match the version number against known vulnerabilities.
(10) Compression
Directive: gzip
Gzip compression is off by default in Nginx, and many people assume that enabling it only adds CPU time and load. After testing on our website, however, we found that although disabling gzip reduced CPU work and server response time, the overall page response time got longer, because the transfer time of static files such as js, css, xml, json, and html exceeded the server time saved. With gzip on, file sizes shrink by roughly 75%, which saves a great deal of bandwidth and improves the overall page response time, so I recommend enabling it. Of course, not every static file should be compressed. Images, PDF files, and video are already stored in compressed formats; gzipping them again yields little or even negative compression, the files can actually grow, and when the compression time exceeds the transfer time saved, it is not worth it. Whether to enable compression for a site, and which file types to include, can be verified with network analysis tools such as HttpWatch or Firebug.
Directive: gzip_comp_level
Sets the compression level, from 1 to 9. The higher the number, the better the compression ratio but the more CPU and load it costs. Level 9 undoubtedly gives the highest ratio and the smallest files, but it consumes the most CPU, produces the highest load, and is the slowest, sometimes intolerably so for users. Levels 1-4 are generally a good compromise; our company's website uses level 2.
Directive: gzip_min_length
Sets the minimum response size, in bytes, below which responses are not compressed. Our site uses 1k: compressing very small files is pointless, because the extra CPU time and the transfer time saved by the smaller file roughly cancel out, and the overall response time may even increase.
Directive: gzip_types
Sets the MIME types to compress. The mime.types file in the Nginx conf directory lists the types Nginx knows and their file extensions; text/html (extensions html, htm, shtml) is compressed by default. Recommended configuration: gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml+rss text/javascript;
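Collected into one http-level fragment (values as recommended above; a sketch to adjust per site):

```nginx
http {
    gzip            on;
    gzip_min_length 1k;   # skip responses smaller than 1 KB
    gzip_comp_level 2;    # compromise between ratio and CPU cost
    gzip_types      text/plain text/css application/json
                    application/x-javascript text/xml
                    application/xml+rss text/javascript;
}
```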
(11) Browser caching
Directive: expires
Sets the "Expires" and "Cache-Control" headers in HTTP responses; "Expires" generally works together with "Last-Modified". With a sensible expires configuration, the first time a browser visits a page its static files are downloaded into the local cache. On the second and later visits to the same URL, the browser sends a conditional request carrying "If-Modified-Since" with the cached file's timestamp; the server compares it against the file's timestamp on the server, and if the file is unchanged it returns HTTP 304 and the browser uses its local copy, while a changed file is resent in full. This avoids retransmitting unchanged content, reduces server load, saves bandwidth, and speeds up user access. The directive takes a number with a time unit, the cache lifetime; -1 means already expired, i.e. never cached. I strongly recommend configuring expires with carefully analyzed lifetimes. Part of our company's Nginx configuration:
location ~ .+\.(gif|jpg|jpeg|png|bmp|swf)$
{
    expires 30d;
}
location ~ .+\.(js|css|xml|javascript|txt|csv)$
{
    expires 30d;
}
You can also place static files in a fixed directory and apply location and expires to that directory. For example:
location /static/
{
    expires 30d;
}
(12) Persistent connections
Directive: keepalive_timeout
Enables HTTP keep-alive so that an established TCP connection is reused for subsequent requests and responses, saving the time and resource overhead of re-establishing TCP connections. I recommend enabling keep-alive when the site content is largely static; if the content is dynamic and cannot be made static, consider disabling it. The directive takes a number with a time unit: a positive value enables keep-alive, 0 disables it.
(13) Reduce the number of HTTP requests
A web page contains many static elements: images, scripts, style sheets, Flash, and so on. The biggest win from reducing the number of requests is a shorter first-visit load time, and merging files of the same type into one file reduces the request count. Strictly speaking this belongs to web front-end optimization, and the relevant static files should be planned and managed by front-end engineers rather than operations engineers, but Nginx can also merge files by adding Alibaba's concat module or Google's PageSpeed module. Our company does not use merging; see the relevant documentation online for installation and configuration. The concat module's source code is published by Alibaba on GitHub.
(14) PHP
Nginx cannot interpret PHP files itself; it passes requests through the FastCGI interface to the PHP interpreter for execution and returns the result to the client. PHP tuning is outside the scope of this article, but Nginx can enable FastCGI caching to improve performance.
Directive: fastcgi_temp_path
Defines the temporary path where FastCGI cache files are first written.
Directive: fastcgi_cache_path
Defines where FastCGI cache files are stored, along with other parameters. Cached responses are stored as binary files whose names and keys are the MD5 of the request URL. A cache file is first written to the temporary directory given by fastcgi_temp_path and then moved into the fastcgi_cache_path directory by a rename operation. levels sets the subdirectory hierarchy; keys_zone names a shared-memory zone and sizes it for holding cache keys and metadata; inactive sets how long unaccessed entries are kept before removal; max_size caps the disk space used by the cache, evicting the least recently used entries when exceeded. I recommend putting fastcgi_temp_path and fastcgi_cache_path on the same partition, because a rename within one partition is more efficient. Example:
fastcgi_temp_path /tmp/fastcgi_temp;
fastcgi_cache_path /tmp/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=30m max_size=1g;
In this example, /tmp/fastcgi_temp is the FastCGI temporary directory and /tmp/fastcgi_cache the final cache directory; levels=1:2 creates 16 first-level subdirectories, each containing 256 second-level subdirectories; the shared-memory zone is named cache_fastcgi and occupies 128 MB; entries expire after 30 minutes without access; and the cache may occupy at most 1 GB on disk.
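To see where the 16 and 256 come from: Nginx names each cache file with the MD5 of the cache key and builds the directory path from the tail of that hash, one hex character for the first level (16 possibilities) and the next two for the second (256 possibilities). A sketch with a hypothetical key value:

```shell
#!/bin/sh
# Reproduce how Nginx maps a cache key to a file path under levels=1:2.
key='GET://www.example.com/index.php'   # hypothetical fastcgi_cache_key value
md5=$(printf '%s' "$key" | md5sum | cut -d' ' -f1)
l1=$(printf '%s' "$md5" | tail -c 1)               # last hex char: 16 choices
l2=$(printf '%s' "$md5" | tail -c 3 | head -c 2)   # previous two: 256 choices
echo "/tmp/fastcgi_cache/$l1/$l2/$md5"
```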
Directive: fastcgi_cache_key
Defines the FastCGI cache key. This must be configured when FastCGI caching is enabled; otherwise every PHP request would be answered with the cached result of whichever PHP URL was accessed first.
Directive: fastcgi_cache_valid
Sets the caching time for specific HTTP status codes.
Directive: fastcgi_cache_min_uses
Sets how many requests for the same URL are needed before the response is cached.
Directive: fastcgi_cache_use_stale
Specifies the cases in which stale cached data may be served when an error occurs while communicating with the FastCGI server.
Directive: fastcgi_cache
Names the shared-memory zone used for caching.
Below is the nginx.conf template I often use; modify it as needed:
user nginx;
worker_processes auto;
error_log logs/error.log error;
pid logs/nginx.pid;
worker_rlimit_nofile 65536;
events
{
    use epoll;
    accept_mutex off;
    worker_connections 65536;
}
http
{
    include mime.types;
    default_type text/html;
    charset UTF-8;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 4k;
    large_client_header_buffers 4 32k;
    client_max_body_size 8m;
    open_file_cache max=65536 inactive=60s;
    open_file_cache_valid 80s;
    open_file_cache_min_uses 1;
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/access.log main;
    sendfile on;
    server_tokens off;
    fastcgi_temp_path /tmp/fastcgi_temp;
    fastcgi_cache_path /tmp/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=30m max_size=1g;
    fastcgi_cache_key $request_method://$host$request_uri;
    fastcgi_cache_valid 200 302 1h;
    fastcgi_cache_valid 301 1d;
    fastcgi_cache_valid any 1m;
    fastcgi_cache_min_uses 1;
    fastcgi_cache_use_stale error timeout http_500 http_503 invalid_header;
    keepalive_timeout 60;
    gzip on;
    gzip_min_length 1k;
    gzip_buffers 4 64k;
    gzip_http_version 1.1;
    gzip_comp_level 2;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml+rss text/javascript;
    server
    {
        listen 80;
        server_name localhost;
        index index.html;
        root /App/web;
        location ~ .+\.(php|php5)$
        {
            fastcgi_pass unix:/tmp/php.sock;
            fastcgi_index index.php;
            include fastcgi.conf;
            fastcgi_cache cache_fastcgi;
        }
        location ~ .+\.(gif|jpg|jpeg|png|bmp|swf|txt|csv|doc|docx|xls|xlsx|ppt|pptx|flv)$
        {
            expires 30d;
        }
        location ~ .+\.(js|css|html|xml)$
        {
            expires 30d;
        }
        location /nginx-status
        {
            stub_status on;
            allow 192.168.1.0/24;
            allow 127.0.0.1;
            deny all;
        }
    }
}
III. Kernel
Some Linux kernel parameter defaults are not suited to high concurrency. Parameters can generally be adjusted through the /proc filesystem or, to persist permanently, in the /etc/sysctl.conf configuration file; changes made via /proc revert to the defaults after a reboot, so that route is not recommended for lasting tuning. Linux kernel optimization mainly concerns the network stack, filesystem, and memory. Below is my usual kernel tuning configuration:
grep -q "net.ipv4.tcp_max_tw_buckets" /etc/sysctl.conf || cat >> /etc/sysctl.conf <<EOF
########################################
net.core.rmem_default = 262144
net.core.rmem_max = 16777216
net.core.wmem_default = 262144
net.core.wmem_max = 16777216
net.core.somaxconn = 262144
net.core.netdev_max_backlog = 262144
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_max_tw_buckets = 10000
net.ipv4.ip_local_port_range = 1024 65500
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_mem = 786432 1048576 1572864
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
vm.swappiness = 10
EOF
sysctl -p
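The grep guard in front of the heredoc makes the append idempotent: the block is written only if its sentinel key is absent, so re-running the script cannot duplicate it. A sketch of the same pattern on a throwaway file:

```shell
#!/bin/sh
# Append a settings block only if its sentinel key is absent (idempotent).
conf=/tmp/sysctl_demo.conf
: > "$conf"
for i in 1 2; do    # run twice to show the second pass is a no-op
    grep -q "net.ipv4.tcp_max_tw_buckets" "$conf" || cat >> "$conf" <<EOF
net.ipv4.tcp_max_tw_buckets = 10000
EOF
done
wc -l < "$conf"    # -> 1 (appended only once)
```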
IV. Architecture
The greatest strengths of Nginx are static-file handling and proxy forwarding, with support for layer-7 load balancing and fault isolation. Separating static and dynamic content is the inevitable next step once a website grows to a certain scale. It is best to split static requests onto a separate domain name, which simplifies management and makes future CDN adoption straightforward. When one Nginx server can no longer keep up, consider putting LVS load balancing in front of Nginx, or an F5 or other hardware load balancer (expensive, suited to deep-pocketed companies), so that multiple Nginx servers share the site's requests. You can also cache static files with Varnish or Squid to get CDN-like behavior. Recent Nginx versions can read and write Memcached directly; the relevant modules can be selected at compile time, which saves the round trip to a dynamic application server such as PHP or JSP, improves efficiency, and reduces the load on the dynamic servers.