Nginx optimization

Source: Internet
Author: User

Optimizing Nginx is important for making full use of its efficiency and stability. The sections below cover optimization at three levels: compilation and installation, third-party plug-ins, and the system kernel.

Compilation and Installation Process Optimization

1. Reduce the size of Nginx compiled files

By default, Nginx is compiled in debug mode, which inserts a great deal of tracing and assertion code; the resulting binary is several megabytes. With debug mode disabled before compilation, the binary is only a few hundred kilobytes. Therefore, modify the source to disable debug mode before compiling.

In the Nginx source tree, open the file auto/cc/gcc under the source directory and find the following lines:

  # debug
  CFLAGS="$CFLAGS -g"

Comment out or delete the CFLAGS line to disable debug mode.
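If you script your builds, this edit can be automated. Below is a sketch that comments out the debug flag with sed, demonstrated on a throwaway copy of the relevant auto/cc/gcc lines so no real source tree is touched; run it against the real file only after backing it up.

```shell
# Work on a throwaway copy of the two relevant lines from auto/cc/gcc.
mkdir -p /tmp/ngx_demo/auto/cc
printf '# debug\nCFLAGS="$CFLAGS -g"\n' > /tmp/ngx_demo/auto/cc/gcc

# Prefix the CFLAGS debug line with '#' so debug mode is disabled.
sed -i 's/^CFLAGS="\$CFLAGS -g"/#&/' /tmp/ngx_demo/auto/cc/gcc

cat /tmp/ngx_demo/auto/cc/gcc
```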

2. Specify the CPU type for compilation optimization

When compiling Nginx, the default GCC optimization flag is "-O". The following two configure parameters optimize the GCC compilation:

  --with-cc-opt="-O3"

  --with-cpu-opt=CPU    # compile for a specific CPU; valid values include pentium, pentiumpro, pentium3, pentium4, athlon, opteron, amd64, sparc32, sparc64, ppc64

To determine the CPU type, run the following command:

  cat /proc/cpuinfo | grep "model name"
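The mapping from the reported model name to a --with-cpu-opt value can be sketched as below; the sample string and the case branches are illustrative assumptions, not an exhaustive table.

```shell
# Sample "model name" line as it would come from /proc/cpuinfo.
line='model name : Intel(R) Pentium(R) 4 CPU 3.00GHz'

# Map the model string to a --with-cpu-opt value (illustrative subset).
case "$line" in
  *'Pentium(R) 4'*) cpu=pentium4 ;;
  *Opteron*)        cpu=opteron ;;
  *Athlon*)         cpu=athlon ;;
  *)                cpu='' ;;    # unknown: omit --with-cpu-opt entirely
esac

echo "--with-cpu-opt=$cpu"
```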

3. Hide the version number and software name

  vi src/core/nginx.h

  #define NGINX_VERSION      "7.0"
  #define NGINX_VER          "IIS/" NGINX_VERSION    /* change to whatever software name you want displayed */
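These two defines can also be patched with sed instead of editing by hand. A sketch on a mock copy of the two macros from src/core/nginx.h (the original version string shown is illustrative):

```shell
# Throwaway copy of the two macros from src/core/nginx.h.
cat > /tmp/ngx_demo_nginx.h <<'EOF'
#define NGINX_VERSION      "1.2.0"
#define NGINX_VER          "nginx/" NGINX_VERSION
EOF

# Spoof the version number and the advertised software name.
sed -i -e 's/\(NGINX_VERSION *\)"[^"]*"/\1"7.0"/' \
       -e 's|"nginx/"|"IIS/"|' /tmp/ngx_demo_nginx.h

cat /tmp/ngx_demo_nginx.h
```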

Also modify the Server field in the HTTP response header to prevent the specific version from being displayed.

Common HTTP headers are those supported by both request and response messages, such as Cache-Control, Connection, Date, Pragma, Transfer-Encoding, and Upgrade. Both sides need to support extensions of the common headers; a common header that is not supported is generally treated as an entity header. In other words, some devices or software can extract the server information and some cannot, so if you are going to hide it, hide it thoroughly!

  vi src/http/ngx_http_header_filter_module.c

  static char ngx_http_server_string[] = "Server: IIS" CRLF;    /* this is the main line to change */

Returned HTTP error pages

Sometimes, when a page or program encounters an error, Nginx returns the corresponding error page on our behalf. That page includes the nginx name and version number, so we hide it here as well.

  vi src/http/ngx_http_special_response.c

  static u_char ngx_http_error_tail[] =
  "<hr><center>IIS</center>" CRLF
  "</body>" CRLF
  "</html>" CRLF
  ;
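This edit, too, can be scripted. A sketch using sed on a mock copy of the relevant lines from ngx_http_special_response.c:

```shell
# Throwaway copy of the error-tail string from ngx_http_special_response.c.
cat > /tmp/ngx_demo_special.c <<'EOF'
static u_char ngx_http_error_tail[] =
"<hr><center>nginx</center>" CRLF
"</body>" CRLF
"</html>" CRLF
;
EOF

# Replace the advertised name in the error-page footer.
sed -i 's|<center>nginx</center>|<center>IIS</center>|' /tmp/ngx_demo_special.c

grep 'center' /tmp/ngx_demo_special.c
```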

Use TCMalloc to optimize Nginx Performance

TCMalloc is an open-source tool developed by Google. Compared with the standard glibc malloc, TCMalloc allocates memory much more efficiently and quickly, which improves server performance under high concurrency and reduces system load.

You need to install google-perftools and libunwind (32-bit systems do not need libunwind).

Install libunwind Library

Select an appropriate release from http://download.savannah.gnu.org/releases/libunwind/.

  wget http://download.savannah.gnu.org/releases/libunwind/libunwind-0.99-beta.tar.gz

  CFLAGS=-fPIC ./configure && make CFLAGS=-fPIC install

Install google-perftools

  wget https://github.com/gperftools/gperftools/releases/download/gperftools-2.5/gperftools-2.5.tar.gz 

  ./configure && make && make install

  echo "/usr/local/lib" > /etc/ld.so.conf.d/usr_local_lib.conf

  ldconfig

Then recompile Nginx, adding the --with-google_perftools_module option to ./configure. After installation, create a thread directory for google-perftools under /tmp/tcmalloc and grant it permission 777:

  mkdir -p /tmp/tcmalloc
  chmod 0777 /tmp/tcmalloc

Then add the google-perftools profile directive to the main (top-level) section of nginx.conf:

  google_perftools_profiles /tmp/tcmalloc;

Restart Nginx.

To verify that it took effect, run lsof -n | grep tcmalloc; each Nginx worker process should show an open tcmalloc file.

Nginx kernel optimization parameters

Kernel parameter optimization for Nginx mainly applies to Nginx applications running on Linux. The following values are for reference only:

  net.ipv4.tcp_max_tw_buckets = 6000

The maximum number of TIME-WAIT sockets the system keeps. The default is 180000.

  net.ipv4.ip_local_port_range = 1024 65000

The range of local ports the system may open.
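As a quick sanity check: each outbound connection to a given upstream consumes one local port from this range, so the size of the range bounds that concurrency. A sketch of the arithmetic:

```shell
# Number of usable local ports in the range 1024-65000 (inclusive).
low=1024
high=65000
echo $(( high - low + 1 ))   # 63977
```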

  net.ipv4.tcp_tw_recycle = 1

Enable fast recycling of TIME-WAIT sockets.

  net.ipv4.tcp_tw_reuse = 1

Enable reuse: allow TIME-WAIT sockets to be reused for new TCP connections.

  net.ipv4.tcp_syncookies = 1

Enable SYN cookies: when the SYN backlog queue overflows, handle connections with cookies.

  net.core.somaxconn = 262144

By default, the backlog passed to the listen() call in web applications is capped by the kernel parameter net.core.somaxconn at 128, while nginx defines NGX_LISTEN_BACKLOG as 511 by default, so this value needs to be raised.
You can also add a backlog parameter to the listen 80 directive in the nginx configuration file, but it cannot exceed the value set in the kernel, as shown below:

  listen 80 backlog=65533;

  net.core.netdev_max_backlog = 262144

The maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.

  net.ipv4.tcp_max_orphans = 262144

The maximum number of TCP sockets in the system that are not attached to any user file handle. If this number is exceeded, orphaned connections are immediately reset and a warning is printed. This limit exists only to prevent simple DoS attacks; do not rely on it too much or artificially lower it. If anything, increase this value (along with memory).

  net.ipv4.tcp_max_syn_backlog = 262144

The maximum number of remembered connection requests that have not yet received an acknowledgment from the client. The default is 1024 for systems with 128 MB of memory, and 128 for low-memory systems.

  net.ipv4.tcp_timestamps = 0

Timestamps help avoid sequence-number wraparound. A 1 Gbit/s link is certain to encounter previously used sequence numbers, and the timestamp lets the kernel accept such "abnormal" packets. It is disabled here.

  net.ipv4.tcp_synack_retries = 1

For a connection initiated by the remote end, the kernel needs to send a SYN carrying an ACK for the earlier SYN, i.e., the second step of the three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.

  net.ipv4.tcp_syn_retries = 1

The number of SYN packets the kernel sends before giving up on establishing the connection.

  net.ipv4.tcp_fin_timeout = 1

If the socket was closed by the local end, this parameter determines how long it stays in the FIN-WAIT-2 state. The peer may misbehave and never close its side of the connection, or even crash unexpectedly. The default is 60 seconds; the 2.2 kernel's usual value was 180 seconds. You can keep that setting, but remember that even on a lightweight web server there is a risk of memory exhaustion from a large number of dead sockets. FIN-WAIT-2 is less dangerous than FIN-WAIT-1, because each such socket can consume at most a small amount of memory, but such sockets live longer.

  net.ipv4.tcp_keepalive_time = 30

How often TCP sends keepalive messages when keepalive is enabled. The default is 2 hours; it is set to 30 seconds here.

A complete kernel optimization configuration

  net.ipv4.ip_forward = 0
  net.ipv4.conf.default.rp_filter = 1
  net.ipv4.conf.default.accept_source_route = 0
  kernel.sysrq = 0
  kernel.core_uses_pid = 1
  net.ipv4.tcp_syncookies = 1
  kernel.msgmnb = 65536
  kernel.msgmax = 65536
  kernel.shmmax = 68719476736
  kernel.shmall = 4294967296
  net.ipv4.tcp_max_tw_buckets = 6000
  net.ipv4.tcp_sack = 1
  net.ipv4.tcp_window_scaling = 1
  net.ipv4.tcp_rmem = 4096 87380 4194304
  net.ipv4.tcp_wmem = 4096 16384 4194304
  net.core.wmem_default = 8388608
  net.core.rmem_default = 8388608
  net.core.rmem_max = 16777216
  net.core.wmem_max = 16777216
  net.core.netdev_max_backlog = 262144
  net.core.somaxconn = 262144
  net.ipv4.tcp_max_orphans = 3276800
  net.ipv4.tcp_max_syn_backlog = 262144
  net.ipv4.tcp_timestamps = 0
  net.ipv4.tcp_synack_retries = 1
  net.ipv4.tcp_syn_retries = 1
  net.ipv4.tcp_tw_recycle = 1
  net.ipv4.tcp_tw_reuse = 1
  net.ipv4.tcp_mem = 94500000 915000000 927000000
  net.ipv4.tcp_fin_timeout = 1
  net.ipv4.tcp_keepalive_time = 30
  net.ipv4.ip_local_port_range = 1024 65000
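Applying the file requires root (sysctl -p /etc/sysctl.conf), but the values can be sanity-checked without root by parsing a saved copy. A sketch (the path and the two sample entries are illustrative):

```shell
# Two sample entries in sysctl.conf format.
cat > /tmp/sysctl_demo.conf <<'EOF'
net.core.somaxconn = 262144
net.ipv4.tcp_fin_timeout = 1
EOF

# Extract the value configured for a given key.
somaxconn=$(awk -F' *= *' '$1 == "net.core.somaxconn" { print $2 }' /tmp/sysctl_demo.conf)
echo "$somaxconn"
```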

A simple nginx Optimization Configuration File

  user  www www;
  worker_processes 8;
  worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000;
  error_log  /www/log/nginx_error.log  crit;
  pid        /usr/local/nginx/nginx.pid;
  worker_rlimit_nofile 204800;

  events
  {
    use epoll;
    worker_connections 204800;
  }

  http
  {
    include       mime.types;
    default_type  application/octet-stream;
    charset  utf-8;

    server_names_hash_bucket_size 128;
    client_header_buffer_size 2k;
    large_client_header_buffers 4 4k;
    client_max_body_size 8m;

    sendfile on;
    tcp_nopush     on;
    keepalive_timeout 60;

    fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2
                       keys_zone=TEST:10m inactive=5m;
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 16k;
    fastcgi_buffers 16 16k;
    fastcgi_busy_buffers_size 16k;
    fastcgi_temp_file_write_size 16k;
    fastcgi_cache TEST;
    fastcgi_cache_valid 200 302 1h;
    fastcgi_cache_valid 301 1d;
    fastcgi_cache_valid any 1m;
    fastcgi_cache_min_uses 1;
    fastcgi_cache_use_stale error timeout invalid_header http_500;

    open_file_cache max=204800 inactive=20s;
    open_file_cache_min_uses 1;
    open_file_cache_valid 30s;

    tcp_nodelay on;

    gzip on;
    gzip_min_length  1k;
    gzip_buffers     4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 2;
    gzip_types       text/plain application/x-javascript text/css application/xml;
    gzip_vary on;

    server
    {
      listen       80 backlog=65533;
      server_name  www.linuxyan.com;
      index index.php index.htm;
      root  /www/html/;

      location /status
      {
        stub_status on;
      }

      location ~ .*\.(php|php5)?$
      {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fcgi.conf;
      }

      location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|js|css)$
      {
        expires      30d;
      }

      log_format  access  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" $http_x_forwarded_for';
      access_log  /www/log/access.log  access;
    }
  }

Several FastCGI-related directives

  fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=TEST:10m inactive=5m;

This directive sets a path for the FastCGI cache, the directory hierarchy levels, the name and size of the key zone, and the inactivity time after which unused entries are removed.

  fastcgi_connect_timeout 300;

Specifies the timeout for connecting to the backend FastCGI server.

  fastcgi_send_timeout 300;

The timeout for sending a request to the FastCGI backend, counted after the connection handshake has completed.

  fastcgi_read_timeout 300;

The timeout for receiving the FastCGI response, counted after the connection handshake has completed.

  fastcgi_buffer_size 16k;

Specifies the buffer size for reading the first part of the FastCGI response, which is the response header. Here a 16 KB buffer is used; in practice response headers are usually very small (under 1 KB). You can generally set this to the per-buffer size given in the fastcgi_buffers directive, since a buffer of that size will be allocated for the header anyway.

  fastcgi_buffers 16 16k;

Specifies how many local buffers, and of what size, are used to buffer FastCGI responses. As configured above, if a PHP script produces a 256 KB page, 16 buffers of 16 KB are allocated for it; if the response is larger than 256 KB, the portion beyond 256 KB is written to the path specified by fastcgi_temp. That is an unwise outcome for server load, since data is processed faster in memory than on disk, so set this value around the median page size produced by the PHP scripts on your site. For example, if most pages are around 256 KB, you could set this to 16 16k, 4 64k, or 64 4k. The last two choices are clearly worse: with 4 64k, a page of only 32 KB still ties up one whole 64 KB buffer; with 64 4k it is allocated eight 4 KB buffers; while with 16 16k it gets two 16 KB buffers, which seems more reasonable.
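The sizing arithmetic in that paragraph can be sketched directly; buffers_used below is a hypothetical helper, and the page sizes are the illustrative numbers from the text.

```shell
# Buffers consumed by a response: ceil(page_kb / buf_kb), each allocated whole.
buffers_used() {
  page_kb=$1
  buf_kb=$2
  echo $(( (page_kb + buf_kb - 1) / buf_kb ))
}

buffers_used 256 16   # a 256 KB page fills 16 buffers of 16k
buffers_used 32 4     # a 32 KB page uses 8 buffers under "64 4k"
buffers_used 32 16    # a 32 KB page uses 2 buffers under "16 16k"
```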

  fastcgi_busy_buffers_size 32k;

This directive limits the total size of buffers that can be busy sending the response to the client while the response has not yet been fully read; its default is twice the fastcgi_buffers buffer size.

  fastcgi_temp_file_write_size 32k;

The size of the data block to be used when writing fastcgi_temp_path. The default value is twice that of fastcgi_buffers.

  fastcgi_cache TEST;

Enables the FastCGI cache and assigns it a name. I personally find the cache very useful: it can effectively reduce CPU load and prevent 502 errors. However, it can also cause many problems because it caches dynamic pages, so whether to use it depends on your own needs.

  fastcgi_cache_valid 200 302 1h;  fastcgi_cache_valid 301 1d;  fastcgi_cache_valid any 1m;

Specifies the cache time for each response code. In the example above, 200 and 302 responses are cached for one hour, 301 responses for one day, and everything else for one minute.

  fastcgi_cache_min_uses 1;

The minimum number of requests within the inactive period of fastcgi_cache_path before a response is cached. In the example above, if a cached file is not used within five minutes, it is removed.

  fastcgi_cache_use_stale error timeout invalid_header http_500;

This directive tells nginx under which conditions it may serve a stale cached response instead of failing: on backend errors, timeouts, invalid headers, or HTTP 500 responses. Those are the FastCGI parameters in nginx. FastCGI itself also has configuration worth optimizing; if you use php-fpm to manage FastCGI, you can modify the following values in its configuration file:

  60

The number of concurrent requests handled at once, i.e., at most 60 child processes will be started to handle concurrent connections (php-fpm's max_children setting).

  102400

The maximum number of open files (php-fpm's rlimit_files setting).

  204800

The maximum number of requests a child process serves before it is respawned (php-fpm's max_requests setting).

 

  
