nginx.conf file details and optimization

Tags: epoll, sendfile

# Info: The conf for nginx
# Author: dingtm
# CTime: 2010.07.01
user www www; # user and group that run the nginx worker processes
worker_processes 4; # number of worker processes; it is recommended to match the number of CPU cores (or a multiple of it); each process uses roughly 10 MB of memory
error_log /data/logs/nginx/error.log crit;
pid /elain/apps/nginx.pid;
worker_rlimit_nofile 65535; # maximum number of file handles a worker may open; keep it consistent with the system limit (check with ulimit -n, raise with ulimit -SHn 65535)
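# Not in the original article: to make the higher descriptor limit persistent on most Linux systems,
# the nofile limit can also be raised in /etc/security/limits.conf, for example:
# *  soft  nofile  65535
# *  hard  nofile  65535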
events {
use epoll; # use the epoll I/O event model
worker_connections 65535; # maximum number of connections per worker process; limited by the system's open-file limit, which can be checked with ulimit -n
# max_clients = worker_processes * worker_connections (4 * 65535 = 262140 here)
}
http {
include mime.types; # mime.types maps file extensions to MIME types, for example:
# types {
#     text/html  html;
#     image/gif  gif;
#     image/jpeg jpg;
#     image/png  png;
# }
default_type application/octet-stream; # default MIME type is a binary stream; without it, unparsed content (for example PHP when the PHP handler is not loaded) is offered to the browser as a download
server_names_hash_bucket_size 128; # no unit suffix here; required when configuring many virtual hosts, otherwise nginx may fail to start or to pass the config test.
# Together with server_names_hash_max_size, this controls the hash table that stores server names. The bucket size should be a multiple of the processor cache line size;
# if it equals the cache line size, a lookup touches memory at most twice in the worst case: once to locate the bucket, once to find the key inside it.
# If nginx complains about hash max size or hash bucket size, increase server_names_hash_max_size first.
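# Illustrative addition (not in the original config): if nginx warns that it could not build the
# server_names hash, raising the table size is usually enough, e.g.:
# server_names_hash_max_size 512;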
client_header_buffer_size 128k; # buffer size for the client request header; set it according to the system page size, which can be obtained with getconf PAGESIZE
large_client_header_buffers 4 128k; # number and size of the larger header buffers (default 4 8k): here 4 buffers of 128 KB each. When the request URI or header is too large, nginx returns 414 Request-URI Too Large or 400 Bad Request;
# this is usually caused by oversized cookies (other header fields are fairly fixed in size), so increasing these two values also raises the cookie size the server will accept.
client_max_body_size 8m; # maximum allowed size of the HTTP request body; larger requests get 413 Request Entity Too Large
open_file_cache max=65535 inactive=20s; # max sets the number of cached entries (recommended to match the number of open files); inactive is how long a file can go unrequested before its cache entry is dropped
open_file_cache_valid 30s; # how often to revalidate the cached entries
open_file_cache_min_uses 1; # minimum number of hits within the inactive period of open_file_cache for a descriptor to stay cached; a file not hit at least once within that time is evicted
server_tokens off; # hide the nginx version number on error pages and in the Server response header
# improve file transmission performance
sendfile on; # use the kernel sendfile() call
tcp_nopush on; # enable TCP_CORK on Linux; only effective when sendfile is on; reduces the number of packets sent
keepalive_timeout 60; # keep-alive timeout in seconds
tcp_nodelay on; # enable TCP_NODELAY; only effective for keep-alive connections
fastcgi_connect_timeout 300; # timeout for establishing a connection to the FastCGI backend
fastcgi_send_timeout 300; # timeout for sending a request to FastCGI, counted after the connection has been established
fastcgi_read_timeout 300; # timeout for receiving a response from FastCGI, counted after the connection has been established
fastcgi_buffer_size 64k; # buffer for the first part of the FastCGI response; can be set to the buffer size used in fastcgi_buffers
fastcgi_buffers 16 16k; # number and size of the buffers used to read the FastCGI response
fastcgi_busy_buffers_size 128k; # recommended to be twice the fastcgi_buffers buffer size
fastcgi_temp_file_write_size 128k; # size of the data blocks written to fastcgi_temp_path; defaults to twice the buffer size; if these buffers are set too small, 502 Bad Gateway may appear under load
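# Worked example (not in the original article): with the settings above, up to 16 * 16k = 256 KB of a
# FastCGI response can be buffered in memory per request, in addition to the 64k that
# fastcgi_buffer_size reserves for the first part (headers) of the response.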
fastcgi_cache dingtm; # enable the FastCGI cache and give it a zone name; this reduces CPU load and helps prevent 502 errors
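# Note (not in the original article): the "dingtm" zone must be declared with fastcgi_cache_path at the
# http level, and a cache key should be defined; the path and sizes below are illustrative assumptions:
# fastcgi_cache_path /data/cache/nginx levels=1:2 keys_zone=dingtm:128m inactive=1d max_size=1g;
# fastcgi_cache_key "$scheme$request_method$host$request_uri";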
fastcgi_cache_valid 200 302 1h; # cache responses with these codes for 1 hour
fastcgi_cache_valid 301 1d; # cache 301 responses for 1 day
fastcgi_cache_valid any 1m; # cache everything else for 1 minute
fastcgi_cache_min_uses 1; # minimum number of requests within the inactive period of fastcgi_cache_path before a response is cached
gzip on; # enable gzip compression of the output stream on the fly
gzip_min_length 1k; # minimum response length (taken from the Content-Length header) to compress; compressing responses smaller than 1k tends to make them larger
gzip_buffers 4 16k; # 4 buffers of 16k for the compressed result stream
gzip_http_version 1.1;
gzip_comp_level 3; # compression level 1-9: 1 is fastest with the lowest ratio, 9 compresses the most but is slowest and uses the most CPU
gzip_types text/plain application/x-javascript text/css application/xml; # MIME types to compress
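# Note (not in the original config): text/html is always compressed when gzip is on, so it does not
# need to be listed in gzip_types; a common companion setting is:
# gzip_vary on; # send "Vary: Accept-Encoding" so proxies cache compressed and uncompressed variants separately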
include vhosts/*.conf; # virtual host configuration files
}
# Virtual host example
server {
listen 80;
server_name www.elain.org; # multiple domain names are separated by spaces
index index.php index.html index.shtml;
root /elain/data/htdocs/elain;
# limit_conn connlimit 20; # allow at most 20 connections from a single IP; beyond that, 503 Service Unavailable is returned to curb malicious connections
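# Note (not in the original article): the "connlimit" zone referenced above has to be defined at the http
# level; on current nginx versions that looks like the following (older releases used limit_zone instead):
# limit_conn_zone $binary_remote_addr zone=connlimit:10m;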
access_log /elain/logs/nginx/access_www.elain.org.log access;
error_log /elain/logs/nginx/error_www.elain.org.log;
location / {
ssi on; # enable SSI under the web document root
ssi_types text/html;
ssi_silent_errors off; # set to on to suppress the error message that is output when SSI processing fails
}
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
access_log off;
expires 30d;
}
location ~ .*\.(js|css)?$ {
expires 1h;
add_header Cache-Control private;
}
location ~ /\.ht {
deny all; # block access to .htaccess / .htpasswd files
}
location /NginxStatus { # URL for viewing the nginx status page
stub_status on;
access_log off;
auth_basic "NginxStatus"; # realm shown in the login prompt
auth_basic_user_file conf/.htpasswd; # password file protecting the page; enter the user name and password in the login box to view it
}
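# Note (not in the original article): stub_status requires nginx to be built with
# --with-http_stub_status_module; access to the location above can additionally be limited by IP, e.g.:
# allow 127.0.0.1;
# deny all;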
location ~ .*\.(php|php5)?$ { # match files with the .php or .php5 suffix
# fastcgi_pass unix:/tmp/php-cgi.sock; # pass to FastCGI over a unix socket
fastcgi_pass 127.0.0.1:9000; # pass to FastCGI over TCP port 9000
fastcgi_index index.php;
include fastcgi_params; # include the FastCGI parameter definitions
# fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
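# Note (not in the original article): older fastcgi_params files do not set SCRIPT_FILENAME; if PHP
# returns blank pages or "No input file specified", uncomment the line above (or include fastcgi.conf,
# which already defines it).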
}
}
This article is from the "elain technical blog".
