Nginx server configuration details


A directive-by-directive walkthrough of an nginx configuration file.


user nginx;

# The user worker processes run as


worker_processes 8;

# Number of worker processes; tune to the hardware, usually equal to or greater than the number of CPU cores


error_log logs/nginx_error.log crit;

# Error log path and level


pid logs/nginx.pid;

# Where the PID file is written


worker_rlimit_nofile 204800;

# Maximum number of file descriptors a worker process may open

This directive sets the limit on the number of open file descriptors for an nginx process. The theoretical value is the system's maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly across workers, so it is best to keep the value the same as ulimit -n.

On the Linux 2.6 kernel the open-file limit is 65535, so worker_rlimit_nofile should be set to 65535 accordingly.

Because nginx does not balance requests across processes perfectly, if you set the value to 10240, then once total concurrency reaches 30,000-40,000 some processes may exceed 10240 descriptors and a 502 error will be returned.
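As a sanity check before raising worker_rlimit_nofile, you can read the limits the OS actually grants the process; a small Python sketch (the variable name is illustrative, not part of nginx):

```python
import resource

# Per-process open-file limits as reported by the kernel.
# The soft limit is what `ulimit -n` prints in a shell.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)

# A conservative choice is to keep worker_rlimit_nofile at the
# soft limit, matching the "same as ulimit -n" advice above.
worker_rlimit_nofile = soft
```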


events

{

use epoll;

# Use the epoll I/O event model

Note:

Like Apache, nginx has different event models for different operating systems.

A) Standard event models

select and poll are the standard event models. If the current system has no more efficient method, nginx will use select or poll.

B) Efficient event models

kqueue: used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and MacOS X. Using kqueue on a dual-processor MacOS X system may cause a kernel panic.

epoll: used on Linux kernel version 2.6 and later.

/dev/poll: used on Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+ and Tru64 UNIX 5.1A+.

eventport: used on Solaris 10. To prevent kernel crashes, the relevant security patches must be installed.



worker_connections 204800;

# Maximum number of connections per worker process. Tune together with worker_processes based on the hardware: make it as large as practical, but do not drive the CPU to 100%.

The maximum number of connections allowed per process. In theory, the maximum number of simultaneous clients per nginx server is worker_processes * worker_connections.
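The formula above is simple arithmetic; a quick sketch, with the added caveat (an assumption not stated in the original) that in reverse-proxy use each request holds two connections, so the practical ceiling is roughly half:

```python
worker_processes = 8
worker_connections = 204800

# Theoretical ceiling on simultaneous clients for this configuration
max_clients = worker_processes * worker_connections
print(max_clients)  # 1638400

# As a reverse proxy, each request ties up two connections
# (client <-> nginx and nginx <-> backend), so the practical
# ceiling is roughly half of the theoretical one.
max_clients_as_proxy = max_clients // 2
```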


keepalive_timeout 60;


Keep-alive timeout, in seconds.


client_header_buffer_size 4k;


The buffer size for the client request header, which can be set to your system's page size. A request header normally does not exceed 1k, but since the system page size is generally larger than 1k, the page size is used here.

The page size can be obtained with the getconf PAGESIZE command.

$ getconf PAGESIZE

4096

However, if client_header_buffer_size is set above 4k, its value should still be an integer multiple of the system page size.
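The "integer multiple of the page size" rule can be sketched with a hypothetical helper (round_up_to_page is illustrative, not an nginx function):

```python
import math

def round_up_to_page(size_bytes, page_size):
    """Round a buffer size up to the nearest multiple of the page size."""
    return math.ceil(size_bytes / page_size) * page_size

# With the 4096-byte page reported by `getconf PAGESIZE` above:
print(round_up_to_page(4096, 4096))  # 4096  -> "4k" is already page-aligned
print(round_up_to_page(9000, 4096))  # 12288 -> a 9000-byte setting needs 3 pages
```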


open_file_cache max=65535 inactive=60s;


Enables the cache of open files, which is off by default. max specifies the maximum number of cached entries; entries whose file has not been requested within the inactive time are removed.


open_file_cache_valid 80s;


How often to check the validity of the cached information.


open_file_cache_min_uses 1;


The minimum number of times a file must be used within the inactive period of the open_file_cache directive. If a file is used at least this many times, its descriptor stays open in the cache; for example, with the value 1, a file that is not used once within the inactive time is removed.



}


# Configure the HTTP server and use its reverse-proxy features to provide load balancing

http

{

include mime.types;

# MIME types, defined in the mime.types file

default_type application/octet-stream;

log_format main '$host $status [$time_local] $remote_addr [$time_local] $request_uri'

'"$http_referer" "$http_user_agent" "$http_x_forwarded_for"'

'$bytes_sent $request_time $sent_http_x_cache_hit';

log_format log404 '$status [$time_local] $remote_addr $host $request_uri $sent_http_location';

$remote_addr and $http_x_forwarded_for: record the client IP address;

$remote_user: records the client user name;

$time_local: records the access time and time zone;

$request: records the request URL and HTTP protocol;

$status: records the request status, e.g. 200 for success;

$body_bytes_sent: records the size of the response body sent to the client;

$http_referer: records the page the request was linked from;

$http_user_agent: records information about the client's browser;

Usually the web server sits behind a reverse proxy and cannot see the client's real IP address: $remote_addr then yields the IP of the reverse proxy server. The reverse proxy can add an X-Forwarded-For header to the forwarded request to record the original client's IP address and the host the client originally requested;
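On the backend side, recovering the original client IP from the X-Forwarded-For header that nginx appends to (via $proxy_add_x_forwarded_for) might look like this minimal sketch, assuming the proxies in the chain are trusted:

```python
def original_client_ip(x_forwarded_for, remote_addr):
    """Recover the original client IP behind a reverse proxy.

    X-Forwarded-For accumulates one address per proxy hop, so the
    left-most entry is the original client (provided the header is
    present and the proxies are trusted).
    """
    if x_forwarded_for:
        return x_forwarded_for.split(",")[0].strip()
    return remote_addr

# Request relayed through one proxy:
print(original_client_ip("203.0.113.7, 10.0.0.2", "10.0.0.2"))  # 203.0.113.7
# Direct request, no proxy in between:
print(original_client_ip("", "198.51.100.9"))                   # 198.51.100.9
```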

access_log /dev/null;

# After log_format defines a format, the access_log directive specifies the log file path;

# access_log /usr/local/nginx/logs/access_log main;

server_names_hash_bucket_size 128;

# The hash tables that store server names are controlled by the directives server_names_hash_max_size and server_names_hash_bucket_size. The hash bucket size parameter is a multiple of the processor's cache line size; this reduces memory accesses and speeds up key lookups in the processor cache. If the hash bucket size equals one processor cache line, then in the worst case a key lookup needs two memory accesses: one to determine the address of the storage unit, and a second to find the key within it. Therefore, if nginx tells you to increase either hash max size or hash bucket size, increase the former first.


client_header_buffer_size 4k;

The buffer size for client request headers; as noted above, a request header normally fits in 1k, but since the system page size (getconf PAGESIZE) is generally larger than 1k, the page size is used here.


large_client_header_buffers 8 128k;

Buffers for large client request headers.
nginx uses the client_header_buffer_size buffer to read header values by default. If

a header is too large, it is read using large_client_header_buffers.
If an HTTP header or cookie is too large, nginx reports a 400 error (400 Bad Request).
If the request line exceeds a buffer, nginx reports an HTTP 414 error (URI Too Long).
The largest HTTP header nginx accepts must fit within one of these buffers; otherwise it returns a 400

HTTP error (Bad Request).

open_file_cache max=102400;

Contexts: http, server, location. Specifies whether the open file cache is enabled. If enabled, it records: open file descriptors with their sizes and modification times; directory existence information; and file-lookup errors (e.g. a file that cannot be found or read) — see also the open_file_cache_errors directive. Options:
· max - specifies the maximum number of cached entries; if the cache overflows, the least recently used (LRU) entries are removed.
Example: open_file_cache max=1000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on;

open_file_cache_errors
Syntax: open_file_cache_errors on | off  Default: open_file_cache_errors off  Contexts: http, server, location. Specifies whether file-lookup errors are cached.

open_file_cache_min_uses

Syntax: open_file_cache_min_uses number  Default: open_file_cache_min_uses 1  Contexts: http, server, location. Specifies the minimum number of uses within the inactive period of the open_file_cache directive; if a file is used at least this often, its descriptor stays open in the cache.
open_file_cache_valid

Syntax: open_file_cache_valid time  Default: open_file_cache_valid 60s  Contexts: http, server, location. Specifies when to re-check the validity of items cached by open_file_cache.


client_max_body_size 300m;

Sets the maximum size of a client request body (e.g. a file upload) accepted through nginx


sendfile on;

# The sendfile directive specifies whether nginx calls the sendfile() system call (zero-copy) to output files.
For ordinary applications it should be set to on.
For disk-I/O-heavy uses such as serving downloads, it can be set to off to balance disk and network I/O processing speed and reduce system load.

tcp_nopush on;

Enables or disables the TCP_CORK socket option; it takes effect only when sendfile is used.


proxy_connect_timeout 90;
# Timeout for establishing a connection to the backend server (initiating the handshake and waiting for a response)

proxy_read_timeout 180;

# After the connection succeeds, how long to wait for the backend server to respond — in effect the time the request spends queued and being processed by the backend

proxy_send_timeout 180;

# Timeout for sending data to the backend server: the backend must accept all data within this time

proxy_buffer_size 256k;

# Buffer size for the first part of the response read from the proxied server. This part normally contains a small response header. By default it equals the size of one buffer specified in the proxy_buffers directive, but it can be set smaller.

proxy_buffers 4 256k;

# Number and size of buffers used to read the response from the proxied server. The default size is one page, 4k or 8k depending on the operating system.

proxy_busy_buffers_size 256k;


proxy_temp_file_write_size 256k;

# Size of data written to proxy_temp_path at a time, to prevent a worker process from blocking too long while passing files

proxy_temp_path /data0/proxy_temp_dir;

# The paths specified by proxy_temp_path and proxy_cache_path must be on the same partition.
proxy_cache_path /data0/proxy_cache_dir levels=1:2 keys_zone=cache_one:200m inactive=1d max_size=30g;
# Sets a 200 MB in-memory cache zone; content not accessed within one day is cleared automatically, and the on-disk cache may grow to 30 GB.


keepalive_timeout 120;

Keep-alive timeout.

tcp_nodelay on;

client_body_buffer_size 512k;
If this is set to a relatively large value such as 256k, submitting any image smaller than the buffer in Firefox or IE works normally. If you comment the directive out and use the default client_body_buffer_size, i.e. twice the operating system page size (8k or 16k), problems appear:
with either Firefox 4.0 or IE 8.0, submitting an image of around 500k returns the error 500 Internal Server Error.

proxy_intercept_errors on;

Makes nginx intercept responses with an HTTP status code of 400 or higher.


upstream img_relay {

server 127.0.0.1:8027;

server 127.0.0.1:8028;

server 127.0.0.1:8029;

hash $request_uri;

}

Currently, nginx upstream supports the following load-balancing methods.

1. Round robin (default)

Requests are distributed across the backend servers one by one in order; if a backend server goes down, it is removed automatically.

2. weight
Specifies the polling probability: weight is proportional to the share of requests a server receives, used when backend server performance is uneven.
For example:
upstream bakend {
server 192.168.0.14 weight=10;
server 192.168.0.15 weight=10;
}
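A naive model of weighted round robin (nginx's actual scheduler smooths the sequence, but the long-run shares are the same; server names here are illustrative):

```python
from itertools import cycle

def weighted_round_robin(servers):
    """Naive weighted round robin: repeat each server `weight` times.

    `servers` is a list of (address, weight) pairs, mirroring
    `server 192.168.0.14 weight=10;` lines in an upstream block.
    """
    expanded = [addr for addr, weight in servers for _ in range(weight)]
    return cycle(expanded)

picker = weighted_round_robin([("192.168.0.14", 2), ("192.168.0.15", 1)])
print([next(picker) for _ in range(6)])
# ['192.168.0.14', '192.168.0.14', '192.168.0.15',
#  '192.168.0.14', '192.168.0.14', '192.168.0.15']
```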

3. ip_hash
Each request is distributed according to a hash of the client IP address, so each visitor consistently reaches the same backend server; this can solve session-affinity problems.
For example:
upstream bakend {
ip_hash;
server 192.168.0.14:88;
server 192.168.0.15:80;
}
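The mapping ip_hash performs can be sketched in a few lines (CRC32 is only an illustrative hash; the /24 truncation mirrors how ip_hash treats IPv4 addresses):

```python
import zlib

def ip_hash_pick(client_ip, backends):
    """Map a client IP to a fixed backend, analogous to ip_hash.

    nginx's ip_hash hashes only the first three octets of an IPv4
    address, so clients on one /24 land on the same backend.
    """
    key = ".".join(client_ip.split(".")[:3])  # first three octets
    return backends[zlib.crc32(key.encode()) % len(backends)]

backends = ["192.168.0.14:88", "192.168.0.15:80"]
# Same client (and same /24) always reaches the same backend:
assert ip_hash_pick("203.0.113.7", backends) == ip_hash_pick("203.0.113.99", backends)
```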

4. fair (third party)
Requests are distributed according to the response time of the backend servers; servers with shorter response times are preferred.
upstream backend {
server server1;
server server2;
fair;
}

5. url_hash (third party)

Requests are distributed according to a hash of the requested URL, so each URL is directed to the same backend server. This is effective when the backend servers are caches.

For example, add a hash statement to the upstream block; other parameters such as weight cannot be written on the server lines. hash_method selects the hash algorithm to use.

upstream backend {
server squid1:3128;
server squid2:3128;
hash $request_uri;
hash_method crc32;
}

TIPS:

upstream bakend { # defines the load-balanced backends and their states
ip_hash;
server 127.0.0.1:9090 down;
server 127.0.0.1:8080 weight=2;
server 127.0.0.1:6060;
server 127.0.0.1:7070 backup;
}
In the server block that needs load balancing, add
proxy_pass http://bakend/;

The state of each backend server can be set:
1. down: the server is temporarily not participating in the load
2. weight: defaults to 1; the larger the weight, the larger the share of requests
3. max_fails: the number of allowed failed requests, default 1. When it is exceeded, the error defined by the proxy_next_upstream directive is returned
4. fail_timeout: how long the server is paused after max_fails failures
5. backup: receives requests only when all other non-backup machines are down or busy, so this machine carries the lightest load
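The max_fails / fail_timeout behaviour described above can be modelled with a small state machine (a hypothetical sketch, not nginx source; the injected `now` timestamps just make it deterministic):

```python
import time

class Peer:
    """Track backend availability the way max_fails/fail_timeout do.

    After `max_fails` failures, the peer is skipped for `fail_timeout`
    seconds, then tried again.
    """
    def __init__(self, addr, max_fails=1, fail_timeout=10):
        self.addr = addr
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.last_fail = 0.0  # time of the most recent failure

    def report_failure(self, now=None):
        self.fails += 1
        self.last_fail = time.monotonic() if now is None else now

    def report_success(self):
        self.fails = 0

    def available(self, now=None):
        now = time.monotonic() if now is None else now
        if self.fails < self.max_fails:
            return True
        # Suspended: allow another attempt once fail_timeout has elapsed
        return now - self.last_fail >= self.fail_timeout

peer = Peer("127.0.0.1:8080", max_fails=1, fail_timeout=10)
peer.report_failure(now=100.0)
print(peer.available(now=105.0))  # False: still inside fail_timeout
print(peer.available(now=111.0))  # True: window elapsed, try again
```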

nginx supports setting multiple groups of load balancers for different servers to use.

client_body_in_file_only: when set to on, the client POST body is recorded to a file, which is useful for debugging.
client_body_temp_path: sets the directory for these record files; up to three levels of subdirectories can be configured.

location matches against the URI and can perform a redirection or hand off to a new proxy/load balancer.


server

# Configure a virtual host

{

listen 80;

# Listening port

server_name image.***.com;

# Domain name served

location ~* \.(mp3|exe)$ {

# Load-balance requests for addresses ending in "mp3" or "exe"

proxy_pass http://img_relay$request_uri;

# Set the port or socket and the URI of the proxied server

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

# The three lines above pass the client information received by the proxy on to the real backend server

}

location /face {

if ($http_user_agent ~* "xnp") {

rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;

}

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

error_page 404 502 = @fetch;

}

location @fetch {

access_log /data/logs/face.log log404;

# Access log for this server

rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;

}


location /image {

if ($http_user_agent ~* "xnp") {

rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;

}

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

error_page 404 502 = @fetch;

}

location @fetch {

access_log /data/logs/image.log log404;

rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;

}

}


server

{

listen 80;

server_name *.***.com *.***.cn;


location ~* \.(mp3|exe)$ {

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

}

location / {

if ($http_user_agent ~* "xnp") {

rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif redirect;

}

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

# error_page 404 http://i1.***img.com/help/noimg.gif;

error_page 404 502 = @fetch;

}

location @fetch {

access_log /data/logs/baijiaqi.log log404;

rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif redirect;

}

# access_log off;

}


server

{

listen 80;

server_name *.***img.com;


location ~* \.(mp3|exe)$ {

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

}


location / {

if ($http_user_agent ~* "xnp") {

rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif;

}

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

# error_page 404 http://i1.***img.com/help/noimg.gif;

error_page 404 = @fetch;

}

# access_log off;

location @fetch {

access_log /data/logs/baijiaqi.log log404;

rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif redirect;

}

}


server

{

listen 8080;

server_name ngx-ha.***img.com;

location / {

stub_status on;

access_log off;

}

}

server {

listen 80;

server_name imgsrc1.***.net;

root html;

}

server {

listen 80;

server_name ***.com w.***.com;

# access_log /usr/local/nginx/logs/access_log main;

location / {

rewrite ^(.*)$ http://www.***.com/;

}

}

server {

listen 80;

server_name *******.com w.********.com;

# access_log /usr/local/nginx/logs/access_log main;

location / {

rewrite ^(.*)$ http://www.********.com/;

}

}

server {

listen 80;

server_name ******.com;

# access_log /usr/local/nginx/logs/access_log main;

location / {

rewrite ^(.*)$ http://www.******.com/;

}

}

location /NginxStatus {
stub_status on;
access_log on;
auth_basic "NginxStatus";
auth_basic_user_file conf/htpasswd;
}

# Address for viewing nginx status
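The page served by stub_status is plain text with a stable layout; a monitoring script might parse it like this (parse_stub_status is a hypothetical helper, and the sample counters are made up):

```python
def parse_stub_status(text):
    """Parse the plain-text page served by nginx's stub_status."""
    lines = text.strip().splitlines()
    # Line 0: "Active connections: N"
    stats = {"active": int(lines[0].split(":")[1])}
    # Line 2: three counters under the "server accepts handled requests" header
    accepts, handled, requests = (int(n) for n in lines[2].split())
    stats.update(accepts=accepts, handled=handled, requests=requests)
    # Line 3: "Reading: a Writing: b Waiting: c"
    tokens = lines[3].split()
    for key, value in zip(tokens[0::2], tokens[1::2]):
        stats[key.rstrip(":").lower()] = int(value)
    return stats

sample = """Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
"""
print(parse_stub_status(sample))
```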


location ~ /\.ht {
deny all;
}

# Deny access to .ht* files (e.g. .htaccess)


}


Note: Variables

The ngx_http_core_module module supports built-in variables whose names are the same as Apache's built-in variables.

First, the variables that describe lines of the client request header, such as $http_user_agent and $http_cookie.

Other variables:

$args: the parameters in the request line

$content_length: the value of the Content-Length request header

$content_type: the value of the Content-Type request header

$document_root: the value of the root directive for the current request

$document_uri: the same as $uri

$host: the value of the "Host" request header, or the name of the server the request arrived at if there is no Host line

$limit_rate: the permitted connection rate

$request_method: the request method, usually "GET" or "POST"

$remote_addr: the client IP

$remote_port: the client port

$remote_user: the user name, as authenticated by ngx_http_auth_basic_module

$request_filename: the path name of the currently requested file, composed from the root or alias directive and the request URI

$request_body_file

$request_uri: the complete initial URI, with parameters

$query_string: the same as $args

$scheme: the HTTP scheme (http, https), evaluated as required, for example

rewrite ^(.+)$ $scheme://example.com$1 redirect;

$server_protocol: the request protocol, e.g. HTTP/1.0 or HTTP/1.1

$server_addr: the IP address of the server the request arrived at. Obtaining this value normally requires a system call; to avoid the system call, specify the address in the listen directive and use the bind parameter.

$server_name: the name of the server the request arrived at

$server_port: the port of the server the request arrived at

$uri: the URI of the current request, not necessarily the initial one, e.g. after an internal redirect or an index lookup
