Nginx Configuration File Explained in Detail

Source: Internet
Author: User
Tags: epoll, sendfile


user nginx;

# the user the worker processes run as

worker_processes 8;

# number of worker processes; adjust to the hardware, normally greater than or equal to the number of CPU cores

error_log logs/nginx_error.log crit;

# error log

pid logs/nginx.pid;

# location of the pid file

worker_rlimit_nofile 204800;

# maximum number of file descriptors a process may open

This directive sets the maximum number of file descriptors an nginx process may open. In theory the value should be the system's open-file limit (ulimit -n) divided by the number of nginx processes, but because nginx does not distribute requests evenly across processes, it is best to keep the value consistent with ulimit -n.

On a Linux 2.6 kernel the open-file limit is typically 65535, so worker_rlimit_nofile should be set to 65535 accordingly.

Because nginx does not dispatch requests to worker processes in a perfectly balanced way, if you set the value to 10240 and total concurrency reaches roughly 30,000-40,000, a single process may exceed 10240 open descriptors and a 502 error will be returned.
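A quick way to check the current per-process limit before picking a value (the output shown is illustrative; it differs per system):

$ ulimit -n
65535

The worker_rlimit_nofile value above should then be kept consistent with this number.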

events

{

use epoll;

# use the epoll I/O event model

Additional notes:

Like Apache, nginx has different event models for different operating systems.

A) Standard event models

select and poll are the standard event models; if the current system offers no more efficient method, nginx chooses select or poll.

B) Efficient event models

kqueue: used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and Mac OS X. Using kqueue on a dual-processor Mac OS X system can cause a kernel crash.

epoll: used on systems with Linux kernel 2.6 or later.

/dev/poll: used on Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+ and Tru64 UNIX 5.1A+.

eventport: used on Solaris 10. To prevent kernel crashes it is necessary to install security patches.

worker_connections 204800;

# maximum number of connections per worker process; adjust to the hardware and, together with the worker_processes setting above, make it as large as possible without driving the CPU to 100%

The maximum number of connections allowed per process; in theory the maximum number of connections per nginx server is worker_processes * worker_connections.
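With the values above this gives, in theory:

max_clients = worker_processes * worker_connections = 8 * 204800 = 1,638,400

When nginx acts as a reverse proxy, a commonly cited rule of thumb (not an nginx requirement) divides this figure by 4, since each client connection also consumes connections to the backend.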

keepalive_timeout 60;

Keep-alive timeout period.

client_header_buffer_size 4k;

Buffer size for client request headers. This can be set according to the system page size; a request header normally does not exceed 1k, but since the system page size is generally larger than 1k, it is set to the page size here.

The page size can be obtained with the command getconf PAGESIZE.

$ getconf PAGESIZE

4096

There are also cases where client_header_buffer_size needs to exceed 4k, but the value must always be an integer multiple of the system page size.
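For example, on a system with 4 KB pages a larger setting should remain a whole multiple of the page size (an illustrative value, not a recommendation for any particular workload):

client_header_buffer_size 8k; # 2 x 4096-byte pages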

open_file_cache max=65535 inactive=60s;

This enables the cache for open files, which is disabled by default. max specifies the number of cache entries (a value matching the number of open files is recommended), and inactive specifies how long a file may go unrequested before its cache entry is removed.

open_file_cache_valid 80s;

This specifies how often to check the cached entries for still-valid information.

open_file_cache_min_uses 1;

The minimum number of times a file must be used within the inactive period of the open_file_cache directive. If this count is reached, the file descriptor stays open in the cache; as in the example above, a file not used even once within the inactive time is removed.

}

# configure the HTTP server and use its reverse proxy feature to provide load-balancing support

http

{

include mime.types;

# set MIME types; the types are defined by the mime.types file

default_type application/octet-stream;

log_format main '$host $status [$time_local] $remote_addr [$time_local] $request_uri '

'"$http_referer" "$http_user_agent" "$http_x_forwarded_for" '

'$bytes_sent $request_time $sent_http_x_cache_hit';

log_format log404 '$status [$time_local] $remote_addr $host $request_uri $sent_http_location';

$remote_addr and $http_x_forwarded_for: record the client's IP address;

$remote_user: records the client user name;

$time_local: records the access time and time zone;

$request: records the requested URL and HTTP protocol;

$status: records the request status; 200 on success;

$body_bytes_sent: records the size of the response body sent to the client;

$http_referer: records the page from which the request was linked;

$http_user_agent: records the client's browser and related information;

A web server is usually placed behind a reverse proxy, so the client's real IP address cannot be obtained directly: the address obtained through $remote_addr is the IP of the reverse proxy server. The reverse proxy server can add an X-Forwarded-For header to the forwarded request to record the original client's IP address and the address the original client requested.
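A minimal sketch of how the proxy side fills in that header so that $http_x_forwarded_for in the log format above has something to record (the upstream name "backend" is illustrative):

location / {
proxy_pass http://backend; # "backend" is an illustrative upstream name
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}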

access_log /dev/null;

# after defining a log format with the log_format directive, use the access_log directive to specify where the log file is stored;

# access_log /usr/local/nginx/logs/access_log main;

server_names_hash_bucket_size 128;

# The hash table holding server names is controlled by the directives server_names_hash_max_size and server_names_hash_bucket_size. The hash bucket size parameter is aligned to a multiple of the processor's cache line size, which reduces the number of memory accesses and speeds up key lookups in the hash table. If the hash bucket size equals one processor cache line, the worst-case lookup of a key takes 2 memory accesses: the first determines the address of the storage unit, the second finds the key within that unit. Therefore, if nginx prints a hint to increase hash max size or hash bucket size, the first parameter to increase is server_names_hash_max_size.
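A minimal sketch of the pair of directives; the values are illustrative and should be chosen according to how many server names you actually host:

server_names_hash_max_size 512;
server_names_hash_bucket_size 128;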

client_header_buffer_size 4k;

Buffer size for client request headers. This can be set according to the system page size; a request header normally does not exceed 1k, but since the system page size is generally larger than 1k, it is set to the page size here. The page size can be obtained with the command getconf PAGESIZE.

large_client_header_buffers 8 128k;

Buffer size for large client request headers.
By default nginx reads header values into the client_header_buffer_size buffer; if a header is too large, it falls back to large_client_header_buffers.
If an HTTP header or cookie is too large for the small buffer, nginx reports a 400 error (Bad Request).
If the request line (URI) exceeds even the large buffers, nginx reports a 414 error (Request-URI Too Long).
The longest HTTP header nginx accepts must fit within one of these buffers, otherwise a 400 error (Bad Request) is reported.

open_file_cache max=102400;

Context: http, server, location. This directive specifies whether the cache is enabled. If enabled, the following information is recorded: open file descriptors, their sizes and modification times; information about directories that exist; errors encountered while searching for a file (no such file, cannot be read correctly; see the open_file_cache_errors directive). Options:
max - specifies the maximum number of cache entries; if the cache overflows, the least recently used (LRU) entries are removed
Example: open_file_cache max=1000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on;

open_file_cache_errors
Syntax: open_file_cache_errors on | off. Default: open_file_cache_errors off. Context: http, server, location. This directive specifies whether errors encountered while searching for a file are cached.

open_file_cache_min_uses

Syntax: open_file_cache_min_uses number. Default: open_file_cache_min_uses 1. Context: http, server, location. This directive specifies the minimum number of times a file must be used within the inactive period of the open_file_cache directive; if a larger value is used, the file descriptor is always kept open in the cache.

open_file_cache_valid

Syntax: open_file_cache_valid time. Default: open_file_cache_valid 60. Context: http, server, location. This directive specifies when to check the validity of the items cached by open_file_cache.

client_max_body_size 300m;

Sets the maximum size of a request body (e.g. a file upload) accepted through nginx.

sendfile on;

# The sendfile directive tells nginx to use the sendfile system call (zero-copy mode) to output files.
For normal applications it should be set to on.
For disk-IO-heavy workloads such as file downloads, it can be set to off to balance disk and network I/O processing speed and reduce system load.

tcp_nopush on;

This option enables or disables the TCP_CORK socket option and is only used when sendfile is enabled.
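A minimal sketch of the directives that are usually tuned together (sendfile must be on for tcp_nopush to take effect; the combination below is a common starting point, not a universal recommendation):

sendfile on;
tcp_nopush on; # send the response header and the beginning of a file in one packet
tcp_nodelay on; # affects keep-alive connections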

proxy_connect_timeout 90;
# timeout for connecting to the backend server: the time allowed to initiate the handshake and wait for a response

proxy_read_timeout 180;

# after the connection succeeds, the time to wait for the backend server to respond; the request has in fact already entered the backend's queue awaiting processing (this can also be described as the time the backend server takes to process the request)

proxy_send_timeout 180;

# the time allowed for the backend server to return data; the backend must finish transmitting all data within this period

proxy_buffer_size 256k;

# size of the buffer for the first part of the response read from the proxied server; this part of the response usually contains just a small response header. By default the size is that of one buffer set by the proxy_buffers directive, but it can be made smaller.

proxy_buffers 4 256k;

# number and size of the buffers used for reading the response from the proxied server; the default size is one page, i.e. 4k or 8k depending on the operating system

proxy_busy_buffers_size 256k;

proxy_temp_file_write_size 256k;

# size of the data written to proxy_temp_path at a time, to prevent a worker process from blocking too long while passing a file

proxy_temp_path /data0/proxy_temp_dir;

# the paths given to proxy_temp_path and proxy_cache_path must be on the same partition
proxy_cache_path /data0/proxy_cache_dir levels=1:2 keys_zone=cache_one:200m inactive=1d max_size=30g;
# set the in-memory cache zone to 200MB; content not accessed for 1 day is purged automatically, and the on-disk cache size is 30GB.
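The keys_zone declared above (cache_one) only takes effect once a location references it. A minimal sketch of that wiring, with illustrative cache times (img_relay is the upstream defined later in this file):

location /img/ {
proxy_cache cache_one; # the zone declared by proxy_cache_path
proxy_cache_valid 200 304 12h; # cache successful responses for 12 hours
proxy_cache_key $host$uri$is_args$args;
proxy_pass http://img_relay;
}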

keepalive_timeout 120;

Keep-alive timeout period.

tcp_nodelay on;

client_body_buffer_size 512k;
If this is set to a relatively large value, such as 256k, then submitting any image smaller than 256k works fine in both Firefox and IE. If the directive is commented out and the default client_body_buffer_size is used, which is twice the operating system page size (8k or 16k), problems appear:
whether with Firefox 4.0 or IE 8.0, submitting a somewhat larger image of around 200k returns a 500 Internal Server Error.

proxy_intercept_errors on;

Makes nginx intercept responses whose HTTP status code is 400 or higher.

upstream img_relay {

server 127.0.0.1:8027;

server 127.0.0.1:8028;

server 127.0.0.1:8029;

hash $request_uri;

}

nginx's upstream module currently supports the following ways of distributing requests:

1. Round-robin (default)

Each request is assigned to a different backend server in turn, in chronological order; if a backend server goes down, it is removed automatically.

2. weight
Specifies the polling probability: the weight is proportional to the access ratio, for cases where backend server performance is uneven.
For example:
upstream bakend {
server 192.168.0.14 weight=10;
server 192.168.0.15 weight=10;
}

3. ip_hash
Each request is assigned according to the hash of the client IP, so each visitor consistently reaches the same backend server, which solves session stickiness.
For example:
upstream bakend {
ip_hash;
server 192.168.0.14:88;
server 192.168.0.15:80;
}

4. fair (third party)
Requests are assigned according to the backend server's response time; servers with shorter response times are given priority.
upstream backend {
server server1;
server server2;
fair;
}

5. url_hash (third party)

Requests are assigned according to the hash of the requested URL, so each URL is directed to the same backend server; this is more effective when the backend servers are caches.

Example: add a hash statement to the upstream block; the server statements must not carry weight or other parameters; hash_method selects the hash algorithm to use.

upstream backend {
server squid1:3128;
server squid2:3128;
hash $request_uri;
hash_method crc32;
}

Tips:

upstream bakend { # define the IPs and states of the load-balanced devices
ip_hash;
server 127.0.0.1:9090 down;
server 127.0.0.1:8080 weight=2;
server 127.0.0.1:6060;
server 127.0.0.1:7070 backup;
}
In the server that needs load balancing, add:
proxy_pass http://bakend/;

The state of each device can be set as follows (see the sketch after this list for max_fails and fail_timeout in context):
1. down: the server temporarily does not participate in load balancing
2. weight: defaults to 1; the larger the weight, the larger the share of the load
3. max_fails: the number of failed requests allowed, 1 by default; when the maximum is exceeded, the error defined by the proxy_next_upstream module is returned
4. fail_timeout: the pause time after max_fails failures
5. backup: requests are sent to the backup machine only when all other non-backup machines are down or busy, so it carries the lightest load
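A minimal sketch combining these parameters (the addresses and values are illustrative):

upstream bakend {
server 127.0.0.1:8080 weight=2 max_fails=3 fail_timeout=30s;
server 127.0.0.1:6060 max_fails=3 fail_timeout=30s;
server 127.0.0.1:7070 backup;
}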

nginx supports configuring multiple load-balancing groups at the same time, for use by the different servers that need them.

Setting client_body_in_file_only to on writes the client POST data to a file, which is useful for debugging.
client_body_temp_path sets the directory for these record files; up to 3 levels of subdirectories can be configured.
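A minimal sketch of the two directives together (the path and directory levels are illustrative):

client_body_in_file_only on;
client_body_temp_path /data0/client_body_temp 1 2; # two levels of subdirectories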

location matches against the URI; it can redirect the request or hand it to a new proxy / load-balancing target.

server

# configure a virtual host

{

listen 80;

# listening port

server_name image.***.com;

# domain name served

location ~* \.(mp3|exe)$ {

# load-balance requests whose address ends in "mp3" or "exe"

proxy_pass http://img_relay$request_uri;

# set the port or socket of the proxied server, and the URI

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

# the purpose of the three lines above is to pass the request information received by the proxy server on to the real backend server

}

location /face {

if ($http_user_agent ~* "xnp") {

rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;

}

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

error_page 404 502 = @fetch;

}

location @fetch {

access_log /data/logs/face.log log404;

# access log for this server

rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;

}

location /image {

if ($http_user_agent ~* "xnp") {

rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;

}

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

error_page 404 502 = @fetch;

}

location @fetch {

access_log /data/logs/image.log log404;

rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;

}

}

server

{

listen 80;

server_name *.***.com *.***.cn;

location ~* \.(mp3|exe)$ {

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

}

location / {

if ($http_user_agent ~* "xnp") {

rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif redirect;

}

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

#error_page 404 http://i1.***img.com/help/noimg.gif;

error_page 404 502 = @fetch;

}

location @fetch {

access_log /data/logs/baijiaqi.log log404;

rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif redirect;

}

#access_log off;

}

server

{

listen 80;

server_name *.***img.com;

location ~* \.(mp3|exe)$ {

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

}

location / {

if ($http_user_agent ~* "xnp") {

rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif;

}

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

#error_page 404 http://i1.***img.com/help/noimg.gif;

error_page 404 = @fetch;

}

#access_log off;

location @fetch {

access_log /data/logs/baijiaqi.log log404;

rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif redirect;

}

}

server

{

listen 8080;

server_name ngx-ha.***img.com;

location / {

stub_status on;

access_log off;

}

}

server {

listen 80;

server_name imgsrc1.***.net;

root html;

}

server {

listen 80;

server_name ***.com w.***.com;

# access_log /usr/local/nginx/logs/access_log main;

location / {

rewrite ^(.*)$ http://www.***.com/;

}

}

server {

listen 80;

server_name *******.com w.*******.com;

# access_log /usr/local/nginx/logs/access_log main;

location / {

rewrite ^(.*)$ http://www.*******.com/;

}

}

server {

listen 80;

server_name ******.com;

# access_log /usr/local/nginx/logs/access_log main;

location / {

rewrite ^(.*)$ http://www.******.com/;

}

}

location /nginxstatus {
stub_status on;
access_log on;
auth_basic "NginxStatus";
auth_basic_user_file conf/htpasswd;
}

# the location for viewing nginx status
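Requesting that location returns the stub_status counters in plain text; the numbers below are purely illustrative:

Active connections: 291
server accepts handled requests
16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106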

location ~ /\.ht {
deny all;
}

# deny access to .htxxx files

}

NOTES: Variables

The ngx_http_core_module module supports built-in variables whose names are consistent with Apache's built-in variables.

First there are variables describing the client request header lines, such as $http_user_agent, $http_cookie, and so on.

In addition, there are some other variables (a short usage sketch follows the list):

$args - the arguments in the request line

$content_length - the value of the Content-Length request header

$content_type - the value of the Content-Type request header

$document_root - the value set by the root directive for the current request

$document_uri - the same as $uri

$host - the value of the Host request header line, or the name of the server handling the request if there is no Host line

$limit_rate - the allowed connection rate limit

$request_method - the request method, usually GET or POST

$remote_addr - the client IP address

$remote_port - the client port

$remote_user - the user name, authenticated by ngx_http_auth_basic_module

$request_filename - the path of the file for the current request, composed from the root or alias directive and the request URI

$request_body_file - the name of the temporary file holding the client request body

$request_uri - the full original request URI with arguments

$query_string - the same as $args

$scheme - the request scheme (http or https), evaluated on demand, for example:

rewrite ^(.+)$ $scheme://example.com$1 redirect;

$server_protocol - the request protocol, usually "HTTP/1.0" or "HTTP/1.1"

$server_addr - the IP of the server the request arrived at; obtaining this value generally requires a system call. To avoid the system call, specify the IP in the listen directive and use the bind parameter.

$server_name - the name of the server the request arrived at

$server_port - the port of the server the request arrived at

$uri - the URI of the current request, which can differ from the initial value, for example after an internal redirect or when index files are used
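A short sketch pulling a few of these variables together: a log format plus a redirect that forces HTTPS (the format name "vars" and the redirect itself are illustrative, not part of the original configuration):

log_format vars '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" "$http_user_agent"';

# inside a server block: force HTTPS using $scheme, $host and $request_uri
if ($scheme = http) {
rewrite ^ https://$host$request_uri? permanent;
}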

Nginx Chinese Wikipedia http://wiki.nginx.org/NginxChs

Http://www.queryer.cn/DOC/nginxCHS/index.html

