Nginx configuration file (nginx.conf) explained in detail (summary)

Source: Internet
Author: User
Tags: epoll, sendfile, nginx, server, domain

New users often ask some very basic questions, so I have recently put together the following detailed explanation of the nginx configuration file, nginx.conf:

user nginx nginx;

The user and group that the Nginx worker processes run as. Not specified on Windows.

worker_processes 8;

Number of worker processes. Tune according to the hardware; it is usually set to the number of CPU cores, or twice that number.
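As a minimal sketch (assuming a Linux host), you can check the core count with nproc and set the directive to match; recent Nginx versions also accept worker_processes auto;.

[root@web001 ~]# nproc
8

worker_processes 8;  # or: worker_processes auto;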

error_log logs/error.log;

error_log logs/error.log notice;

error_log logs/error.log info;

Error log: storage path and log level.

pid logs/nginx.pid;

PID (process identifier) file: storage path.

worker_rlimit_nofile 204800;

Specifies the maximum number of file descriptors a worker process may open.

This directive sets the maximum number of open file descriptors per nginx process. The theoretical value would be the system's maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep this value consistent with ulimit -n.

On a Linux 2.6 kernel the open-file limit is 65535, so worker_rlimit_nofile should be set to 65535 accordingly.

This is because nginx does not schedule requests across workers perfectly evenly: if you set this to 10240 and total concurrency reaches 30,000 to 40,000, a single process may exceed 10240 descriptors and a 502 error is returned.
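A minimal sketch of keeping the two values consistent (the ulimit output shown here is illustrative):

[root@web001 ~]# ulimit -n
65535

worker_rlimit_nofile 65535;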

events
{

use epoll;

Use the epoll I/O model. epoll is recommended on Linux, kqueue is recommended on FreeBSD, and on Windows this is not specified.

Supplementary Note:

Like Apache, Nginx has different event models for different operating systems:

A) Standard event model

select and poll belong to the standard event model; if the current system has no more efficient method, Nginx falls back to select or poll.

B) High-efficiency event model

kqueue: used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and Mac OS X. Using kqueue on a dual-processor Mac OS X system can cause a kernel crash.

epoll: used on systems with Linux kernel 2.6 and later.

/dev/poll: used on Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+ and Tru64 UNIX 5.1A+.

eventport: used on Solaris 10. To prevent kernel crashes, it is necessary to install the security patches.

worker_connections 204800;

Maximum number of connections per worker process. Tune according to the hardware and use it together with the worker process count: make it as large as possible without driving the CPU to 100%. This is the maximum number of connections allowed per process; in theory, the maximum number of connections an Nginx server can handle is worker_processes * worker_connections.
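With the values used in this article, the theoretical ceiling works out as follows (a back-of-the-envelope figure, not a guarantee):

# max clients = worker_processes * worker_connections
#             = 8 * 204800 = 1638400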

keepalive_timeout 60;

Keep-alive timeout, in seconds.

client_header_buffer_size 4k;

Buffer size for the client request header. This can be set according to your system's page size: a request header generally does not exceed 1k, but since the system page size is usually at least 1k, it is set here to the page size.

The page size can be obtained with the command getconf PAGESIZE.

[root@web001 ~]# getconf PAGESIZE

4096

However, client_header_buffer_size can also be set above 4k; the value just has to be an integer multiple of the system page size.
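For example (a hypothetical larger value, assuming the 4096-byte page size shown above):

client_header_buffer_size 8k;   # 8192 bytes = 2 x 4096, an integer multiple of the page size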

This enables caching for open files (open_file_cache max=102400 inactive=20s;), which is not enabled by default. max specifies the number of cache entries, and it is recommended to match it to the number of open files; inactive specifies after how long without a request a file's cache entry is removed.

open_file_cache_valid 80s;

This specifies how often to check the cached entries for valid information.

open_file_cache_min_uses 1;

The minimum number of times a file must be used within the inactive period of the open_file_cache directive. Above this number the file descriptor stays open in the cache; as configured above, if a file is not used even once within the inactive time, it is removed.

## Set up the HTTP server and use its reverse proxy capabilities to provide load-balancing support

http
{

include mime.types;

Sets the MIME types; the types are defined in the mime.types file.

default_type application/octet-stream;

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';

log_format log404 '$status [$time_local] $remote_addr $host $request_uri $sent_http_location';

Log format settings.

$remote_addr and $http_x_forwarded_for: record the client's IP address;

$remote_user: records the client user name;

$time_local: records the access time and time zone;

$request: records the requested URL and HTTP protocol;

$status: records the request status; success is 200;

$body_bytes_sent: records the size of the response body sent to the client;

$http_referer: records the page from which the visit was linked;

$http_user_agent: records information about the client's browser;

Usually the web server sits behind a reverse proxy, so the client's IP address cannot be obtained directly: the address obtained through $remote_addr is the IP of the reverse proxy server. The reverse proxy server can, however, add X-Forwarded-For information to the HTTP headers of the forwarded request to record the original client's IP address and the host the original client requested.
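A minimal sketch of this (reusing the proxy_set_header lines and the img_relay upstream that appear in the virtual-host examples later in this article); the proxy adds the client's address, and the back end, or a log_format referencing $http_x_forwarded_for, can then see it:

location / {
    proxy_pass http://img_relay$request_uri;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}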

access_log logs/host.access.log main;

access_log logs/host.access.404.log log404;

After defining a log format with the log_format directive, the access_log directive is used to specify the log file's storage path (and which format to use).

server_names_hash_bucket_size 128;

# The hash table that stores server names is controlled by the directives server_names_hash_max_size and server_names_hash_bucket_size. The hash bucket size parameter is always equal to the size of the hash table and is a multiple of the processor cache size. This reduces the number of memory accesses and speeds up hash key lookups in the processor. If the hash bucket size equals the processor cache size, then in the worst case a key is looked up in memory twice: the first time to determine the address of the storage unit, and the second to find the key value within that unit. Therefore, if Nginx reports that the hash max size or hash bucket size needs to be increased, the first thing to do is increase the former parameter.

client_header_buffer_size 4k;

Buffer size for the client request header. This can be set according to your system's page size: a request header generally does not exceed 1k, but since the system page size is usually at least 1k, it is set here to the page size. The page size can be obtained with the command getconf PAGESIZE.

large_client_header_buffers 8 128k;

Buffers for large client request headers. By default, Nginx uses the client_header_buffer_size buffer to read the header value; if the header is too large, it is read using large_client_header_buffers instead.

open_file_cache max=102400 inactive=20s;

This directive specifies whether caching of open files is enabled.

Example:

open_file_cache max=1000 inactive=20s;

open_file_cache_valid 30s;

open_file_cache_min_uses 2;

open_file_cache_errors on;

open_file_cache_errors

Syntax: open_file_cache_errors on | off. Default: open_file_cache_errors off. Context: http, server, location. This directive specifies whether errors encountered when searching for a file are cached.

open_file_cache_min_uses

Syntax: open_file_cache_min_uses number. Default: open_file_cache_min_uses 1. Context: http, server, location. This directive specifies the minimum number of times a file must be used within the inactive period of the open_file_cache directive; with a larger value, file descriptors are kept open in the cache.

open_file_cache_valid

Syntax: open_file_cache_valid time. Default: open_file_cache_valid 60. Context: http, server, location. This directive specifies how often to check the validity of the items in open_file_cache.

client_max_body_size 300m;

Sets the maximum size of a file uploaded through Nginx (the maximum allowed request body size).

sendfile on;

The sendfile directive specifies whether Nginx calls the sendfile() system call (zero copy) to output files; for ordinary applications it should be set to on. For heavy disk-I/O workloads such as download services, it can be set to off to balance disk and network I/O processing speed and reduce system load.

tcp_nopush on;

This option enables or disables the TCP_CORK socket option, which is used only when sendfile is enabled.
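As a hedged sketch, these transfer-related directives are commonly grouped together in the http block (the values are the ones used in this article):

sendfile on;
tcp_nopush on;        # only takes effect when sendfile is on
tcp_nodelay on;
keepalive_timeout 60;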

 
 

Connection timeout to the back-end server (proxy_connect_timeout): the time allowed for initiating the handshake and waiting for a response.

proxy_read_timeout 180;

After a successful connection, the time to wait for the back-end server's response; by this point the request has entered the back end's queue for processing (this can also be thought of as the time the back-end server takes to process the request).

proxy_send_timeout 180;

Time for the back-end server to return data: the back-end server must finish transmitting all of the data within the specified time.

proxy_buffer_size 256k;

Sets the buffer size for the first part of the response read from the proxied server, which typically contains a small response header. By default this is the size of one buffer as specified by the proxy_buffers directive, but it can be set smaller.

proxy_buffers 4 256k;

Sets the number and size of the buffers used to read the response from the proxied server; the default size is one page, 4k or 8k depending on the operating system.

proxy_busy_buffers_size 256k;

proxy_temp_file_write_size 256k;

Sets the amount of data written to proxy_temp_path at a time, to prevent a worker process from blocking for too long while passing files.

proxy_temp_path /data0/proxy_temp_dir;

The paths specified by proxy_temp_path and proxy_cache_path must be on the same partition.

proxy_cache_path /data0/proxy_cache_dir levels=1:2 keys_zone=cache_one:200m inactive=1d max_size=30g;

# Set the in-memory cache zone size to 200MB; content not accessed for 1 day is automatically removed; the on-disk cache size is 30GB.
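A hedged sketch of how the cache zone defined above might then be referenced from a location (the path and validity times here are illustrative, not from the original article):

location /img/ {
    proxy_cache cache_one;            # the keys_zone defined by proxy_cache_path above
    proxy_cache_valid 200 304 12h;    # how long to cache successful responses
    proxy_pass http://img_relay$request_uri;
}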

keepalive_timeout 120;

Keep-alive timeout, in seconds.

tcp_nodelay on;

client_body_buffer_size 512k;

If you set this to a larger value such as 256k, then submitting any picture smaller than 256k works normally, whether with Firefox or IE. If you comment out this directive and use the default client_body_buffer_size, which is twice the operating system page size (8k or 16k), then submitting a larger picture of around 200k returns an Internal Server Error, whether with Firefox 4.0 or IE 8.0.
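A minimal sketch of the two body-related directives together (the values follow those used elsewhere in this article):

client_max_body_size    300m;   # largest request body (e.g. an upload) accepted
client_body_buffer_size 512k;   # bodies up to 512k are buffered in memory; larger ones are written to a temporary file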

proxy_intercept_errors on;

Makes Nginx intercept responses from the back end whose HTTP status code is 400 or higher (so they can be handled with error_page).
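A hedged sketch of how this pairs with error_page, mirroring the @fetch pattern used in the virtual-host examples below:

location /image {
    proxy_intercept_errors on;
    proxy_pass http://img_relay$request_uri;
    error_page 404 502 = @fetch;   # intercepted back-end errors are re-handled by @fetch
}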

upstream bakend {
    server 127.0.0.1:8027;
    server 127.0.0.1:8028;
    server 127.0.0.1:8029;
    hash $request_uri;
}

Nginx's upstream module currently supports the following distribution methods:

1. Round-robin (default)

Each request is assigned to a different back-end server in turn, in chronological order; if a back-end server goes down, it is removed automatically.
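For example (a minimal sketch; the addresses are illustrative, following the pattern of the examples below):

upstream bakend {
    server 192.168.0.14;
    server 192.168.0.15;
}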

2. weight

Specifies the round-robin probability: the access ratio is proportional to the weight. This is used when back-end server performance is uneven.

For example:

upstream bakend {
    server 192.168.0.14 weight=10;
    server 192.168.0.15 weight=10;
}

3. ip_hash

Each request is assigned according to a hash of the client IP, so each visitor consistently reaches the same back-end server; this solves the session problem.

For example:

upstream bakend {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}

4. fair (third party)

Requests are assigned according to the back-end servers' response times; servers with shorter response times are given priority.

upstream backend {
    server server1;
    server server2;
    fair;
}

5. url_hash (third party)

Requests are assigned according to a hash of the requested URL, so each URL is directed to the same back-end server; this is more effective when the back end is a cache.

Example: add a hash statement in the upstream block; the server statements must not include weight or other parameters; hash_method is the hash algorithm to use.

upstream backend {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}

Tips

upstream bakend {    # define the IPs and states of the load-balanced devices
    ip_hash;
    server 127.0.0.1:9090 down;
    server 127.0.0.1:8080 weight=2;
    server 127.0.0.1:6060;
    server 127.0.0.1:7070 backup;
}



In each server that needs to use load balancing, add:

proxy_pass http://bakend/;

The state of each back-end device can be set as follows:

1. down: indicates that this server temporarily does not participate in the load.

2. weight: the larger the weight, the greater the share of the load.

3. max_fails: the number of failed requests allowed; defaults to 1. When the maximum number is exceeded, the error defined by the proxy_next_upstream module is returned.

4. fail_timeout: the time to pause the server after max_fails failures.

5. backup: requests go to the backup machine only when all the other non-backup machines are down or busy, so this machine carries the lightest load.

Nginx supports defining multiple load-balancing groups at the same time, for use by different servers.
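A hedged sketch of that idea (the group names, addresses and domains here are illustrative): two upstream groups defined side by side, each referenced by a different server block.

upstream app_a {
    server 127.0.0.1:8027;
    server 127.0.0.1:8028;
}

upstream app_b {
    server 127.0.0.1:9090;
    server 127.0.0.1:9091;
}

server {
    listen 80;
    server_name a.example.com;
    location / { proxy_pass http://app_a; }
}

server {
    listen 80;
    server_name b.example.com;
    location / { proxy_pass http://app_b; }
}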

client_body_in_file_only: when set to on, the data from client POST requests is recorded to a file, which is useful for debugging.

client_body_temp_path: sets the directory for these record files; up to 3 levels of directories can be configured.

location: matches URLs. Requests can be redirected or handed to a new proxy / load-balancing group.

## Configure the virtual host

server

{

listen 80;

Configure the listening port.

server_name image.***.com;

Configure the domain name used for access.

location ~* \.(mp3|exe)$ {

Load-balance requests for addresses ending in "mp3" or "exe".

proxy_pass http://img_relay$request_uri;

Sets the address (port or socket) of the proxied server, together with the URL.

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

The three lines above pass information received by the proxy server on to the real (back-end) server.

location /face {
    if ($http_user_agent ~* "xnp") {
        rewrite ^(.*)$ yun_qi_img/face.jpg redirect;
    }
    proxy_pass http://img_relay$request_uri;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    error_page 404 502 = @fetch;
}

location @fetch {
    access_log /data/logs/face.log log404;
    rewrite ^(.*)$ yun_qi_img/face.jpg redirect;
}

location /image {
    if ($http_user_agent ~* "xnp") {
        rewrite ^(.*)$ yun_qi_img/face.jpg redirect;
    }
    proxy_pass http://img_relay$request_uri;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    error_page 404 502 = @fetch;
}

location @fetch {
    access_log /data/logs/image.log log404;
    rewrite ^(.*)$ yun_qi_img/face.jpg redirect;
}

}

## Other examples

server {
    listen 80;
    server_name *.***.com *.***.cn;

    location ~* \.(mp3|exe)$ {
        proxy_pass http://img_relay$request_uri;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location / {
        if ($http_user_agent ~* "XNP") {
            rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif redirect;
        }
        proxy_pass http://img_relay$request_uri;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #error_page 404 http://i1.***img.com/help/noimg.gif;
        error_page 404 502 = @fetch;
    }

    location @fetch {
        access_log /data/logs/baijiaqi.log log404;
        rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif redirect;
    }
}

server {
    listen 80;
    server_name *.***img.com;

    location ~* \.(mp3|exe)$ {
        proxy_pass http://img_relay$request_uri;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location / {
        if ($http_user_agent ~* "XNP") {
            rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif;
        }
        proxy_pass http://img_relay$request_uri;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #error_page 404 http://i1.***img.com/help/noimg.gif;
        error_page 404 = @fetch;
    }

    #access_log off;

    location @fetch {
        access_log /data/logs/baijiaqi.log log404;
        rewrite ^(.*)$ http://i1.***img.com/help/noimg.gif redirect;
    }
}

server {
    listen 8080;
    server_name ngx-ha.***img.com;

    location / {
        stub_status on;
        access_log off;
    }
}

server {
    listen 80;
    server_name imgsrc1.***.net;
    root html;
}

server {
    listen 80;
    server_name ***.com w.***.com;
    # access_log /usr/local/nginx/logs/access_log main;

    location / {
        rewrite ^(.*)$ http://www.***.com/;
    }
}

server {
    listen 80;
    server_name *******.com w.*******.com;
    # access_log /usr/local/nginx/logs/access_log main;

    location / {
        rewrite ^(.*)$ http://www.*******.com/;
    }
}

server {
    listen 80;
    server_name ******.com;
    # access_log /usr/local/nginx/logs/access_log main;

    location / {
        rewrite ^(.*)$ http://www.******.com/;
    }

    # Set the address for viewing Nginx status
    location /NginxStatus {
        stub_status on;
        access_log on;
        auth_basic "NginxStatus";
        auth_basic_user_file conf/htpasswd;
    }

    # Deny access to .ht* files
    location ~ /\.ht {
        deny all;
    }
}

Comments: Variables

The ngx_http_core_module module supports built-in variables whose names are consistent with Apache's built-in variables.

First there are the variables that reflect the lines of the client request header, such as $http_user_agent, $http_cookie, and so on.

In addition, there are some other variables (a small combined example follows this list):

$args: this variable is equal to the arguments in the request line.

$content_length: equal to the value of the Content-Length request header.

$content_type: equal to the value of the Content-Type request header.

$document_root: equal to the value specified by the root directive for the current request.

$document_uri: the same as $uri.

$host: the value of the "Host" header line, or the name of the server the request arrived at if there is no Host line.

$limit_rate: the allowed connection rate limit.

$request_method: the request method, usually "GET" or "POST".

$remote_addr: the client IP address.

$remote_port: the client port.

$remote_user: the user name, as authenticated by ngx_http_auth_basic_module.

$request_filename: the path of the file for the current request, built from the root or alias directive and the request URI.

$request_body_file

$request_uri: the complete original URI, including the arguments.

$query_string: the same as $args.

$scheme: the HTTP scheme (http or https), evaluated as needed, for example:

rewrite ^(.+)$ $scheme://example.com$1 redirect;

$server_protocol: the request protocol, usually "HTTP/1.0" or "HTTP/1.1".

$server_addr: the IP address of the server the request arrived at. Obtaining this value generally requires a system call; to avoid the system call, specify the IP address in the listen directive and use the bind parameter.

$server_name: the name of the server the request arrived at.

$server_port: the port of the server the request arrived at.

$uri: the URI of the current request; it can differ from the initial value, for example after an internal redirect or when index files are used.
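As a hedged illustration (the format name and log path here are hypothetical, not from the original article), several of these variables can be combined into a custom log format:

log_format varsdemo '$remote_addr [$time_local] "$request_method $request_uri $server_protocol" '
                    '$status $body_bytes_sent "$http_user_agent"';

access_log logs/vars.log varsdemo;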

That is the entire content of this article. I hope it helps with your learning, and I hope you will continue to support the community.
