First contact with Nginx


A while ago I logged into the company's web test server and happened to notice an access.log.gz package. Curiosity drove me to download it from the remote server, unzip it, and open it: it was an access log. I had often heard our ops colleagues mention "access logs" and had only a vague impression of what they were; now I finally knew. Asking around, I learned about a piece of server software called Nginx. I spent some spare time getting familiar with it, and wanted to install Nginx on my own machine so that during everyday development and debugging I could also monitor how a few ports are being used. That may not sound like much, but it is still a way of learning; after all, hands-on experience sinks in far deeper than just reading books or articles. Today covers only the configuration; as my study deepens I will also get into load balancing, reverse proxying, optimization, and so on. Corrections are welcome wherever I am wrong, so we can learn from each other and make progress together!

Compared with Apache and the like, Nginx's advantages are well documented online, so I will not dwell on them: high concurrent connection counts, low memory consumption, low cost, simple configuration files, and so on.

(i) Installation

Installing Nginx on Ubuntu is simple; a single command will do.

sudo apt-get install nginx

By the way: if the installation fails and the terminal prints "cannot parse or open the list of packages or status files", like this:

E: Encountered a section with no Package: header

E: Problem with MergeList /var/lib/apt/lists/cn.archive.ubuntu.com_ubuntu_dists_natty_main_i18n_translation-en

E: Unable to parse or open the list of packages or state files.

Workaround:

sudo rm /var/lib/apt/lists/* -vf    # if some files are not deleted, force removal of directories by adding the -r parameter

sudo apt-get update

One more note: if Apache is installed and already running on your machine, stop it first, because Apache and Nginx both default to port 80.

After a successful installation, the nginx executable is available. Open a terminal and run nginx -h to see its command-line options:

nginx -h    show command help

nginx -v    show version information

nginx -V    show version information and configure options

nginx -t    test the configuration file

nginx -T    test the configuration file and dump it

nginx -q    suppress non-error messages during configuration testing

nginx -s signal    send a signal to the master process; signal can be stop (stop Nginx), quit (graceful exit), reopen (reopen log files), or reload (reload the configuration)

nginx -p prefix    set the prefix path; the default is /usr/share/nginx/

nginx -c filename    set the configuration file; the default is /etc/nginx/nginx.conf

nginx -g directives    set global directives outside the configuration file

Note: if these commands fail, it may be a permissions issue; switch to root and try again.

(ii) Configuration file

The master configuration file is nginx.conf; its default path is /etc/nginx/.

fastcgi_params relates to PHP; uwsgi_params relates to Python.

The configuration file parameters and meanings are as follows:

user www www;

The user and group that Nginx worker processes run as. Not specified on Windows.

worker_processes 8;

The number of worker processes. Adjust according to the hardware; it is usually set to the total number of CPU cores or twice that.

error_log /var/logs/error.log crit;

The error log path and level; the level can be one of [debug|info|notice|warn|error|crit].

For details on each error-log level, refer to the post http://blog.csdn.net/solmyr_biti/article/details/50634533

pid /run/nginx.pid;

The path where the process identifier (PID) is stored. The PID file is a text file with a single line of content: the ID of the process. Its role is to prevent multiple copies of a process from starting; only the process that obtains write permission (F_WRLCK) on the PID file (fixed path, fixed file name) can start normally and write its own PID into the file, and extra instances of the same program exit automatically.

The PID file can be used to stop, restart, and smoothly restart Nginx.

The command format is as follows:

kill -SIGNAL $(cat /run/nginx.pid)

For example, kill -HUP $(cat /run/nginx.pid) performs a graceful restart that reloads the configuration.

The main types of signal are:

TERM, INT    fast shutdown

QUIT    graceful shutdown

HUP    graceful restart, reloading the configuration file

USR1    reopen the log files, mainly useful when rotating logs

USR2    smoothly upgrade the executable

WINCH    gracefully shut down the worker processes


worker_rlimit_nofile 51200;

Specifies the maximum number of file descriptors a worker process can open.

In theory this should be the system's maximum number of open files (ulimit -n) divided by the number of Nginx processes, but Nginx does not distribute requests that evenly, so it is best to keep the value consistent with ulimit -n.

On the Linux 2.6 kernel the open-file limit is 65535, so worker_rlimit_nofile should likewise be set to 65535.

This is because Nginx does not allocate requests to processes perfectly evenly: if you set it to 10240 and total concurrency reaches 30,000-40,000, some process may exceed 10240 connections, and a 502 error will be returned.

events

{

use epoll;

Uses the epoll network I/O model. epoll is recommended on Linux and kqueue on FreeBSD; on Windows leave it unspecified.

For what epoll, select, and kqueue are, consult the relevant documentation.

worker_connections 204800;

The maximum number of connections per worker process. Adjust it according to the hardware, in concert with the number of worker processes; make it as large as possible without running the CPU at 100%. Theoretically, each Nginx server can handle at most worker_processes * worker_connections connections.

keepalive_timeout 60;

The keep-alive timeout.

client_header_buffer_size 4k;

The buffer size for the client request header. This can be set according to your system's page size: a request header generally does not exceed 1k, but since the system page is usually larger than 1k, the buffer is set to the page size.

The page size can be obtained with the command getconf PAGESIZE.

There are cases where the request header exceeds 4k, but the client_header_buffer_size value must be an integer multiple of the system page size.

open_file_cache max=65535 inactive=60s;

Specifies a cache for open files, which is not enabled by default. max sets the number of cache entries (recommended to match the number of open files); inactive sets how long a file can go unrequested before its cache entry is deleted.

open_file_cache_valid 80s;

How often to check the cached information for validity.

open_file_cache_min_uses 1;

The minimum number of times a file must be used within the inactive window of the open_file_cache directive. If the count exceeds this number, the file descriptor stays open in the cache; as in the example above, a file not used even once within the inactive time is removed.

}
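Putting the directives above together, the events block would look like the following sketch; the values are the illustrative ones from this walkthrough, not recommendations:

```nginx
events {
    use epoll;                  # event model for Linux; kqueue on FreeBSD
    worker_connections 204800;  # per-worker connection limit from the example above
}
```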

## The following configures the HTTP server, using its reverse proxy feature to provide load-balancing support

http

{

include mime.types;

Sets the MIME types; the types are defined in the mime.types file.

default_type application/octet-stream;


log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';

log_format log404 '$status [$time_local] $remote_addr $host $request_uri $sent_http_location';

Log format settings.

$remote_addr and $http_x_forwarded_for: record the client's IP address;

$remote_user: records the client user name;

$time_local: records the access time and time zone;

$request: records the request's URL and HTTP protocol;

$status: records the request status; success is 200;

$body_bytes_sent: records the size of the response body sent to the client;

$http_referer: records the page from which the request was linked;

$http_user_agent: records information about the client's browser;

The web server is usually placed behind a reverse proxy, so the client's IP address cannot be obtained directly: the address in $remote_addr is that of the reverse proxy server. In the HTTP headers of the forwarded request, the reverse proxy server can add X-Forwarded-For information to record the original client's IP address and the address the original client requested.

access_log logs/host.access.log main;

access_log logs/host.access.404.log log404;

After defining a format with the log_format directive, use the access_log directive to specify the log file path and which format it uses.

gzip on;

Enables gzip-compressed output to reduce network transmission.

gzip_min_length 1k;

Sets the minimum number of bytes a page must have to be compressed; the size is taken from the Content-Length header. The default is 20. Setting it above 1k is recommended: below 1k, compression may actually make the data larger.

gzip_buffers 4 16k;

Sets how many buffers of what size the system allocates to hold the gzip output stream. "4 16k" means memory is requested in 16k units, four 16k buffers sized against the original data.

gzip_http_version 1.0;

Identifies the HTTP protocol version. Early browsers did not support gzip compression and users would see garbled output, so this option was added for compatibility with older versions. If you use Nginx as a reverse proxy and still expect gzip compression to work, set this to 1.0, because the proxied communication uses HTTP/1.0.

gzip_comp_level 6;

The gzip compression level: 1 gives the least compression and the fastest processing; 9 gives the most compression but is the slowest (faster transmission at a higher CPU cost).

gzip_types

Compresses the listed MIME types; the "text/html" type is always compressed, whether or not it is specified.

gzip_proxied any;

Takes effect when Nginx acts as a reverse proxy: it decides whether responses returned by the backend server are compressed. A precondition for matching is that the backend response carries a "Via" header.

gzip_vary on;

Adds a "Vary: Accept-Encoding" header to the response, allowing front-end cache servers to cache gzip-compressed pages; for example, Squid can then cache data compressed by Nginx.
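Collected together, the gzip directives discussed above form a block like the following sketch; note the list of MIME types is illustrative (it is not given in the original walkthrough), and text/html is always compressed regardless:

```nginx
gzip              on;
gzip_min_length   1k;
gzip_buffers      4 16k;
gzip_http_version 1.0;
gzip_comp_level   6;
gzip_types        text/plain text/css application/xml;  # illustrative list
gzip_proxied      any;
gzip_vary         on;
```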

server_names_hash_bucket_size 128;

The hash tables that hold server names are controlled by the directives server_names_hash_max_size and server_names_hash_bucket_size. The hash bucket size parameter is always equal to the hash table's bucket size and is a multiple of the processor cache-line size, which reduces memory accesses and so speeds up key lookups in the processor. If the hash bucket size equals one processor cache line, the worst-case lookup of a key takes two memory accesses: the first determines the storage unit's address, the second finds the key within that unit. Therefore, when Nginx hints that hash max size or hash bucket size should be increased, increase the former parameter first.

client_header_buffer_size 4k;

The buffer size for the client request header. As above, it can be set according to the system page size (getconf PAGESIZE): a request header generally does not exceed 1k, but since the system page is usually larger than 1k, the buffer is set to the page size.

large_client_header_buffers 8 128k;

Buffers for large client request headers. By default Nginx reads header values into the client_header_buffer_size buffer; if a header is too large for it, large_client_header_buffers is used instead.

open_file_cache max=102400 inactive=20s;

This directive enables the cache and specifies the maximum number of cached entries and their lifetime. We can set a relatively high maximum and let entries be erased after they have been inactive for more than 20 seconds.

open_file_cache_errors on | off

Default: open_file_cache_errors off. Context: http, server, location. Specifies whether file-lookup errors are also recorded in the cache.

open_file_cache_min_uses

Syntax: open_file_cache_min_uses number. Default: open_file_cache_min_uses 1. Context: http, server, location. Specifies the minimum number of times a file must be used within the inactive window of the open_file_cache directive; with a larger value, only descriptors used at least that often remain open in the cache.

open_file_cache_valid

Syntax: open_file_cache_valid time. Default: open_file_cache_valid 60. Context: http, server, location. Specifies when to check the validity of items in open_file_cache.

client_max_body_size 300m;

Sets the maximum size of files uploaded through Nginx.

sendfile on;

Enables the efficient file-transfer mode. The sendfile directive specifies whether Nginx calls the sendfile() system call to output files, which avoids context switches between user space and kernel space. Set it to on for normal applications; for disk-I/O-heavy workloads such as download services, set it to off to balance disk and network I/O processing speed and reduce system load.

tcp_nopush on;

Allows or disallows the TCP_CORK socket option; it only takes effect when sendfile is used.

proxy_connect_timeout 90;

The timeout for connecting to the backend server: how long to wait for a response after initiating the handshake.

proxy_read_timeout 180;

How long to wait for the backend server to respond after a successful connection, i.e. the time the request spends queued and being processed on the backend (you could call it the backend's request-processing time).

proxy_send_timeout 180;

The time allowed for the backend server to return data: within the specified time, the backend must finish sending all its data.

proxy_buffer_size 4k;

Sets the buffer size for the first part of the response read from the proxied server, which normally holds a small response header. By default it equals the size of one buffer specified by the proxy_buffers directive, but it can be set smaller.

proxy_buffers 4 32k;

Sets the number and size of the buffers used to read the response from the proxied server. The default size is one page, which is 4k or 8k depending on the operating system.

proxy_busy_buffers_size 64k;

The buffer size under high load (proxy_buffers * 2).

proxy_temp_file_write_size 64k;

Limits how much data is written at a time when the cached proxy response is spooled to a temporary file. proxy_temp_path (settable at compile time) specifies which directory such files are written to.

proxy_temp_path /data0/proxy_temp_dir;

The paths specified by proxy_temp_path and proxy_cache_path must be on the same partition.

proxy_cache_path /data0/proxy_cache_dir levels=1:2 keys_zone=cache_one:200m inactive=1d max_size=30g;

# Sets the in-memory cache zone to 200MB; content not accessed for 1 day is automatically cleared, and the hard-disk cache space is capped at 30GB.

keepalive_timeout 120;

The long-connection (keep-alive) timeout in seconds. This parameter is very sensitive: it interacts with the browser type, the backend server's timeout settings, and operating-system settings, and could fill another article on its own. With long connections, requests for many small files avoid the cost of re-establishing connections; but if a large file upload cannot finish within the timeout (say 65s), the upload fails. And if you set the time too long with many users, holding connections open for long periods consumes resources.

send_timeout 120;

Specifies the timeout for responding to the client. It is limited to the time between two activities on the connection; if the client shows no activity within this time, Nginx closes the connection.

tcp_nodelay on;

Tells Nginx not to buffer the data but to send it immediately in small bursts. Set this for applications that need small pieces of data delivered promptly rather than waiting for a buffer to fill.

client_body_buffer_size 512k;

If this is set to a larger value such as 256k, then submitting any image smaller than 256k works normally in both Firefox and IE. If the directive is commented out, using the default client_body_buffer_size, which is twice the operating-system page size (8k or 16k), the problem appears:

with either Firefox 4.0 or IE 8.0, submitting a somewhat larger image of around 200k returns a 500 Internal Server Error.

proxy_intercept_errors on;

Makes Nginx intercept responses whose HTTP status code is 400 or higher.
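The proxy directives above would typically sit together in the http (or server) context. A consolidated sketch, using the values from this walkthrough:

```nginx
proxy_connect_timeout      90;
proxy_read_timeout         180;
proxy_send_timeout         180;
proxy_buffer_size          4k;
proxy_buffers              4 32k;
proxy_busy_buffers_size    64k;   # roughly proxy_buffers * 2
proxy_temp_file_write_size 64k;
proxy_intercept_errors     on;
```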

upstream bakend {

server 127.0.0.1:8027;

server 127.0.0.1:8028;

server 127.0.0.1:8029;

hash $request_uri;

}

This brings in the topic of load balancing.

Nginx's upstream module currently supports the following distribution methods:

1. Round robin (default)

Each request is assigned to a different backend server in order of arrival; if a backend server goes down, it can be removed automatically.

2. weight

Specifies the polling probability: the weight is proportional to the access ratio, used when backend server performance is uneven.

For example:

upstream bakend {

server 192.168.0.14 weight=10;

server 192.168.0.15 weight=10;

}

3. ip_hash

Each request is assigned according to the hash of the client IP, so each visitor consistently reaches the same backend server; this solves the session problem.

For example:

upstream bakend {

ip_hash;

server 192.168.0.14:88;

server 192.168.0.15:80;

}

4. fair (third party)

Requests are assigned according to the backend server's response time, with shorter response times given priority.

upstream backend {

server server1;

server server2;

fair;

}

5. url_hash (third party)

Requests are assigned by the hash of the requested URL, so each URL is directed to the same backend server; this is more efficient when the backend servers are caches.

Example: add a hash statement in the upstream block; the server statements must not carry weight or other parameters; hash_method selects the hash algorithm to use.

upstream backend {

server squid1:3128;

server squid2:3128;

hash $request_uri;

hash_method crc32;

}

# Define the IPs and states of the load-balancing devices

upstream bakend {

ip_hash;

server 127.0.0.1:9090 down;

server 127.0.0.1:8080 weight=2;

server 127.0.0.1:6060;

server 127.0.0.1:7070 backup;

}

In servers that need to use load balancing, add:

proxy_pass http://bakend/;

The status of each device is set to:

1. down means the server temporarily does not participate in the load.

2. weight: the larger the weight, the greater the share of the load.

3. max_fails: the number of failed requests allowed, defaulting to 1. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.

4. fail_timeout: how long to pause after max_fails failures.

5. backup: the backup machine is only requested when all the other non-backup machines are down or busy, so it carries the lightest load.

Nginx supports configuring multiple load-balancing groups at the same time, for use by different servers.

client_body_in_file_only: set to on to log the client's POST data to files for debugging.

client_body_temp_path: sets the directory for those record files; up to three levels of subdirectories can be configured.

location matches against the URL and can redirect or hand the request to a new proxy/load-balancing group.
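As a minimal, hypothetical sketch of the two directions a location can take (the paths /static/ and /app/ are illustrative, not from the original article):

```nginx
location /static/ {
    root /var/www;               # serve files directly from disk
}

location /app/ {
    proxy_pass http://bakend;    # hand off to the upstream group defined earlier
}
```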

## Configure a virtual host

server

{

listen 80;

Configures the listening port.

server_name image.***.com;

Configures the access domain name.

location ~* \.(mp3|exe)$ {

A regular expression: load-balance addresses ending in "mp3" or "exe".

proxy_pass http://img_relay$request_uri;

Sets the port or socket of the proxied server, and the URL.

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

The three lines above pass the user information received by the proxy server on to the real server.


}


location /face {

if ($http_user_agent ~* "xnp") {

rewrite ^(.*)$ http://211.151.188.190:8080/face.jpg redirect;

}

# This involves Nginx's rewrite rules; for reasons of space, they are left to the next installment.

proxy_pass http://img_relay$request_uri;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

error_page 404 502 = @fetch;

}

}

}

As you can see from the above, the main format of the nginx.conf file is:

......

events

{

......

}


http

{

......

server

{

......

}

server

{

......

}

......

}


Inheritance is a notable feature of Nginx configuration. By analogy with style definitions in a CSS file, where a child element inherits the parent element's styles and may choose whether to override them, Nginx configuration has a similar inheritance relationship.

To understand Nginx's configuration inheritance model, we need to know that the configuration is made up of blocks; a block is also called a context. For example, directives defined in the server context are stored in a server{} block, and directives defined in the http context are stored in the http{} block.

Nginx has the following possible contexts, ordered from highest to lowest:

global

http

server

if

location

nested location

if in location

limit_except

The default inheritance direction is downward (lower contexts inherit from higher ones), not sideways or upward. A common scenario is a rewrite that jumps a request from one location to another: the directives defined in the first location block are then ignored, and only the directives defined in the second location block take effect in the location context. This is only a brief mention here.
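A sketch of that inheritance: a directive set in the http context applies to every server below it unless a lower context overrides it (the values here are purely illustrative):

```nginx
http {
    gzip on;              # inherited by both servers below

    server {
        listen 8080;      # uses gzip on, inherited from http
    }

    server {
        listen 8081;
        gzip off;         # overrides the inherited value for this server only
    }
}
```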

In fact, Nginx configuration goes well beyond this. Nginx has many modules, and each module may have its own special configuration directives; what is covered here is just some basic configuration information. As I learn and understand more deeply, I will gradually add to it. Criticism is welcome wherever I am wrong!

Reference "Real Nginx"

