Getting Started with Nginx

Source: Internet

Author: User

Tags: sendfile

Nginx was originally designed as a reverse proxy server to solve the C10K problem; it can reverse-proxy HTTP and SMTP/POP3 requests. It was soon adopted as a web server by a number of web-focused companies, such as Taobao, which developed Tengine on top of it. Nginx works well as a web server because, as a reverse proxy, it already has to cache the state of persistent client connections and can buffer large amounts of image and video data; when a request for images or video arrives, it can respond directly from the cache. Caching is applied throughout Nginx, and after reading the configuration you will find a large number of cache and buffer directives; nearly everything that can be cached has a dedicated directive. Nor has Nginx strayed from its original path: its reverse-proxy capabilities have grown steadily with each iteration, and today it can proxy most TCP/UDP protocols, such as MySQL and DNS.

# Contents

* One: I/O model

* Two: File structure

* Three: Configuration file contents

  1. main

  2. Socket configuration

  3. Path configuration

  4. Request control

  5. Client throttling

  6. File operations

# One: I/O model

Understanding the I/O model lets us see what distinguishes Nginx from Apache, and how Apache went from its heyday into decline. So what is an I/O model, and why does it arise?

It arises because serving a web request requires the computer to call on many kinds of resources, and those resources differ in both calling order and speed. Because of these differences, one part of the processing may already be finished while it still needs data from another, slower resource. This creates situations in which one process waits for another; the I/O model is the way the two processes communicate in such a situation, which resembles a function call in programming.

## Synchronous/Asynchronous

This describes how the caller learns the processing status of the other process.

Synchronous means the callee does not notify the caller; the caller must repeatedly ask the callee for its processing status itself.

Asynchronous means the callee actively notifies the caller once processing is done, so the caller does not have to keep checking.

## Blocking/Non-blocking

This describes the caller's own state after issuing the call.

Blocking means that after the call the caller processes nothing else; it can only wait to receive information from the callee.

Non-blocking means that after the call the caller continues processing other work, rather than only waiting for information from the callee.

## The five I/O models

Blocking, non-blocking, I/O multiplexing, signal-driven, and asynchronous I/O.

To unify terminology, we divide an I/O call into two steps: in the first step, the requested data becomes ready on the callee's side; in the second step, the callee transfers the data to the caller.

Blocking I/O: the caller blocks itself in the first step and waits, synchronously, to be notified; in the second step it blocks itself again and receives the data from the callee.

Non-blocking I/O: in the first step the caller does not block; instead it asks the callee again and again for the processing status. In the second step it blocks itself and receives the data sent by the callee.

I/O multiplexing: in the first step the caller blocks itself, but on a single call that watches multiple sources at once, so it can obtain data from several places. In the second step it blocks itself and receives the data sent by the callee.

Signal-driven I/O: the first step does not block the caller; once it receives the callee's signal, it enters the second step, where it blocks itself and receives the data sent by the callee.

Asynchronous I/O: the first step does not block, and once the caller receives the callee's signal it can use the data directly; the second step does not block either, because the callee writes the data directly into the caller's memory.
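As a toy illustration (this is not Nginx's own code), the multiplexing model can be sketched with Python's standard `selectors` module: in the first step the caller blocks on a single `select()` call that can watch many sockets at once; in the second step it reads from whichever socket became ready.

```python
import selectors
import socket

# Simulate two communicating processes with a connected socket pair.
a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)

sel = selectors.DefaultSelector()
sel.register(b, selectors.EVENT_READ)  # watch b for readable events

a.send(b"hello")  # make b readable

# Step 1: block on one select() call that can monitor many sockets.
events = sel.select(timeout=1)
ready = [key.fileobj for key, _mask in events]

# Step 2: receive the data from whichever socket became ready.
data = ready[0].recv(1024)
print(data)  # b'hello'

sel.unregister(b)
a.close()
b.close()
```

In a real server, `sel.register()` would be called once per client connection, and the single event loop would dispatch each ready socket to its handler.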



# Two: File structure

Program files, configuration files, and the log-rotation configuration.

## Program files

Installed from the RPM package.

Parameters the nginx command accepts:

(no parameter) # start Nginx

-t # check the syntax of the configuration file

-s # send a signal to the Nginx master process; commonly used: reload, stop

## Configuration files

Master configuration file: /etc/nginx/nginx.conf

It primarily configures the main block and the http block. The main block holds the basic configuration of the Nginx processes; the http block holds the HTTP-related settings.

Modular configuration files: /etc/nginx/conf.d/, which primarily configure the individual virtual-host server blocks.

Log-rotation configuration: settings that keep the log files from growing too large.



# Three: Contents of the main configuration file and module configuration files

## Event-driven

Besides the process settings, the main block also contains the events configuration, which is the root of Nginx's ability to support C10K. The event mechanism lets a single process respond to multiple requests, and it differs from the threaded model.

In a multithreaded version, tasks execute in separate threads. These threads are managed by the operating system and may run in parallel on a multiprocessor system or interleaved on a single processor. This allows other threads to continue executing while one thread blocks on a resource, which is more efficient than doing the same work synchronously, but the programmer must write code to protect shared resources from simultaneous access by multiple threads. Multithreaded programs are harder to reason about, because they must handle thread safety through synchronization mechanisms such as locks, reentrant functions, or thread-local storage, and improperly implemented synchronization leads to subtle and painful bugs.

In the event-driven version, the tasks are interleaved but still run in a single thread of control. When performing I/O or another expensive operation, the program registers a callback with the event loop and continues execution once the I/O operation completes. The callback describes how to handle the event. The event loop polls all events and, when one arrives, dispatches it to the callback waiting on it. This approach lets the program get as much done as possible without additional threads. Event-driven programs are easier to reason about than multithreaded ones, because the programmer does not need to worry about thread safety.



## Common directives in the main configuration section

Classification:

* required for normal operation

* performance optimization

* debugging and problem location

* event-driven configuration

### Required configuration for normal operation

1. user user [group]; # specifies the user and group that run the worker processes

Syntax: user user [group];

Default: user nobody nobody;

2. pid /path/to/pid_file; # path of the file that stores the PID of the nginx master process

3. include file | mask; # includes other configuration file fragments

4. load_module file; # dynamic module to load
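A minimal sketch of these directives in a main block (the user name, paths, and module are illustrative values, not requirements):

```nginx
# main block of /etc/nginx/nginx.conf -- illustrative values
user  nginx nginx;                        # run worker processes as user/group "nginx"
pid   /var/run/nginx.pid;                 # file storing the master process PID
include /usr/share/nginx/modules/*.conf;  # include other configuration fragments
# load_module modules/ngx_http_image_filter_module.so;  # example dynamic module, if installed
```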

### Performance optimization-related configurations

1. worker_processes number | auto; # number of worker processes; normally this should equal the number of physical CPU cores on the host

2. worker_cpu_affinity cpumask ...; or worker_cpu_affinity auto [cpumask]; # binds worker processes to CPUs

CPU mask:

00000001: CPU 0

00000010: CPU 1

... ...

3. worker_priority number; # nice value of the worker processes, which sets their scheduling priority; range [-20, 19]

4. worker_rlimit_nofile number; # upper limit on the number of files a worker process may open

### Debugging and problem location

1. daemon on | off; # whether to run Nginx as a daemon

2. master_process on | off; # whether to run Nginx in the master/worker model; default is on

3. error_log file [level]; # error log path and level
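These tuning and debugging directives might be combined as follows (the values are illustrative and should be adapted to the hardware):

```nginx
worker_processes     auto;      # one worker per CPU core
worker_cpu_affinity  auto;      # let Nginx pin workers to cores
worker_priority      -5;        # raise worker scheduling priority slightly
worker_rlimit_nofile 65535;     # per-worker open-file limit

daemon          on;                             # run as a daemon
master_process  on;                             # master/worker model
error_log       /var/log/nginx/error.log warn;  # log warnings and worse
```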

### Event-driven related configurations:

Events {

...

}

1. worker_connections number; # maximum number of concurrent connections each worker process can open

2. use method; # method used to process connection requests; epoll is the usual choice on Linux

3. accept_mutex on | off; # how new connection requests are handled: on means the workers take turns accepting new requests; off means every worker process is notified of each new request
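Put together, an events block using these directives might look like this (the connection count is illustrative):

```nginx
events {
    worker_connections 10240;  # per-worker concurrent connection cap
    use                epoll;  # the usual event method on Linux
    accept_mutex       on;     # workers take turns accepting new connections
}
```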




## Configuration related to sockets

1. server { ... }

Configures a virtual host:

server {

    listen address[:port] | port;

    server_name server_name;

    root /path/to/document_root;

}

2. listen port | address[:port] | unix:/path/to/socket_file;

listen address[:port] [default_server] [ssl] [http2 | spdy] [backlog=number] [rcvbuf=size] [sndbuf=size];

default_server: marks this as the default virtual host;

ssl: restricts the service to SSL connections only;

backlog=number: length of the pending-connection (backlog) queue;

rcvbuf=size: receive buffer size;

sndbuf=size: send buffer size;

3. server_name name ...;

Specifies the host names of the virtual host; several names separated by whitespace may follow;

Supports * as a wildcard matching any string of any length: server_name *.magedu.com www.magedu.*

Supports names beginning with ~ for regular-expression pattern matching: server_name ~^www\d+\.magedu\.com$

Matching order:

(1) exact string match;

(2) left-side * wildcard;

(3) right-side * wildcard;

(4) regular expression;

4. tcp_nodelay on | off;

Whether the TCP_NODELAY option is enabled for connections in keepalive mode;

5. sendfile on | off;

Whether the sendfile() mechanism is enabled;
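A sketch of a virtual host combining the socket-related directives above (the hostname, buffer sizes, and paths are illustrative):

```nginx
server {
    listen      80 default_server backlog=511 rcvbuf=64k sndbuf=64k;
    server_name www.magedu.com *.magedu.com;
    root        /vhosts/www/htdocs;

    tcp_nodelay on;   # send small packets immediately on keepalive connections
    sendfile    on;   # copy file data in kernel space instead of user space
}
```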

## Path-related configuration:

6. root path;

Sets the mapping for web resource paths: the directory on the local file system that corresponds to the URL the user requested; available contexts: http, server, location, if in location;

7. location [ = | ~ | ~* | ^~ ] uri { ... }

location @name { ... }

A server may contain more than one location section to implement the mapping from URIs to file-system paths; Nginx checks all defined locations against the URI the user requested, finds the best match, and applies that location's configuration;

=: exact match of the URI; for example, location = / matches http://www.magedu.com/ but not http://www.magedu.com/index.html

location = / {

    ...

}

~: regular-expression pattern match on the URI, case-sensitive;

~*: regular-expression pattern match on the URI, case-insensitive;

^~: prefix match on the left part of the URI; when it is the best match, the regular-expression locations are not checked;

no modifier: matches all URIs beginning with this URI (prefix match);

Match priority: =, ^~, ~/~*, no modifier;

root /vhosts/www/htdocs/;

http://www.magedu.com/index.html -> /vhosts/www/htdocs/index.html

server {

    root /vhosts/www/htdocs/;

    location /admin/ {

        root /webapps/app1/data/;

    }

}

8. alias path;

Defines a path alias, another document-mapping mechanism; usable only in a location context;

Note: the root and alias directives behave differently inside a location:

(a) root: the given path corresponds to the left side of the location's /uri/, so the full URI is appended to the path;

(b) alias: the given path corresponds to the right side of the location's /uri/, so the matched prefix is replaced by the path;
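The difference shows up side by side (paths are illustrative):

```nginx
# root: the full URI is appended to the given path
location /images/ {
    root /data/;             # /images/a.jpg -> /data/images/a.jpg
}

# alias: the matched /uri/ prefix is replaced by the given path
location /static/ {
    alias /data/pictures/;   # /static/a.jpg -> /data/pictures/a.jpg
}
```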

9. index file ...;

Default resources; contexts: http, server, location;

10. error_page code ... [=[response]] uri;

Defines the URI that is shown for the specified errors.

11. try_files file ... uri;

Checks for the existence of the listed files in order and serves the first one found; if none exists, falls back to the final URI.
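A sketch of these directives together (the file names are illustrative):

```nginx
index      index.html index.htm;             # default resources, tried in order
error_page 404 /404.html;                    # serve /404.html on 404 errors
error_page 500 502 503 504 =200 /oops.html;  # mask server errors as status 200

location / {
    try_files $uri $uri/ /index.html;        # try file, then directory, then fallback
}
```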

## Configuration related to client requests

12. keepalive_timeout timeout [header_timeout];

Sets how long a keepalive connection is held open; 0 disables keepalive; default is 75s;

13. keepalive_requests number;

Maximum number of requests that may be served over one keepalive connection; default is 100;

14. keepalive_disable none | browser ...;

Disables keepalive for the listed browsers;

15. send_timeout time;

Timeout for transmitting a response to the client; it measures the interval between two successive write operations, not the whole transmission;
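These client-connection directives might be tuned as follows (values are illustrative):

```nginx
keepalive_timeout  65s;     # hold idle keepalive connections for 65 seconds
keepalive_requests 500;     # up to 500 requests per keepalive connection
keepalive_disable  msie6;   # old MSIE versions mishandle keepalive
send_timeout       10s;     # max interval between two writes to the client
```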

16. client_body_buffer_size size;

Buffer size used to receive the body of a client request message; defaults to 16k; when the body exceeds this size, it is spooled to disk at the location defined by the client_body_temp_path directive;

17. client_body_temp_path path [level1 [level2 [level3]]];

Sets the temporary storage path for client request bodies, along with the subdirectory structure and depth;

The level numbers count hexadecimal digits of the hashed file name used for each subdirectory level;

client_body_temp_path /var/tmp/client_body 1 2 2

## Configuration to restrict clients:

18. limit_rate rate;

Limits the rate at which the response is transmitted to the client, in bytes/second; 0 means no limit;

19. limit_except method ... { ... }

Restricts the client's use of request methods other than the ones specified;

limit_except GET {

    allow 192.168.1.0/24;

    deny all;

}

## File-operation optimization configuration

20. aio on | off | threads[=pool];

Whether the AIO (asynchronous I/O) feature is enabled;

21. directio size | off;

Enables the O_DIRECT flag on Linux hosts, meaning direct I/O is used when a file is greater than or equal to the given size, e.g. directio 4m;

22. open_file_cache off;

open_file_cache max=N [inactive=time];

Nginx can cache three kinds of information:

(1) file descriptors, file sizes, and last-modification times;

(2) directory lookup results;

(3) information about files that were not found or could not be accessed for lack of permission;

max=N: upper limit on the number of cache entries; when the limit is reached, LRU eviction manages the cache;

inactive=time: inactivity window of a cache entry; an entry becomes inactive if, within this time, it is not hit at least the number of times specified by the open_file_cache_min_uses directive;

23. open_file_cache_valid time;

How often the validity of cache entries is checked; default is 60s;

24. open_file_cache_min_uses number;

Minimum number of hits within the inactive window specified by open_file_cache before an entry counts as active;

25. open_file_cache_errors on | off;

Whether to cache errors (such as file not found) encountered during lookups;
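An illustrative configuration combining directives 22-25 (the numbers are examples, not recommendations):

```nginx
open_file_cache          max=10000 inactive=60s;  # cache up to 10000 entries
open_file_cache_valid    60s;  # recheck entry validity every 60 seconds
open_file_cache_min_uses 2;    # 2 hits within "inactive" keeps an entry active
open_file_cache_errors   on;   # also cache not-found / permission errors
```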



# Summary

The configuration directives are numerous and complex; the key is to remember the principles, which are the real focus. Nginx is a tool that will accompany us for a long time, and understanding it comes gradually. This article covered only the most basic use of the HTTP core module; many modules are not introduced here, and some of them matter even more than this material. :)


This article is from the "Lao Wang Linux Journey" blog; please keep this source: http://oldking.blog.51cto.com/10402759/1892629
