Introduction to the Nginx conf file structure and related configuration


This article briefly introduces the structure of the Nginx conf file and how it is configured: how to configure Nginx to serve static content, how to configure Nginx as a proxy server, and how to configure the forwarding of requests to FastCGI services.

Nginx process model: one master process and n worker processes. The master process is responsible for reading the configuration and managing the worker processes; the actual requests are handled by the worker processes. Nginx uses an event-driven, multiplexing work model.

1. Starting and stopping Nginx

Nginx can be started by executing the Nginx binary directly. Once Nginx is running, the -s parameter can be used to control it:

nginx -s reload    # reload the configuration file
nginx -s reopen    # reopen the log files
nginx -s stop      # fast shutdown of the nginx service
nginx -s quit      # graceful shutdown of the nginx service, waiting for the worker processes to finish processing all requests

The process by which Nginx reloads a configuration file: the master process receives the reload signal, first verifies the configuration syntax, and then applies the new configuration. If this succeeds, the master process starts new worker processes and sends a shutdown signal to the old worker processes; otherwise, the master process rolls back to the old configuration and continues working.

In the second step, when an old worker process receives the shutdown signal, it stops accepting new connection requests, waits until all existing requests have been processed, and then exits.

2. Structure of the nginx.conf file

The configuration of Nginx is divided into several modules by specific identifiers (directives).
Directives are divided into simple directives and block directives. A simple directive has the format [name parameters] and ends with a semicolon. A block directive has the same structure as a simple directive, but instead of ending with a semicolon it ends with a pair of curly braces {}, and a block directive can contain both simple directives and other block directives. A block directive that can contain other directives is called a context (e.g. events, http, server, location); see the annotated sketch below.
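As a hedged illustration of this structure, here is a minimal annotated sketch of a nginx.conf (the values are only examples, not recommendations):

worker_processes  1;              # simple directive in the main context

events {                          # block directive (context): events
    worker_connections  1024;     # simple directive inside the events context
}

http {                            # block directive (context): http
    server {                      # server context, nested in http
        listen       80;
        location / {              # location context, nested in server
            root   html;          # simple directive inside the location context
        }
    }
}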

In the conf file, all simple directives that are not part of any block directive belong to the main context; the http block directive also belongs to the main context, and the server block directive belongs to the http context.

2.1 Configuring static access

A very important part of a web server's work is serving static content, such as images and HTML pages. Depending on the request, Nginx can return different files from local directories to the client through different configurations.
Open the nginx.conf file under the installation directory. In nginx-1.8.0, the default configuration file already contains a default server block inside the http block, with the following contents:

server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }

        error_page   502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
}

In general, the conf file has multiple server blocks, which are differentiated by the listen port (the default is port 80) and server_name, providing different services for different requests, as follows:

server {
    listen      80;
    server_name a.example.org;
    ...
}

The format of the listen directive is as follows:

Syntax:
    listen address[:port] [default_server] [ssl] [http2 | spdy] [proxy_protocol] [setfib=number] [fastopen=number] [backlog=number] [rcvbuf=size] [sndbuf=size] [accept_filter=filter] [deferred] [bind] [ipv6only=on|off] [reuseport] [so_keepalive=on|off|[keepidle]:[keepintvl]:[keepcnt]];
    listen port [default_server] [ssl] [http2 | spdy] [proxy_protocol] [setfib=number] [fastopen=number] [backlog=number] [rcvbuf=size] [sndbuf=size] [accept_filter=filter] [deferred] [bind] [ipv6only=on|off] [reuseport] [so_keepalive=on|off|[keepidle]:[keepintvl]:[keepcnt]];
    listen unix:path [default_server] [ssl] [http2 | spdy] [proxy_protocol] [backlog=number] [rcvbuf=size] [sndbuf=size] [accept_filter=filter] [deferred] [bind] [so_keepalive=on|off|[keepidle]:[keepintvl]:[keepcnt]];
Default:
    listen *:80 | *:8000;
Context:
    server

The listen directive's parameter can be an IP address, a hostname, ip/hostname:port, a port alone, or a UNIX-domain socket path. For example:

listen 127.0.0.1:8000;
listen 127.0.0.1;
listen 8000;
listen *:8000;
listen localhost:8000;
listen unix:/var/run/nginx.sock;

The listen and server_name of a server block cannot both be identical to those of another server block; otherwise the following warning occurs when the configuration is loaded:

server {
    listen      80;
    server_name a.example.org;
    ...
}
server {
    listen      80;
    server_name a.example.org;
    ...
}

# nginx -s reload
nginx: [warn] Conflicting server name "a.example.org" on 0.0.0.0:80, ignored

When Nginx determines which server should handle a client request, it parses the URI in the request header (here and in most of the following, this refers to the relative URI). It then matches it against the parameters of the location directives in the server block, according to the matching rules described in the next section. For example:

server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
}

The location block directive matches the URI of the client request against its parameter, and a matching request is directed to the local file system directory defined by the root directive: the URI is appended to the root parameter to produce a local file path, i.e. root parameter + URI. In this example the parameter "/" matches all requests and is generally the default. The example maps to the html/ directory, which by default resolves to html/ under the installation directory. The two simple directives inside this location block mean the following: root specifies the resource lookup path for the matched URI; here html is a relative path, relative to the Nginx installation directory. index specifies the name of the index file served for the site root; multiple names can be configured, separated by spaces, and they are searched in the configured order.
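As a hedged illustration of the root + URI mapping described above (the paths below are hypothetical):

location /images/ {
    root /data/www;
    # a request for /images/logo.png is served from the local file
    # /data/www/images/logo.png   (root parameter + URI)
}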

By default, the Nginx installation directory contains an html directory; on my machine it is /usr/local/nginx-1.8.0/html, and it contains the default Nginx welcome page. For example, after installing Nginx and starting it directly from the Nginx binary, accessing the corresponding domain name returns that default welcome page.

If multiple location blocks match, Nginx selects the one with the longest matching prefix (the longest prefix matching principle).
For example, suppose a location block is added to the server above with the matching parameter "/htdocs/" and the redirected resource path configured as follows:

server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }

        location /htdocs {
            root   /home/anonymalias;
            index  index.html;
        }
}

When http://anonymalias.oicp.net:8008/htdocs/ is accessed, /home/anonymalias/htdocs/index.html is matched.

2.2 The location directive

The Nginx location directive is the core of the configuration: it matches the path part of the client request URI, and then serves different static content for different requests, or redirects them to an internal server via the reverse proxy.

Nginx preprocesses each client request: it first decodes URIs encoded in the '%XX' form (URIs use the % plus hexadecimal format to represent non-standard letters and characters in browsers and plug-ins). It then resolves the relative path symbols '.' and '..' in the path, and compresses two or more adjacent '/' characters into a single '/'. This produces a clean path, which is then matched against the parameters of the location directives in the server.

Location syntax:
    location [ = | ~ | ~* | ^~ ] uri { ... }
    location @name { ... }

The parameter of the location directive can be a prefix string or a regular expression.
Prefix string modifiers:
no modifier: an ordinary prefix match;
"=" modifier: defines an exact match; exact matches have the highest priority;
"^~" modifier: if this location is the longest matching prefix, locations whose parameter is a regular expression are not searched.
Regular expression modifiers:
"~" modifier: the location parameter is matched case-sensitively;
"~*" modifier: the location parameter is matched case-insensitively.

location ~* \.(gif|jpg|jpeg)$ {
    [ configuration A ]
}

location ~ \.(gif|jpg|jpeg)$ {
    [ configuration B ]
}

If the request URI is /images/a.jpg, only configuration A is matched, because regular expression locations are checked in configuration order and the search stops at the first match.

Location matching process:
1. First, Nginx matches the request URI against the prefix string location parameters, finds the longest match, and saves the result;
2. If the longest prefix match is preceded by the "^~" modifier, the search stops;
3. If the longest matching result is preceded by the "=" modifier, the search also stops;
4. Next, all locations with regular expression parameters are matched in configuration order; when the first regular expression matches, the search of the remaining regular expressions stops.
Consider the following example from the official documentation:

location = / {
    [ configuration A ]
}

location / {
    [ configuration B ]
}

location /documents/ {
    [ configuration C ]
}

location ^~ /images/ {
    [ configuration D ]
}

location ~* \.(gif|jpg|jpeg)$ {
    [ configuration E ]
}
When "/" is requested, the match to a when the request "/index.html" matches to B; When the request "documens/document.html" matches to C; When the request "/images/1.gif" matches to D; Matching Process: The first match to D, because the location parameter of D contains the modifier "^~", when the match to D, no longer search for the parameters of the regular expression of the location, when the request "/documents/1.jpg", will match to E; Matching Process: first match to C, this time will save the match result of C, then continue the search parameter is regular location, the result found that e match, then discard the previous match to C 2.3 Configuring Proxy Server

Another service for which Nginx is used very frequently is as a proxy server: it receives external client requests, forwards them to the internally proxied server, then receives the response from the proxied server and returns it to the client.

The configuration of a proxy server has the same structure as the static web service configuration above. The role of the proxy server is to forward requests to an internal server and then pass the internal server's response back to the client. How the internal server's response is returned to the client is handled by Nginx internally and requires no extra configuration; at the configuration level, all we have to do is specify how client requests are sent to the internal server.
The following is an example of the official configuration of the simplest proxy server in Nginx:

server {
    location / {
        proxy_pass http://localhost:8080/;
    }

    location ~ \.(gif|jpg|png)$ {
        root /data/images;
    }
}

The meaning of this configuration: all requests whose URIs end in .gif, .jpg, or .png are mapped to the local disk directory /data/images, and all other URI requests are passed to the configured proxied server: http://localhost:8080/.

The Nginx reverse proxy module has several important directives:

proxy_pass:
This is the basic directive of the reverse proxy; it sets the protocol and address of the proxied server. For a client request, the proxy_pass directive forwards the URI in the following way:
If the proxy_pass parameter does not contain a URI, the request URI is passed to the internal server as-is. If the proxy_pass parameter contains a URI, the part of the client request URI that matches the location parameter is replaced by the URI given in the proxy_pass parameter.
For example, a request for 127.0.0.1/name/index.html will be forwarded as 127.0.0.1/remote/index.html:

location /name/ {
    proxy_pass http://127.0.0.1/remote/;
}
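For contrast, a minimal sketch of the no-URI case described above (the address is illustrative): because the proxy_pass parameter has no URI part, the client request URI is passed to the proxied server unchanged.

location /name/ {
    proxy_pass http://127.0.0.1;
    # a request for /name/index.html is forwarded as
    # http://127.0.0.1/name/index.html
}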

Note: before 1.1.12, if the proxy_pass directive did not contain a URI, the URI of the original request could in some cases be processed before being passed to the proxied server.

In some cases, it cannot be determined what the request URI part will be replaced with:
1. When the parameter of the location directive is a regular expression, the proxy_pass parameter should not contain a URI part; see the sketch below.
2. When the URI has been changed by a rewrite directive inside the location block.
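A minimal sketch of the first case, assuming a hypothetical backend on 127.0.0.1:8080: inside a regular-expression location, proxy_pass is given without a URI part, so the request URI is passed along as-is.

location ~ ^/api/ {
    proxy_pass http://127.0.0.1:8080;
}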
proxy_pass_header:
Syntax: proxy_pass_header field;
The field parameter is the name of an HTTP header; see the [HTTP/1.1 protocol][1], chapter 14, for the definition of all header fields of the HTTP protocol.

This directive allows a specific HTTP header to be passed from the proxied server back to the client, because some headers are not passed back by default. By default, Nginx does not pass the "Date", "Server", "X-Pad" and "X-Accel-..." headers of the proxied server's response back to the client. Conversely, the proxy_hide_header directive is used to specify which additional headers are not passed back to the client.
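A hedged usage sketch (the header names are chosen for illustration): proxy_pass_header re-enables a header that Nginx hides by default, while proxy_hide_header suppresses an additional one.

proxy_pass_header  Server;         # pass the proxied server's Server header to the client
proxy_hide_header  X-Powered-By;   # do not pass this header back to the client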
proxy_set_header:
Syntax: proxy_set_header field value;
This directive redefines or adds a field to the request header that is passed from the client to the proxied server. The value can be a literal, a variable, or a combination of both.
If the value of a header field is set to an empty string, that header is not passed to the proxied server, for example:

proxy_set_header Accept-Encoding "";

By default, only the following two fields are redefined:

proxy_set_header Host       $proxy_host;
proxy_set_header Connection close;

In fact, the defaults are often not what we expect; for example, the Host header is set to the host and port of the proxied server given in the proxy_pass directive. In practice, we often want the Host header to retain the client's original request information:

proxy_set_header Host       $host;

In general, the location for a proxied server makes the following basic settings:

proxy_redirect     off;
proxy_set_header   Host             $host;
proxy_set_header   X-Real-IP        $remote_addr;
proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
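Putting these pieces together, a minimal sketch of a complete reverse-proxy server block, assuming a hypothetical backend on localhost:8080:

server {
    listen       80;
    server_name  example.org;

    location / {
        proxy_pass         http://localhost:8080/;
        proxy_redirect     off;
        proxy_set_header   Host             $host;
        proxy_set_header   X-Real-IP        $remote_addr;
        proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
    }
}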
2.5 Configuring FastCGI proxying

What is CGI:
CGI stands for Common Gateway Interface; it is the interface between the web server and external applications (CGI programs). Put less obscurely, CGI is the protocol standard for communication between web servers and CGI programs. Many people, and many articles on the web, do not distinguish between CGI and CGI programs, which are two different concepts.
The CGI/1.1 protocol standard is described in detail in the RFC 3875 document [2].

In the process of invoking a CGI program, the web server plays the role of an application gateway. When the web server receives a client request, it converts the client's HTTP request into a CGI request according to the CGI standard and hands that request to the CGI program. After the CGI program has processed it, the web server converts the CGI response (again according to the CGI standard) into an HTTP response and sends it back to the client.

There are two ways to pass data in a CGI request: meta-variables and the message body.
meta-variables: data is passed by defining system environment variables; the names of the base variables are defined by the RFC and detailed in that document.
message-body: data is passed through a system-defined channel such as a standard input stream or pipe. Meta-variable delivery is limited by the size of the data, so larger payloads are passed through the message body.

Corresponding to the above two CGI data transfer paths, there are two corresponding request methods, determined by the value of the REQUEST_METHOD meta-variable:
* the GET method corresponds to data transfer via meta-variables;
* the POST method corresponds to data transfer via the message body.

There are of course other methods, which are not elaborated here; they are explained in detail in chapter 4, "The CGI Request", of the CGI/1.1 RFC 3875 document.

The process of executing a CGI program is as follows (the fork-and-execute model):
1. The web server receives a request from the client (browser), launches a CGI program, and passes data through environment variables and standard input.
2. The CGI process starts its parser, loads configuration (such as business-related configuration), connects to other servers (such as database servers), performs the logical processing, and so on.
3. The CGI process passes the processing result back to the web server via standard output and standard error.
4. The web server receives the CGI result, builds an HTTP response, returns it to the client, and kills the CGI process.

As described above, the web server and the CGI program exchange data via environment variables, standard input, standard output, and standard error.

So much for what CGI is and how CGI programs work; now it is time to explain what FastCGI is.
FastCGI (Fast Common Gateway Interface) is an improvement on the Common Gateway Interface (CGI); put simply, it adds some extensions on top of the CGI base. It is therefore also a protocol standard for communication between web servers and CGI programs, except that it is a recommended standard. FastCGI is dedicated to reducing the overhead of the interaction between the web server and CGI programs, allowing the server to handle more web requests at the same time. Unlike CGI, which creates a new process for each request, FastCGI uses persistent processes to handle a series of requests. These processes are managed by the FastCGI process manager, not by the web server.

FastCGI's developers are committed to promoting FastCGI as an open standard. To this end, free FastCGI application libraries (C/C++, Java, Perl, Tcl) and server modules (Apache, IIS, lighttpd) are available for today's popular free web servers.

FastCGI's execution process:
1. The FastCGI execution environment is loaded and initialized when the web server starts. For example: IIS ISAPI, Apache mod_fastcgi, Nginx ngx_http_fastcgi_module, lighttpd mod_fastcgi.
2. The FastCGI process manager initializes itself, starts multiple CGI interpreter processes, and waits for connections from the web server. When starting the FastCGI processes, they can be configured to listen in two ways: on an IP address/port or on a UNIX-domain socket (see the sketch after this list).
3. When a client request reaches the web server, the web server forwards the request to the FastCGI main process over a socket; the FastCGI main process selects and connects to a CGI interpreter, and the web server sends the CGI environment variables and standard input to the FastCGI child process.
4. The FastCGI child process finishes processing and returns standard output and error information to the web server over the same socket connection. When the FastCGI child process closes the connection, the request is considered processed.
5. The FastCGI child process then waits for and handles the next connection from the web server.
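As referenced in step 2, a hedged sketch of the two ways Nginx can reach the FastCGI process manager (the port and socket path are only examples):

location / {
    fastcgi_pass  127.0.0.1:9000;               # over TCP
    # fastcgi_pass  unix:/var/run/fastcgi.sock; # or over a UNIX-domain socket
}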

Because FastCGI programs do not need to spawn new processes continuously, the pressure on the server is greatly reduced and application efficiency is high; its speed is at least five times that of the CGI technique. It also supports distributed deployments, where FastCGI programs can be executed on hosts other than the web server.

Summary: CGI is a so-called short-lived application, while FastCGI is a so-called long-lived application. FastCGI is like a resident (long-lived) CGI: it keeps running, so it does not have to pay the cost of a fork for every request (the fork-and-execute model is CGI's most criticized aspect).

Because Nginx cannot directly execute external programs the way Apache can, Nginx does not support CGI programs directly, but it does support FastCGI proxying, since Nginx supports reverse proxying. Configuring a FastCGI proxy in Nginx is as simple as configuring a reverse proxy; only the directives differ, as follows:

server {
    location / {
        fastcgi_pass  localhost:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param QUERY_STRING    $query_string;
    }

    location ~ \.(gif|jpg|png)$ {
        root /data/images;
    }
}
The fastcgi_pass directive replaces proxy_pass and forwards the CGI request to the address/port of the corresponding FastCGI process manager. The fastcgi_param directive sets the parameters that are passed to the FastCGI program; the parameter names follow the CGI protocol's meta-variable conventions, because FastCGI is based on CGI.
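Beyond SCRIPT_FILENAME and QUERY_STRING, here is a hedged sketch of further meta-variables that are commonly passed along, modeled on the fastcgi_params file shipped with Nginx:

fastcgi_param  REQUEST_METHOD    $request_method;
fastcgi_param  CONTENT_TYPE      $content_type;
fastcgi_param  CONTENT_LENGTH    $content_length;
fastcgi_param  SERVER_PROTOCOL   $server_protocol;
fastcgi_param  REMOTE_ADDR       $remote_addr;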

[1] http://www.rfc-editor.org/info/rfc2616
[2] http://www.rfc-editor.org/info/rfc3875
[3] http://nginx.org/en/docs/
[4] http://www.fastcgi.com/drupal/
[5] http://www.biaodianfu.com/cgi-fastcgi-wsgi.html
[6] http://www.cnblogs.com/skynet/p/4173450.html
