Nginx practical tutorial (II): getting started with the configuration file

Nginx configuration file structure

Nginx configuration file directives (commands) come in two forms: simple directives and block directives.

A simple directive consists of a name, parameters, and a terminating semicolon (;). For example, in listen 80 backlog=4096; "listen" is the directive name, "80" and "backlog=4096" are parameters, and the ";" marks the end of the directive.

A block directive consists of a name, parameters, and a pair of curly braces ({}). For example, in location /imag { }, "location" is the directive name, "/imag" is the parameter, and the braces enclose other directives and mark the end of the block. If the braces of a block directive may contain other simple or block directives, the block directive is called a "context"; most commonly used block directives are contexts.

Directives that are not enclosed by any block directive are considered to be in the main context. In other words, the main context is the outermost context of the nginx configuration file, and every directive sits either in the main context or in a subcontext of it. See the following configuration file example:

# nginx.conf
worker_processes 2;
events {
    use epoll;
    worker_connections 1024;
}
http {
    include       mime.types;
    upstream server_group_a {
        server 192.168.1.1:8080;
        server 192.168.1.2:8080;
    }
    server {
        listen       80;
        server_name  www.example.com;
        location / {
            proxy_pass  http://server_group_a;
        }
    }
}

In the preceding example, the worker_processes, events, and http directives are in the main context; the use and worker_connections directives are in the events context; the include, upstream, and server directives are in the http context; and the two server directives inside upstream are in the upstream context, and so on.

Nginx is composed of functional modules, and the configuration file follows this modular structure. The nginx core module provides the global configuration directives, while feature-specific directives are provided by other modules. In the preceding example, the worker_processes and events directives come from the nginx core module, the http directive comes from the http module, and proxy_pass comes from a submodule of the http module.

A default nginx installation includes a set of common functional modules, and additional modules can be selected when compiling and installing from source. When configuring nginx, consult the documentation of each functional module: it describes the directives the module provides and the contexts in which those directives may appear. Going the other way, working out from a context which directives it can contain, is unreliable, because different installations include different modules and therefore different directives; this takes some configuration experience, and beginners are best off starting from existing examples.

Besides http, nginx also provides the mail (mail proxy) and stream (TCP/UDP proxy, since v1.9.0) functional modules.

 

Global configuration directives
  • Syntax: include file | mask;
  • Default Value: none
  • Context: any

The include directive can be used in any context to bring the directives in other configuration files into the context where it appears. The included directives must satisfy the usual syntax and context requirements. Example:

http {
    include mime.types;
    include vhosts/*.conf;
}

This brings the mime.types file and every file ending with ".conf" in the vhosts directory into the http context.

 

  • Syntax: daemon on | off;
  • Default Value: daemon on;
  • Context: main

Specifies whether nginx runs as a daemon.
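For example, a minimal sketch (running in the foreground this way is common under containers or process supervisors; that use case is an assumption of this example, not something stated in the article):

# Run nginx in the foreground instead of as a daemon (e.g., under a container or supervisor)
daemon off;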

 

  • Syntax: debug_points abort | stop;
  • Default Value: none
  • Context: main

This directive is used for debugging internal nginx errors, such as socket leaks in worker processes. The nginx code contains a number of debug points. If debug_points is set to abort, nginx produces a core dump file when a debug point is reached; if set to stop, the process is stopped instead.

 

  • Syntax: error_log file [level];
  • Default Value: error_log logs/error.log error;
  • Context: main, http, mail (since v1.9.0), stream (since v1.7.11), server, location

Specify the log file and log level.

The file parameter can be a regular file, stderr (the standard error stream), a syslog server, or a memory buffer. The "syslog:" prefix sends log output to a syslog server, and the "memory:" prefix with a buffer size sends it to a cyclic memory buffer.

The level parameter sets the logging level; messages at that level and above are written. The supported levels are debug, info, notice, warn, error, crit, alert, and emerg.

To use the debug level, nginx must be compiled with the debug module (--with-debug).
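A brief sketch of the three output forms described above; the file path, syslog server address, and buffer size below are illustrative placeholders, not values taken from this article:

# Log to a regular file, recording warn and more severe messages
error_log  /var/log/nginx/error.log  warn;
# Log to a syslog server using the "syslog:" prefix
error_log  syslog:server=192.168.1.10  error;
# Log to a 32 MB cyclic memory buffer; the debug level requires a --with-debug build
error_log  memory:32m  debug;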

 

  • Syntax: env variable[=value];
  • Default Value: env TZ;
  • Context: main

By default, nginx inherits only the TZ environment variable. This directive passes an existing environment variable through to nginx worker processes, or defines a new variable to pass along.
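For illustration, a minimal sketch (the PERL5LIB variable and its value are assumptions chosen for the example):

# Inherit an existing environment variable unchanged
env TZ;
# Define a variable with an explicit value and pass it to worker processes
env PERL5LIB=/data/site/modules;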

 

  • Syntax: load_module file;
  • Default Value: none
  • Context: main

Load the dynamic module. For example:

load_module modules/ngx_mail_module.so;

 

  • Syntax: lock_file file;
  • Default Value: lock_file logs/nginx.lock;
  • Context: main

Nginx uses a locking mechanism to implement accept_mutex and to serialize access to shared memory. On most operating systems the lock is implemented with atomic operations, and this directive is ignored. Other operating systems use a "lock file" mechanism, and this directive specifies the prefix of the lock file name.

 

  • Syntax: master_process on | off;
  • Default Value: master_process on;
  • Context: main

Determines whether worker processes are started. If set to off, worker processes are not started and the master process handles requests itself.

 

  • Syntax: pcre_jit on | off;
  • Default Value: pcre_jit off;
  • Context: main

Enables or disables just-in-time compilation (PCRE JIT) of the regular expressions known at configuration parse time.

PCRE JIT can significantly speed up regular expression processing.

JIT requires PCRE library version 8.20 or later, built with the --enable-jit configuration parameter. When the PCRE library is built together with nginx (using the --with-pcre= parameter), JIT support is enabled with the --with-pcre-jit configuration parameter.
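A minimal sketch, assuming nginx was built with JIT-capable PCRE as described above (the PCRE source path in the comment is a placeholder):

# Build-time prerequisite (example): ./configure --with-pcre=/path/to/pcre --with-pcre-jit
# Enable JIT compilation of the regular expressions known at configuration parse time
pcre_jit on;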

 

  • Syntax: pid file;
  • Default Value: pid nginx.pid;
  • Context: main

Specifies the pid file, which stores the process ID of the master process.

 

  • Syntax: ssl_engine device;
  • Default Value: none
  • Context: main

If a hardware SSL acceleration device is available, this directive specifies which one to use.

 

  • Syntax: thread_pool name threads=number [max_queue=number];
  • Default Value: thread_pool default threads=32 max_queue=65536;
  • Context: main

Used with asynchronous I/O, this defines a named thread pool and sets its size and waiting queue size. When all threads in the pool are busy, new tasks are placed in the waiting queue; if the queue is full, the task fails with an error.

Multiple named thread pools can be defined and referenced by the aio directive of the http module (see the sketch below).

This directive appeared in v1.7.11.
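A sketch of a named pool referenced from the http module's aio directive; the pool name, sizes, and location path are assumptions for illustration, and thread support requires an nginx build with --with-threads:

# Named thread pool defined in the main context
thread_pool big_downloads threads=16 max_queue=32768;

events { }

http {
    server {
        location /download/ {
            # Offload blocking file reads for this location to the named pool
            aio threads=big_downloads;
        }
    }
}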

 

  • Syntax: timer_resolution interval;
  • Default Value: none
  • Context: main

Sets the timer resolution to reduce the number of times worker processes call the system time function. By default, gettimeofday() is called after each kernel event so that nginx can compute connection timeouts and similar values; with this directive set, nginx calls the system time function only once per specified interval.
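For example, a one-line sketch (the 100ms interval is an arbitrary illustrative value):

# Call the system time function at most once per 100ms instead of after every kernel event
timer_resolution 100ms;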

 

  • Syntax: user user [group];
  • Default Value: user nobody nobody;
  • Context: main

Specifies the Linux user and group that the master process uses to run worker processes. If the group is omitted, a group with the same name as the user is used by default.

 

  • Syntax: worker_processes number | auto;
  • Default Value: worker_processes 1;
  • Context: main

Specifies the number of worker processes, usually set to the number of CPU cores or a multiple of it. When set to auto, nginx detects the number of CPU cores and sets the value automatically.

The auto parameter is supported since v1.3.8 and v1.2.5.

 

  • Syntax: worker_cpu_affinity cpumask ...;
  •         worker_cpu_affinity auto [cpumask];
  • Default Value: none
  • Context: main

Binds worker processes to CPU cores. Each worker process is given a binary mask, and each bit of the mask corresponds to one CPU. By default workers are not bound to any CPU core. This directive is available only on Linux and FreeBSD.

Example:

worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;

This starts four worker processes: the first is bound to CPU0, the second to CPU1, and so on, so that each process is pinned to its own CPU core.

Another example:

worker_processes 2;
worker_cpu_affinity 0101 1010;

This binds the first worker process to CPU0/CPU2 and the second to CPU1/CPU3. It suits hyper-threaded machines, where each physical core presents two logical CPUs.

The auto parameter (since v1.9.10) binds worker processes to the available CPUs automatically:

worker_processes auto;
worker_cpu_affinity auto;

An optional mask can be added to limit the CPUs available for automatic binding. For example:

worker_cpu_affinity auto 01010101;

Only CPU0, CPU2, CPU4, and CPU6 are made available for binding; the remaining CPUs are not assigned to worker processes.

 

  • Syntax: worker_rlimit_core size;
  • Default Value: none
  • Context: main

Changes the limit on the maximum size of a core file for worker processes. It is typically used to raise the limit without restarting the master process.

 

  • Syntax: worker_rlimit_nofile number;
  • Default Value: none
  • Context: main

Changes the limit on the maximum number of open file descriptors for worker processes. It is typically used to raise the limit without restarting the master process.

 

  • Syntax: worker_shutdown_timeout time;
  • Default Value: none
  • Context: main

This directive appeared in v1.11.11. It sets a timeout for gracefully shutting down worker processes.

During a graceful shutdown, nginx stops assigning new connections to the worker process, and the process exits once it finishes handling its current requests. If this timeout is set, nginx forcibly closes the worker's remaining connections when the timeout expires.
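A minimal sketch (the 30-second value is an illustrative assumption):

# Force-close a worker's remaining connections 30 seconds after a graceful shutdown begins
worker_shutdown_timeout 30s;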

 

  • Syntax: working_directory directory;
  • Default Value: none
  • Context: main

Specifies the current working directory for worker processes. It is mainly used when worker processes write core dump files; the user the worker processes run as must have write permission to this directory.
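A hedged sketch combining this directive with worker_rlimit_core for collecting core dumps; the path and size are placeholders, and the directory must be writable by the user set with the user directive:

# Allow worker processes to write core files up to 50 MB
worker_rlimit_core  50m;
# Write core files into this directory (must be writable by the worker user)
working_directory   /var/tmp/nginx-cores/;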

 

Connection processing control directives
  • Syntax: events { ... }
  • Default Value: none
  • Context: main

This directive opens a configuration block; the directives placed inside the events context control how connections are processed.

 

  • Syntax: accept_mutex on | off;
  • Default Value: accept_mutex off;
  • Context: events

When enabled, worker processes take turns accepting new connections by means of a mutex. When disabled, all worker processes are notified of each new connection, which can lead to the so-called "thundering herd" problem: idle worker processes are woken up needlessly and waste system resources. Enabling the mutex provides a degree of load balancing among workers, while disabling it can increase nginx throughput under high concurrency.

There is no need to enable accept_mutex on operating systems that support the EPOLLEXCLUSIVE flag (since v1.11.3), or when the reuseport feature is enabled (since v1.9.1; the operating system must support the SO_REUSEPORT socket option, Linux 3.9+).

In versions earlier than v1.11.3, the default value is on.

 

  • Syntax: accept_mutex_delay time;
  • Default Value: accept_mutex_delay 500ms;
  • Context: events

When accept_mutex is enabled, an idle worker process that finds the mutex already held by another worker (which will accept the new connections) waits at least this long before trying to acquire it again. The worker holding the mutex tries to reacquire it as soon as it has finished accepting the current connections. Consequently, if this value is too large or the connection load is low, some worker processes stay idle while others stay busy.
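A sketch showing the two directives together (the values are illustrative, not recommendations):

events {
    # Let workers take turns accepting connections instead of waking all of them
    accept_mutex        on;
    # An idle worker waits at most this long before retrying the mutex
    accept_mutex_delay  200ms;
}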

 

  • Syntax: debug_connection address | CIDR | unix:;
  • Default Value: none
  • Context: events

The debug module is required; make sure it was included when nginx was built. You can run nginx -V and check whether the --with-debug parameter is present.

Enables debug-level logging for connections from selected client addresses, for analysis and troubleshooting. You can specify an IPv4 or IPv6 address (v1.3.0, v1.2.1), a CIDR network or a domain name, or a UNIX domain socket (v1.3.0, v1.2.1). For example:

events {
    debug_connection 127.0.0.1;
    debug_connection localhost;
    debug_connection 192.168.2.0/24;
    debug_connection 2001:0db8::/32;
    debug_connection unix:;
}

For connections not listed here, the log level is still determined by the error_log directive.

 

  • Syntax: multi_accept on | off;
  • Default Value: multi_accept off;
  • Context: events

When set to off, a worker process accepts only one new connection at a time. When set to on, a worker process accepts all new connections pending at that moment. This directive is ignored when the kqueue connection processing method is used (use kqueue).

 

  • Syntax: use method;
  • Default Value: none
  • Context: events

Specifies the connection processing method. There is normally no need to set it explicitly; nginx automatically chooses the most efficient method available.

The connection processing method determines how nginx finds out which connections in the current pool are ready to send or receive data. Common methods include:

select (requires the select module), poll (requires the poll module), kqueue (macOS, FreeBSD 4.1+, OpenBSD 2.9+), epoll (Linux 2.6+), /dev/poll (Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+, Tru64 UNIX 5.1A+), and eventport (Solaris 10+).

 

  • Syntax: worker_aio_requests number;
  • Default Value: worker_aio_requests 32;
  • Context: events

This directive appeared in v1.1.4 and v1.0.7. When aio (asynchronous I/O) is used with the epoll connection processing method, it sets the maximum number of outstanding asynchronous I/O operations for a single worker process.

 

  • Syntax: worker_connections number;
  • Default Value: worker_connections 512;
  • Context: events

Maximum number of concurrent connections that can be processed by a worker process.

This count includes all connections (for example, connections to proxied backend servers), not only connections from clients. The total across all worker processes (worker_connections × worker_processes) cannot exceed the operating system's limit on the number of open file descriptors (nofile), which can be raised with the worker_rlimit_nofile directive.
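A hedged sketch of how these limits fit together; the numbers are illustrative assumptions, not recommendations:

worker_processes      4;
# Per-process descriptor limit; must cover worker_connections plus other open files
worker_rlimit_nofile  8192;

events {
    # Each worker handles up to 4096 concurrent connections (4 x 4096 = 16384 in total),
    # staying within each worker's 8192-descriptor limit
    worker_connections  4096;
}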

 
