Nginx details (1)


Nginx introduction:

The official website is http://nginx.org. The official documentation there is the most useful reference, so some of the common directives below are translated from it.

The site also offers a simplified-Chinese version of the documentation. The main nginx features are as follows:

1. Automatic indexing: like Apache, when the requested directory on the web server contains no index page (index.html), nginx can automatically list all files in that directory.

2. Open file descriptor caching: on Linux, the metadata of an accessed file must first be loaded into the kernel's buffer and page cache; nginx can additionally cache the open file descriptors and metadata of page files itself.

3. Support for HTTP Referer validation: the anti-hotlinking feature. For example, when an image hosted on website A is embedded by another site, the image fails to load and a notice such as "this image is only available to users of website A" may be shown instead.

4. sendfile: when a user requests page content, the request enters through the NIC, is processed by the TCP/IP stack, and is then handed to the web process listening on port 80.

When the web process receives the request and sees that the site's home page is being requested, it issues a system call asking the kernel to perform the I/O needed to read the home page file.

After the kernel performs the I/O, the file is first placed in kernel memory, then copied into the web process's memory; the web process builds the response, hands it back to the kernel's network stack, and the data finally leaves through the network card.

In this way, the response data travels from the hard disk to kernel space, to the web process, and back to the kernel's network stack, which adds quite a few extra steps.

sendfile routes the data directly from kernel space to the kernel's network stack; in other words, the response is assembled entirely in kernel space.

Early sendfile only supported small files, so sendfile64 appeared later for large files.

5. directio: completely different from asynchronous I/O (AIO); data is transferred to and from the disk directly, bypassing the kernel buffer cache.

6. mmap support: file data in kernel space is memory-mapped into the nginx process's address space rather than being copied. (A combined configuration sketch for these file-I/O features follows this list.)

7. Smooth upgrade:

Nginx has one master process and N worker processes.

The master process only reads the configuration and creates and reaps worker processes on demand; the worker processes are responsible for answering user requests.

During an nginx upgrade you only need to replace the binary executable: new connections are handled by new worker processes, while old worker processes keep serving their existing connections and are not reaped until those connections close. (A rough outline of the upgrade signals follows this list.)

8. Dynamic loading of modules is not supported, but the modified Tengine fork supports it.
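
As a small illustration of the file-I/O features in items 4-6, a minimal http-context sketch (the values are illustrative, not taken from the original text) could look like this:

http {
    sendfile        on;                        # assemble static-file responses in kernel space
    aio             on;                        # asynchronous file I/O
    directio        4m;                        # bypass the buffer cache for files larger than 4 MB
    open_file_cache max=1000 inactive=20s;     # cache open file descriptors and file metadata
}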
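
For the smooth upgrade in item 7, the binary replacement is driven by signals sent to the master process. A rough outline (assuming the PID file is /var/run/nginx.pid, as configured in the installation below; these are the standard USR2/WINCH/QUIT master signals):

# kill -USR2 `cat /var/run/nginx.pid`           (start a new master and new workers from the new binary)
# kill -WINCH `cat /var/run/nginx.pid.oldbin`   (gracefully shut down the old worker processes)
# kill -QUIT `cat /var/run/nginx.pid.oldbin`    (shut down the old master once its workers have exited)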

I. Install nginx on CentOS 6.4:

1. Resolve Dependencies

# yum groupinstall "Development Tools" "Server Platform Development"
# yum -y install openssl-devel pcre-devel

2. Installation

First, add an nginx user to run the nginx service process as follows:

# groupadd -r nginx
# useradd -r -g nginx nginx

Then start compilation and installation:
# ./configure \
  --prefix=/usr \
  --sbin-path=/usr/sbin/nginx \
  --conf-path=/etc/nginx.conf \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --pid-path=/var/run/nginx.pid \
  --lock-path=/var/lock/nginx.lock \
  --user=nginx \
  --group=nginx \
  --with-http_ssl_module \
  --with-http_flv_module \
  --with-http_stub_status_module \
  --with-http_gzip_static_module \
  --http-client-body-temp-path=/var/tmp/nginx/client/ \
  --http-proxy-temp-path=/var/tmp/nginx/proxy/ \
  --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ \
  --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi \
  --http-scgi-temp-path=/var/tmp/nginx/scgi \
  --with-pcre
# make && make install

3. Provide a SysV init script for nginx:

Create the file /etc/rc.d/init.d/nginx with the following content:
#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description: nginx is an HTTP(S) server, HTTP(S) reverse \
#              proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /etc/nginx.conf
# config:      /etc/sysconfig/nginx
# pidfile:     /var/run/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/etc/nginx.conf"

[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

lockfile=/var/lock/subsys/nginx

make_dirs() {
    # make required directories
    user=`nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
    options=`$nginx -V 2>&1 | grep 'configure arguments:'`
    for opt in $options; do
        if [ `echo $opt | grep '.*-temp-path'` ]; then
            value=`echo $opt | cut -d "=" -f 2`
            if [ ! -d "$value" ]; then
                # echo "creating" $value
                mkdir -p $value && chown -R $user $value
            fi
        fi
    done
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    retval=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac

Then grant the execution permission to the script:
# chmod +x /etc/rc.d/init.d/nginx

Add it to the Service Management list and enable it to automatically start upon startup:
# chkconfig --add nginx
# chkconfig nginx on

Then you can start the service and test it:
# service nginx start

II. Configure nginx:

The core modules of nginx are main and events; in addition there are the standard HTTP modules, optional HTTP modules, and the mail modules, and nginx also supports many third-party modules. main configures parameters related to error logs, processes, and permissions; events configures the I/O model, such as epoll, kqueue, select, or poll. Both are required modules.

The main configuration file of nginx consists of several segments, which are also called contexts. Each segment is defined in the following format. Note that every directive must end with a semicolon (;); otherwise it is a syntax error.

<Section> {
<Directive> <parameters>;
}

2.1 Configure the main module

2.1.1 error_log

Configures the error log; it can be used in the main, http, server, and location contexts. Syntax format:

error_log file | stderr [debug | info | notice | warn | error | crit | alert | emerg];

If the --with-debug option was used when compiling nginx, debugging can be enabled in the following format:

error_log logfile [debug_core | debug_alloc | debug_mutex | debug_event | debug_http | debug_imap];

Note that you cannot disable error logging with "error_log off;"; instead, use the following:

error_log /dev/null crit;

2.1.2 timer_resolution

Used to reduce the number of gettimeofday() system calls. By default, gettimeofday() is called every time nginx returns from kevent(), epoll, /dev/poll, select() or poll(). Syntax format:

timer_resolution interval;

For example:

timer_resolution 100ms;


2.1.3 worker_priority

Sets the scheduling priority (nice value) of the worker processes. It can only be used in the main context, and the default value is 0. Syntax format:

worker_priority number;   (number is a nice value between -20 and 20)
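
For example (an illustrative value, not from the original), to run the workers at a slightly elevated scheduling priority:

worker_priority -5;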

2.1.4 worker_processes

A worker process is a single-threaded process.

If nginx is used in CPU-intensive scenarios, such as SSL or gzip, and there are at least two CPUs on the host, set this parameter to be the same as the number of CPU cores;

If nginx is used to access a large number of static files and the total size of all files is greater than the available memory, set this parameter to a large enough value to take full advantage of the disk bandwidth.

This parameter, together with worker_connections from the events context, determines the value of maxclients:
maxclients = worker_processes * worker_connections
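
A worked example with illustrative numbers: with worker_processes 4; in main and worker_connections 1024; in events, maxclients = 4 * 1024 = 4096 concurrent connections.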

2.1.5 worker_cpu_affinity

Uses sched_setaffinity() to bind worker processes to CPUs, which reduces context switching and improves performance. It can only be used in the main context. Syntax format:

worker_cpu_affinity cpumask ...;

A 1 in the cpumask means the worker is bound to the corresponding CPU core.

For example:
worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;

worker_processes 2;
worker_cpu_affinity 01 11;

 

2.1.6 worker_rlimit_nofile

Sets the maximum number of file descriptors that a worker process can open. Syntax format:

worker_rlimit_nofile number;

Note: with the default limit, a stress test with the ab command can quickly exhaust the available descriptors; a commonly used value is 51200.
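
An illustrative main-context setting using the value quoted above:

worker_rlimit_nofile 51200;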

2.2 Configure the events module

2.2.1 worker_connections

Sets the maximum number of connections processed by each worker. It determines the value of maxclients together with worker_processes from the main context.

maxclients = worker_processes * worker_connections

In the reverse proxy scenario the calculation differs from the formula above, because a browser opens two connections by default and, for each of them, nginx needs two file descriptors (one to the client and one to the upstream). The maxclients calculation therefore becomes:

maxclients = worker_processes * worker_connections / 4
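
Worked example with illustrative numbers: with worker_processes 16 and worker_connections 2048, such a reverse proxy can serve at most 16 * 2048 / 4 = 8192 clients.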

2.2.2 use

When more than one event-driven I/O model is available, this directive selects the I/O mechanism nginx uses. By default, the ./configure script chooses the mechanism best suited to the operating system. Syntax format:

use [kqueue | rtsig | epoll | /dev/poll | select | poll | eventport];

2.3 configuration example

user nginx;
# the load is CPU-bound and we have 16 cores
worker_processes 16;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    use epoll;
    worker_connections 2048;
}

2.4 HTTP service configuration

The http context configures the HTTP modules. There are many such directives, and each module has its own reference; see the module descriptions on the official nginx wiki for details. In general, the directives provided by these modules can be divided into the following categories.

Client directives: for example, client_body_buffer_size, client_header_buffer_size, client_header_timeout, and keepalive_timeout;

File I/O directives: such as aio, directio, open_file_cache, open_file_cache_min_uses, open_file_cache_valid, and sendfile;
Hash directives: define the size of the memory nginx allocates to specific variables, such as types_hash_bucket_size, server_names_hash_bucket_size, and variables_hash_bucket_size;
Socket directives: define how nginx handles TCP socket options, such as tcp_nodelay (when keepalive is enabled) and tcp_nopush (when sendfile is enabled);

listen address[:port]; defines the address and port to listen on; required.

server_name hostname; is generally defined in the server context.

The following two directives are generally defined in server, or in a location inside server:

root /path/to/webroot; defines the document root against which request URIs are resolved;

index index.html index.php; defines the default index files.
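
A minimal sketch putting these directives together (the host name and paths are placeholders, not from the original):

server {
    listen 80;
    server_name www.example.com;
    root /web/htdocs;
    index index.html index.php;
}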

2.5 virtual server configurations

Server {
<Directive> <parameters>;
}

Defines a virtual server and its attributes. Common parameters (given on the listen directive) include backlog, rcvbuf, bind, and sndbuf, as illustrated below.
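
For illustration (all values are placeholders), these are supplied as parameters of the listen directive inside a server block:

server {
    listen 80 backlog=1024 rcvbuf=32k sndbuf=32k bind;
    server_name www.example.com;
}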

2.6 Location-related configuration

location [modifier] uri { ... } or location @name { ... }

It is used inside server, or nested inside another location, and cannot be used directly in the http context:

location is used when a URI should be served from a different path or needs its own access control. Like the if directive described later, location can match URIs with regular expressions, but location is more capable than if.

Each location can have its own permissions and proxy settings, so what the browser can retrieve is restricted accordingly.

location [ = | ^~ | ~ | ~* | (none) ] uri { ... }

The five forms above are listed in decreasing order of priority:

(In the following examples, the host name is simply an IP address and the web root is under /web/htdocs, to keep the experiment simple.)

=: Exact match. For example:

location = /images {
    root /web/htdocs/images;
    index index.html;
}

When the browser accesses http://ip/images/, the files are served from the /web/htdocs/images directory, and access is restricted directly by this location block.

^~ : prefix match; when it matches, the URI is not checked against the regular-expression locations.

~ : regular-expression matching, case-sensitive. (A literal dot in the pattern must be escaped with a backslash.)

~* : regular-expression matching, case-insensitive.

(none): a plain prefix match with the lowest priority; it is generally used for the root location.
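
A short sketch of how the modifiers interact (paths are illustrative, not from the original):

location = / {
    # matches the request "/" exactly, and nothing else
}

location ^~ /static/ {
    # prefix match; when it matches, regex locations are not consulted
}

location ~* \.(jpg|png|gif)$ {
    # case-insensitive regex match for image extensions
}

location / {
    # plain prefix match; the lowest-priority catch-all
}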

A custom error page can be created so that users are not shown the bare default error page:

error_page 404 /sorry.html;

location = /sorry.html {
    root /web/htdocs/error;
}

location @name { ... }

server {
    location / {
        set $memcached_key $uri;
        memcached_pass name:11211;
        default_type text/html;
        error_page 404 @fallback;
    }

    location @fallback {
        proxy_pass http://backend;
    }
}

This is a memcached-backed reverse proxy: nginx uses memcached as the cache server rather than caching content itself. When memcached misses, the request is sent to another upstream for the response.
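
The proxy_pass above refers to an upstream group named backend that the snippet does not define; a minimal definition (the server address is a placeholder) would look like:

upstream backend {
    server 127.0.0.1:8080;
}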

