Nginx as an HTTP server: common modules


1. Detailed description of common Nginx modules, with usage examples

Common modules:
1. Nginx core module (core functionality)
accept_mutex on | off;    # Context: events
Meaning of this directive: when a new connection or user request arrives at the Nginx service, if accept_mutex is on,
the workers accept new connections serially: one worker is activated while the other workers remain dormant.
If accept_mutex is off, all workers are woken up, but only one worker gets the new connection and the others go back to sleep.
This is the classic "thundering herd" problem. When the website is busy, it can be set to off.
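As a minimal sketch (the value shown is illustrative), the directive sits inside the events block:

```nginx
events {
    accept_mutex on;   # workers take turns accepting new connections,
                       # avoiding the thundering-herd wakeup on each connection
}
```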

error_log <file> <level>;
    Error-log file path and logging level. Levels from low to high: debug, info, notice, warn, error, crit, alert, emerg.
    Example: error_log /var/log/nginx/error.log;

events { ... } specifies Nginx's working mode and maximum number of connections:
    events {
        use epoll;                  # use the epoll I/O event model
        worker_connections 1024;    # maximum number of client connections per worker
    }

worker_connections    # Context: events
    Maximum number of connections each worker process can handle concurrently; bigger is not always better. This is an optimization option that requires a thorough understanding.

worker_cpu_affinity
    Pins worker processes to specified CPUs (cores). By default Nginx assigns workers to CPUs at random; pinning each worker to a fixed CPU can speed up processing and reduce the error rate.
    Example:
        worker_processes 4;
        worker_cpu_affinity 0001 0010 0100 1000;

worker_priority
    Runs the workers with the specified nice value, in the range -20 to 19; the lower the nice value, the more CPU time the worker is given. Nginx defaults to a nice value of 0.

worker_processes
    Sets the number of Nginx worker processes. The recommended setting is equal to or slightly less than the number of CPU cores, because a value greater than the core count causes contention and hurts performance. If set to auto, it defaults to the number of CPU cores.

include file | mask
    Includes the specified files in the configuration; Nginx loads them automatically at runtime.
    Example: include vhost/*.conf;

master_process on | off
    Controls whether the master process is enabled.
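Taken together, the core directives above might appear at the top of nginx.conf like this (a sketch; paths, values, and the vhost/ layout are illustrative):

```nginx
user  nginx;
worker_processes  4;                        # equal to or slightly below the CPU core count (or "auto")
worker_cpu_affinity  0001 0010 0100 1000;   # pin the four workers to four cores
worker_priority  0;                         # nice value, -20..19; lower gets more CPU time
error_log  /var/log/nginx/error.log  warn;
pid        logs/nginx.pid;

events {
    use  epoll;
    worker_connections  1024;
}

http {
    include  vhost/*.conf;                  # per-site server{} blocks
}
```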
multi_accept on | off    # Context: events
    When multi_accept is on, a worker process accepts all new connections at once; when off, a worker accepts only one new connection at a time. Default: off.

pid <file>
    Sets the location of the file holding the Nginx master process ID.
    Example: pid logs/nginx.pid;

use <method>
    Sets the connection-processing method. There is normally no need to specify it explicitly, because Nginx automatically chooses the method best suited to the system.

user <user> [group]
    Specifies the user and group the Nginx worker processes run as.

2. HTTP core module (ngx_http_core_module)

alias    # Context: location
    alias defines a path alias: the path after alias is the real path the user's request is served from. Typically used inside a location block.
    Example:
        location /i/ {
            alias /data/w3/images/;    # remember the trailing /
        }
    When a user requests the path /i/, the real path accessed is /data/w3/images/.

The differences between root and alias in detail:
    1. root specifies the top-level directory under which the location-matched path is looked up, i.e. the starting root of the lookup.
    2. alias specifies the real directory for the location-matched path; the location path is only an alias for it.
    3. With root, the path directory after location must really exist under the root.
    4. With alias, the path directory after location need not exist on disk.
    5. alias can only be used in location; root can be used in http, server, location, and if-in-location.
    6. Configure root in location /; configure alias in location /path/ as a virtual directory.
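A side-by-side sketch of the root/alias difference (paths are illustrative): with root, the location part is appended to the configured path; with alias, it is replaced by it.

```nginx
server {
    # root: a request for /i/top.gif is served from /data/w3/i/top.gif
    location /i/ {
        root  /data/w3;
    }

    # alias: a request for /img/top.gif is served from /data/w3/images/top.gif
    location /img/ {
        alias  /data/w3/images/;   # note the trailing slash
    }
}
```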
listen    # Context: server
    Sets the listening IP address and port; usually only IP and port need to be set.
    Examples:
        listen 127.0.0.1:8000;
        listen 127.0.0.1;
        listen 8000;
        listen *:8000;
        listen localhost:8000;
    Extended form:
        listen address[:port] [default_server] [ssl] [http2 | spdy] [backlog=number] [rcvbuf=size] [sndbuf=size]
    default_server: make this the default virtual host; ssl: serve only SSL connections on this port; backlog=number: listen backlog queue length; rcvbuf=size: receive buffer size; sndbuf=size: send buffer size.

server    # Context: http
    Defines a virtual host, in the form server { ... }.

server_name    # Context: server
    Sets the host name(s) of the virtual host; regular expressions can be used for matching.
    Example:
        server {
            server_name ~^(www\.)?(.+)$;
            index index.php index.html;
            root /nginx/$2;
        }
    This uses a regular-expression back-reference, which lets one server block serve multiple sites: entering the site www.fff.com maps its home directory to /nginx/fff.com, and entering www.ddd.org maps it to /nginx/ddd.org. The corresponding directory must exist.

tcp_nodelay on | off;    # Context: http, server, location
    Only takes effect on keep-alive connections: response messages are sent without being batched, i.e. with no delay.

tcp_nopush on | off;    # Context: http, server, location
    In Nginx, tcp_nopush is roughly the opposite of tcp_nodelay: it controls the packet size for sending data at once. In other words, instead of sending after an accumulation delay of about 0.2 seconds, packets are sent once they accumulate to a certain size. In Nginx, tcp_nopush must be used together with sendfile.
sendfile    # Context: http, server, location, if in location
    When on, this option improves the transfer performance of the web server.
    Traditional network transfer without sendfile:
        read(file, tmp_buf, len);
        write(socket, tmp_buf, len);
        disk -> kernel buffer -> user buffer -> kernel socket buffer -> protocol stack
    Network transfer with sendfile:
        sendfile(socket, file, len);
        disk -> kernel buffer (fast copy to kernel socket buffer) -> protocol stack
    sendfile eliminates many intermediate steps, which improves transfer speed.

keepalive_timeout
    Keep-alive timeout. Example: keepalive_timeout 75s;

root    # Context: http, server, location, if in location
    Sets the root directory of the site.

location    # Context: server, location
    Routes requests according to the URI of the user's request; a root set inside a matching location has higher priority than one set at the server level.
    Example:
        location = / { [ configuration A ] }
        location / { [ configuration B ] }
        location /documents/ { [ configuration C ] }
        location ^~ /images/ { [ configuration D ] }
        location ~* \.(gif|jpg|jpeg)$ { [ configuration E ] }

error_page    # Context: http, server, location, if
    Sets the URI shown to the user when a request fails:
    1. error_page 502 503 /50x.html;
       # on a 502 or 503 error, return the content of 50x.html to the user
    2. error_page 502 503 =200 /50x.html;
       # on a 502 or 503 error, return status code 200 along with the content of 50x.html; 50x.html can be replaced with other content, such as a jpg, gif, or php file
    3. error_page 404 = http://www.baidu.com;
       # on a 404 error, redirect the user's request to the Baidu home page
    4.
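Since tcp_nopush only takes effect together with sendfile, the three transfer directives are commonly enabled as a group; a typical (illustrative) combination:

```nginx
http {
    sendfile     on;    # kernel-space file transfer, no user-space copy
    tcp_nopush   on;    # with sendfile: send headers and file start in full packets
    tcp_nodelay  on;    # on keep-alive connections: flush the final small packet immediately
    keepalive_timeout  75s;
}
```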
    location / {
        error_page 404 = @fallback;
    }
    location @fallback {
        proxy_pass http://backend;
    }

keepalive_timeout
    Sets the timeout for keep-alive connections; 0 disables keep-alive. Default: 75s.
    Example: keepalive_timeout 40s;

keepalive_disable none | browser ...;
    Disables keep-alive for the specified browsers.

keepalive_requests
    Sets the maximum number of requests allowed over one keep-alive connection; default: 100.

client_body_buffer_size    # Context: http, server, location
    The body is the main part of a request message; this option sets the buffer size used to receive the body of a client request. Default: 16k. Once the body exceeds the set value, disk I/O is triggered, which affects site performance. Adjust this only when users are allowed to request or upload more than 16k of data; when uploads are so large that they must be stored directly on disk, use client_body_temp_path.

client_body_temp_path    # Context: http, server, location
    When there are many users storing many files, the question is how to find a spooled file quickly and accurately. This option sets up hashed subdirectory levels for the spool directory.
    Example:
        client_body_temp_path /var/tmp/client_body 2 1 1;
        # the main spool directory is /var/tmp/client_body; 16*16 = 256 first-level subdirectories are created, with 16 second-level subdirectories under each, and 16 third-level subdirectories under each of those
    This is a hashing mechanism that provides a fast path-routing scheme for locating user data on disk.
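A sketch combining the body-handling directives above (sizes and paths are illustrative); the level digits are hex characters taken from the end of the spooled file name:

```nginx
http {
    client_body_buffer_size  16k;                       # bodies beyond this spill to disk
    client_body_temp_path    /var/tmp/client_body 2 1 1;
    # 2 1 1 -> 256 first-level dirs (2 hex chars), 16 second-level, 16 third-level;
    # a spooled body named 0000012345 would land in .../45/3/2/0000012345
}
```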
aio on | off
    aio enables the asynchronous, non-blocking I/O model, described in detail in the first Nginx lecture.

open_file_cache    # Context: http, server, location
    Caches the metadata of frequently accessed files in memory, so that accesses can locate files directly from the cached metadata without a disk lookup, which greatly improves access efficiency.
    Examples:
        open_file_cache off;
        open_file_cache max=1000 inactive=20s;
        # cache at most 1000 entries; an entry inactive for 20s is evicted if it was hit fewer times than open_file_cache_min_uses within that period

    Related directives:
        open_file_cache max=1000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;    # whether to cache information about files whose lookup failed

3. IP-based access control module (ngx_http_access_module)
    allow permits access; deny refuses access.
    Example:
        location / {
            deny 192.168.1.1;
            allow 192.168.1.0/24;
            allow 10.1.1.0/16;
            allow 2001:0db8::/32;
            deny all;
        }

4. User-login-based access control module (ngx_http_auth_basic_module)
    auth_basic    # Context: http, server, location, limit_except
    Sets the authentication realm fed back to the user; it must be used with auth_basic_user_file, which sets the path of the user credentials file. The user names and passwords can be created with httpd-tools:
    1.
    yum -y install httpd-tools    # install httpd-tools
    2. htpasswd -c -m /etc/nginx/.ngxpwd ready    # specify the password file path and name; the user name is "ready"
    Example:
        server {
            server_name www.hhh.com;
            root /nginx/test2/;
            location ~* ^/(login|admin) {
                # alias /admin/;    # why does adding this line break access, while changing to "location /login/" succeeds?
                auth_basic "Why?";
                auth_basic_user_file /etc/nginx/.ngxpwd;
            }
        }
    # when a request under the root path begins with admin or login, the user is asked to authenticate

5. Nginx access-status module (ngx_http_stub_status_module)
    stub_status lets you view Nginx status information. This information should be kept confidential, so create a dedicated location for it and protect it with auth_basic and auth_basic_user_file.
    Example:
        location /abc {
            stub_status;
            auth_basic "Why?";
            auth_basic_user_file /etc/nginx/.ngxpwd;
        }
    Output example:
        Active connections: 3    # current number of active connections
        server accepts handled requests
         303 303 2533    # accepts: user requests accepted; handled: user requests handled; requests: total user requests
        Reading: 0 Writing: 1 Waiting: 2
        # Reading: connections whose client request headers are being read
        # Writing: connections to which a response message is being sent
        # Waiting: idle connections waiting for the client to make a request

6. Nginx access log module (ngx_http_log_module)
    access_log    # Context: http, server, location, if in location, limit_except
    Sets the access-log path. It can be set globally, or fine-grained in server, location, and so on
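Beyond basic auth, the status location can also be limited by source IP with ngx_http_access_module; an illustrative combination (the network range is an assumption):

```nginx
location /abc {
    stub_status;
    allow  192.168.1.0/24;             # admin network only
    deny   all;
    auth_basic            "Why?";
    auth_basic_user_file  /etc/nginx/.ngxpwd;
}
```

By default both conditions must pass (satisfy all): the client must come from the allowed network and present valid credentials.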
    for per-host management; the log can also be compressed.
    Example:
        access_log /path/to/log.gz combined gzip buffer=16k flush=5m;
        # buffer sets the access-log buffer size; flush defines the flush interval

log_format defines the format of the access log.
    Example:
        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for"';

7. ngx_http_gzip_module
    The main purpose of gzip is to reduce the server's bandwidth use: after gzip compression a page can shrink to 30% of its original size or less, so users can browse pages much faster. Gzip compression requires support from both the browser and the server: the server compresses, and the browser decompresses and parses after download. Most current browsers support parsing gzip-compressed pages. Off by default.
        gzip on;
        gzip_comp_level 2;    # compression level; default is 1
        gzip_min_length 1000;    # minimum response size that triggers compression
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain application/xml;

8. HTTPS service module (ngx_http_ssl_module)
    ssl on | off
    Enables/disables the SSL service.
    Example:
        server {
            listen 443 ssl;
            server_name www.hhh.com;
            root /nginx/test2/;
            access_log /var/log/nginx/ssl_access.log main;
            ssl on;
            ssl_certificate /etc/pki/nginx/server.crt;    # certificate path
            ssl_certificate_key /etc/pki/nginx/private/server.key;    # private key path
            ssl_session_cache shared:SSL:1m;    # a shared 1m cache across workers improves cache utilization; 1m caches about 4,000 sessions
            ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;    # supported SSL/TLS protocols
            ssl_session_timeout 10m;    # timeout of the shared session cache
        }
9.
Nginx URL rewrite module (ngx_http_rewrite_module)
    Replaces or redirects the URL of a user's request according to rules, making the request jump to a rule-defined URL.
    Example:
        server {
            server_name www.fff.com;
            location /alais/ {
                alias /nginx/ta/;
                index index.html index.htm;
                rewrite /(.*)\.png$ http://www.fff.com/$1.jpg;
                # replace every user request ending in .png with a request ending in .jpg
                rewrite /(.*)$ https://www.fff.com/$1;
                # rewrite all user requests to the https protocol and return the request
            }
        }
    Rewrite rules are executed from top to bottom by default. When the first rewrite fires, the new URL goes through location matching again, until no rewrite matches; this can produce an endless loop, because a rewrite ends with the last flag by default. Other flags can be used instead:
    1. last    # whenever a new URL is produced, rule checking starts over once with it
    2. break    # if the rule matches, execute it and stop processing later rewrite rules
    3. redirect    # return a 302 response code with the newly directed URL, and let the client request the new URL
    4. permanent    # unlike redirect, permanent is a permanent (301) redirect
10.
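The four flags can be attached to rewrite rules like this (a sketch; the patterns and replacement paths are illustrative):

```nginx
location /alais/ {
    rewrite  /(.*)\.png$  /$1.jpg       last;       # re-run location matching with the new URI
    rewrite  /old/(.*)$   /new/$1       break;      # stop processing rewrite rules here
    rewrite  /tmp/(.*)$   /moved/$1     redirect;   # 302 temporary redirect
    rewrite  /gone/(.*)$  /archive/$1   permanent;  # 301 permanent redirect
}
```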
Hotlink-protection module (ngx_http_referer_module)
    # When opening some websites you may see image placeholders like "this picture may only be used inside XX site"; that is hotlink protection, which can be implemented with this module.
    valid_referers none | blocked | server_names | string ...;
    Defines the valid values for the Referer header:
        none: the request header has no Referer field;
        blocked: the Referer header is present but has no valid value;
        server_names: host names, or host-name patterns, of this server;
        arbitrary string: a literal string; * can be used as a wildcard;
        regular expression: a pattern the string must match, beginning with ~, e.g. ~.*\.magedu\.com;
    Example:
        valid_referers none blocked server_names *.magedu.com *.mageedu.com magedu.* mageedu.* ~\.magedu\.;
        if ($invalid_referer) {
            return http://www.magedu.com/invalid.jpg;
        }
