Nginx Server Installation and Configuration File Introduction


I already have Nginx running in several environments at work, and every time I set one up I end up searching the web through blog posts with all kinds of build configurations. Today I am writing my own installation document, together with an explanation of the nginx.conf configuration options, to keep for later reference.

1. Install Nginx

1.1 Select stable version
We compile and install Nginx from source so we can add our own modules; the machine is CentOS 6.2 x86_64. First install the missing dependency packages:

# yum -y install gcc gcc-c++ make libtool zlib zlib-devel openssl openssl-devel pcre pcre-devel

If these packages are not available through yum, you can download the sources and compile them yourself; just pay attention to the default installation directories at compile time, to make sure the shared library files can be found (ldconfig) when Nginx is installed below.

Download the stable version and extract it under /usr/local/src.

To prepare further, we download two additional plug-in modules: nginx_upstream_check_module-0.3.0.tar.gz, which checks the status of back-end servers, and nginx-goodies-nginx-sticky-module-ng-bd312d586752.tar.gz (recommended: after extracting under /usr/local/src, rename the directory to nginx-sticky-module-ng-1.2.5), which solves the session-sticky problem when load balancing across back ends. Using it together with the upstream_check module requires a separate patch; see the article on practical Nginx load-balancing configuration.

Note the compatibility between plug-in and Nginx versions. In general, the more settled the plug-in the better, and there is no need to chase the newest Nginx either — stability first. The combination nginx-1.4.7, nginx-sticky-module-1.1, nginx_upstream_check_module-0.2.0 works without problems; sticky-1.1 fails to compile against the nginx-1.6 series because its updates have not kept up. (Alternatively, you can use Tengine directly, which includes these modules by default.)

[root@cachets nginx-1.6.3]# pwd
[root@cachets nginx-1.6.3]# ./configure --prefix=/usr/local/nginx-1.6 --with-pcre \
> --with-http_gzip_static_module --with-http_realip_module \
> --add-module=../nginx_upstream_check_module-0.3.0
[root@cachets nginx-1.6.3]# make && make install

1.2 Common compilation Options description

Nginx's most common modules are installed by default; in the output of ./configure --help, they are the options that begin with --without.

    • --prefix=PATH: specifies the installation directory for Nginx. Default: /usr/local/nginx
    • --conf-path=PATH: sets the path of the nginx.conf configuration file. Nginx can be started with a different configuration file by using the -c option on the command line. Default: PREFIX/conf/nginx.conf
    • --user=NAME: sets the user for the Nginx worker processes. After installation you can change it at any time with the user directive in nginx.conf. Default user name: nobody. --group=NAME is similar.
    • --with-pcre: sets the source path of the PCRE library. If it was installed via yum, plain --with-pcre lets configure find the library files automatically. Use --with-pcre=PATH when you have downloaded the PCRE sources (version 4.4–8.30) from the PCRE website and extracted them; the rest is handled by Nginx's ./configure and make. Perl regular expressions are used in location directives and in the ngx_http_rewrite_module.
    • --with-zlib=PATH: specifies the extracted zlib source directory (version 1.1.3–1.2.5). zlib is required by ngx_http_gzip_module, the network-transfer compression module, which is enabled by default.
    • --with-http_ssl_module: enables the HTTPS protocol module. Not built by default; requires OpenSSL and openssl-devel to be installed.
    • --with-http_stub_status_module: used to monitor the current status of Nginx.
    • --with-http_realip_module: allows us to change the client IP address in the request headers (such as X-Real-IP or X-Forwarded-For), so that back-end servers can log the original client's IP address.
    • --add-module=PATH: adds third-party external modules, such as nginx-sticky-module-ng or cache modules. Adding a new module requires recompiling each time (Tengine can add new modules without recompiling).

Here is another example configure invocation:

./configure \
> --sbin-path=/usr/sbin/nginx \
> --conf-path=/etc/nginx/nginx.conf \
> --http-log-path=/var/log/nginx/access.log \
> --user=nginx \
> --with-http_ssl_module \
> --with-http_stub_status_module \
> --with-pcre=../pcre-7.8 \
> --with-zlib=../zlib-1.2.3

1.3 Starting and Stopping Nginx

## check that the configuration file is correct
# /usr/local/nginx-1.6/sbin/nginx -t
# ./sbin/nginx -V          # shows the version and compile options

## start, stop
# ./sbin/nginx             # uses the default config conf/nginx.conf; -c specifies another
# ./sbin/nginx -s stop
or pkill nginx

## reload, without changing the configuration file specified at startup
# ./sbin/nginx -s reload
or kill -HUP `cat /usr/local/nginx-1.6/logs/nginx.pid`

Of course, Nginx can also be managed as a system service: download an Nginx init script to /etc/init.d/, modify the paths inside it, and give it execute permission.

# service nginx {start|stop|status|restart|reload|configtest}

1.4 Yum Installation
Installing the RPM package via yum is much simpler than compiling from source, and many modules are installed by default. The disadvantage is that if you later want to add a third-party module, there is no way to do so.

# vi /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/

After that, just yum install nginx. You can also yum install nginx-1.6.3 to install a specific version (if you look in the packages directory and see a corresponding version; the default is the latest stable version).

2. nginx.conf configuration file

The Nginx configuration file has four main parts: main (global settings), server (virtual host settings), upstream (upstream server settings, mainly configuration related to reverse proxying and load balancing), and location (settings for URLs matching a particular location). Each part contains several directives. Directives set in the main part affect the settings of all other parts; directives in the server part mainly specify the virtual host's domain name, IP, and port; upstream directives define a group of back-end servers for reverse proxying and load balancing; the location part matches page locations (for example the root directory "/", "/images", and so on). The relationships between them: server inherits from main, and location inherits from server; upstream neither inherits nor is inherited — it has its own special directives and is referenced from elsewhere.
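As a minimal sketch of how these contexts nest (the directive values here are illustrative, not recommendations):

```nginx
# main context: global settings
worker_processes  2;

events {
    worker_connections  1024;
}

http {
    upstream backend {               # upstream: pool of back-end servers
        server 127.0.0.1:8080;
    }

    server {                         # server: one virtual host
        listen       80;
        server_name  localhost;

        location / {                 # location: settings for matching URLs
            proxy_pass http://backend;
        }
    }
}
```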

Directive contexts supported by current Nginx: main, events, http, server, upstream, and location.

2.1 General

The following nginx.conf is a simple example of Nginx as a front-side reverse proxy server: it serves static files such as .js and .png itself, and forwards dynamic requests such as .jsp to other Tomcat servers:

user  www www;
worker_processes  2;

error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

pid  logs/nginx.pid;

events {
    use epoll;
    worker_connections  2048;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile      on;
    # tcp_nopush  on;

    keepalive_timeout  65;

    # gzip compression settings
    gzip               on;
    gzip_min_length    1k;
    gzip_buffers       4 16k;
    gzip_http_version  1.0;
    gzip_comp_level    6;
    gzip_types         text/html text/plain text/css text/javascript application/json
                       application/javascript application/x-javascript application/xml;
    gzip_vary          on;

    # http_proxy settings
    client_max_body_size       10m;
    client_body_buffer_size    128k;
    proxy_connect_timeout      75;
    proxy_send_timeout         75;
    proxy_read_timeout         75;
    proxy_buffer_size          4k;
    proxy_buffers              4 32k;
    proxy_busy_buffers_size    64k;
    proxy_temp_file_write_size 64k;
    proxy_temp_path            /usr/local/nginx/proxy_temp 1 2;

    # load-balanced back-end server list (addresses are placeholders)
    upstream backend {
        #ip_hash;
        server tomcat01:8080 max_fails=2 fail_timeout=30s;
        server tomcat02:8080 max_fails=2 fail_timeout=30s;
    }

    # the important virtual host configuration
    server {
        listen       80;
        server_name  localhost;
        root         /apps/oaapp;

        charset utf-8;
        access_log  logs/host.access.log  main;

        # load balancing + reverse proxy for /
        location / {
            root   /apps/oaapp;
            index  index.jsp index.html index.htm;
            proxy_pass      http://backend;
            proxy_redirect  off;
            # lets the back-end web servers obtain the user's real IP via X-Forwarded-For
            proxy_set_header  Host $host;
            proxy_set_header  X-Real-IP $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
        }

        # static files: served by Nginx, not forwarded to the back-end Tomcat
        location ~* /download/ {
            root /apps/oa/fs;
        }
        location ~ .*\.(gif|jpg|jpeg|bmp|png|ico|txt|js|css)$ {
            root    /apps/oaapp;
            expires 7d;
        }
        location /nginx_status {
            stub_status on;
            access_log  off;
            deny all;
        }
        location ~ ^/(WEB-INF)/ {
            deny all;
        }

        #error_page 404 /404.html;
        # redirect server error pages to the static page /50x.html
        #error_page 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
    ## other virtual hosts: start more server blocks here
}

2.2 Common Instruction Instructions

2.2.1 Main global configuration

Parameters of Nginx at run time that are unrelated to a specific business function (such as HTTP serving or mail-service proxying), for example the number of worker processes and the running identity.

1, worker_processes 2
Goes in the top-level main section of the configuration file. The number of worker processes; the master process receives requests and assigns them to workers for processing. Put simply, it can be set to the number of CPU cores (grep -c ^processor /proc/cpuinfo), or to the value auto. If SSL and gzip are enabled, setting it as high as the number of logical CPUs, or even twice that, can reduce I/O waiting. If other services run on the Nginx server, consider lowering it appropriately.
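Counting the cores for this setting can be scripted (a quick sketch, assuming a Linux machine where /proc/cpuinfo is available):

```shell
# count logical CPUs from /proc/cpuinfo
cpus=$(grep -c ^processor /proc/cpuinfo)
echo "worker_processes candidate: $cpus"
```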

2, worker_cpu_affinity
Also written in the main section. Under high concurrency, binding worker processes to CPUs reduces the performance loss caused by switching between CPU cores (reloading registers and caches). For example worker_cpu_affinity 0001 0010 0100 1000; (quad core).

3, worker_connections 2048
Written in the events section. The maximum number of connections each worker process can handle concurrently (including connections with clients and with back-end proxied servers, in all directions). For Nginx as a reverse proxy server, the maximum number of clients = worker_processes * worker_connections / 4, so here the maximum client count is 1024. This can be raised to 8192 if needed, depending on the situation, but it cannot exceed worker_rlimit_nofile below. When Nginx acts as a plain HTTP server, the formula divides by 2 instead.
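A worked example of the formula above, using the worker_processes and worker_connections values from this configuration (the divisor 4 is for the reverse-proxy case, 2 for a plain HTTP server):

```shell
worker_processes=2
worker_connections=2048
# max clients when Nginx is a reverse proxy: each client request also
# consumes a connection to the back end, in both directions
max_clients_proxy=$(( worker_processes * worker_connections / 4 ))
# max clients when Nginx serves HTTP directly
max_clients_http=$(( worker_processes * worker_connections / 2 ))
echo "reverse proxy: $max_clients_proxy clients"   # 1024
echo "http server:   $max_clients_http clients"    # 2048
```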

4, worker_rlimit_nofile 10240
Written in the main section. There is no default; if unset, the operating system's limit applies, which can be raised up to the maximum of 65535.

5, use epoll
Written in the events section. On Linux, Nginx uses the epoll event model by default, which is why Nginx is so efficient on Linux. On FreeBSD or OpenBSD, Nginx uses kqueue, an efficient event model similar to epoll. select is used only when the operating system supports none of these efficient models.

2.2.2 HTTP Server

Configuration parameters related to providing the HTTP service, for example: whether to use keepalive, whether to compress with gzip, and so on.

    • sendfile on: enables the efficient file-transfer mode. The sendfile directive specifies whether Nginx calls the sendfile system call to send files, which reduces context switches between user space and kernel space. Set it to on for typical applications; for disk-I/O-heavy workloads such as downloads it can be set to off, to balance disk and network I/O and reduce system load.
    • keepalive_timeout 65: keep-alive timeout, in seconds. This parameter is sensitive to browser type, back-end server timeout settings, and operating system settings — it could fill another article. With many requests for small files, keep-alive reduces the cost of re-establishing connections; but if there are large file uploads, failing to finish within 65s causes the upload to fail. If the timeout is set long and there are many users, connections held open for a long time consume a lot of resources.
    • send_timeout: specifies the timeout for responding to the client. It is limited to the time between two activities on the connection; if the client has no activity within this period, Nginx closes the connection.
    • client_max_body_size 10m: the maximum number of bytes allowed in a single client request body. If there are large file uploads, raise this limit.
    • client_body_buffer_size 128k: the maximum number of bytes the proxy buffers for client request bodies.

Module http_proxy:

This module implements Nginx's function as a reverse proxy server, including caching (see a separate article).

    • proxy_connect_timeout 60: timeout for Nginx to establish a connection with a back-end server (proxy connect timeout).
    • proxy_read_timeout 60: after a successful connection, the timeout between two successive read operations from the back-end server (proxy receive timeout).
    • proxy_buffer_size 4k: the size of the buffer the proxy (Nginx) uses to read and hold the first part of the response from the back-end realserver, which normally contains the response headers. By default it equals the size of one buffer in proxy_buffers; in practice it can be set smaller.
    • proxy_buffers 4 32k: the number and size of buffers Nginx uses, per connection, for the response from the back-end realserver. If the average page is below 32k, set it like this.
    • proxy_busy_buffers_size 64k: buffer size under high load (typically proxy_buffers * 2).
    • proxy_max_temp_file_size: when the back-end server's response does not fit into proxy_buffers, part of it is saved to a temporary file on disk; this sets the maximum temporary file size, default 1024m. It is unrelated to proxy_cache. Data beyond this value is passed through synchronously from the upstream server. Set to 0 to disable temporary files.
    • proxy_temp_file_write_size 64k: limits the amount of data written to a temporary file at a time when the proxied response is buffered to disk. proxy_temp_path (settable at compile time) specifies which directory to write to.

For proxy_pass and proxy_redirect, see the location part.

Module http_gzip:

    • gzip on: turns on gzip compression of output, reducing network transfer.
    • gzip_min_length 1k: sets the minimum number of bytes a response must have to be compressed; the size is taken from the Content-Length header. The default value is 20. It is recommended to set it above 1k — below 1k, the compressed result may end up larger than the original.
    • gzip_buffers 4 16k: sets how many buffers, of which size, the system allocates to hold the gzip-compressed result stream. "4 16k" means allocating memory in 16k units, up to 4 times the original data size in 16k units.
    • gzip_http_version 1.0: identifies the version of the HTTP protocol. Early browsers did not support gzip compression and users would see garbage, so this option was added to support old versions. If you use Nginx as a reverse proxy and also expect gzip compression to work, set this to 1.0, because proxied communication uses HTTP/1.0.
    • gzip_comp_level 6: gzip compression level. 1 gives the lowest ratio but the fastest processing; 9 gives the highest ratio but the slowest processing (faster transfer, but more CPU consumption).
    • gzip_types: the MIME types to compress. The "text/html" type is always compressed, whether or not it is listed.
    • gzip_proxied any: takes effect when Nginx is used as a reverse proxy; decides whether responses returned by back-end servers are compressed, with on/off per condition. The precondition is that the back-end response includes a "Via" header.
    • gzip_vary on: related to HTTP headers — it adds Vary: Accept-Encoding to the response, letting front-side cache servers cache both gzip-compressed and plain versions of a page, for example allowing Squid to cache data that Nginx has compressed.
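The level-vs-CPU tradeoff behind gzip_comp_level can be seen with the command-line gzip tool, which uses the same deflate levels (a rough illustration only; actual savings depend on the content):

```shell
sample=$(mktemp)
# build a highly compressible sample: 1000 repetitions of a short string
for i in $(seq 1 1000); do printf 'hello nginx '; done > "$sample"

size1=$(gzip -c -1 "$sample" | wc -c)   # fastest, lowest compression ratio
size9=$(gzip -c -9 "$sample" | wc -c)   # slowest, highest compression ratio
echo "level 1: $size1 bytes, level 9: $size9 bytes"
rm -f "$sample"
```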

2.2.3 Server Virtual Host

Several virtual hosts can be supported within the HTTP service. Each virtual host corresponds to one server configuration block, which contains that host's related settings. When providing a proxy for mail services you can likewise define several server blocks. server blocks are distinguished by the address or port they listen on.

1, listen
The listening port, default 80; ports below 1024 require starting as root. It can be written as listen *:80, listen IP:port, and other forms.

2, server_name
The server name, such as localhost or a domain name; regular-expression matching is also possible.
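A sketch of two virtual hosts on the same port, distinguished by server_name (the names and document roots here are placeholders):

```nginx
server {
    listen       80;
    server_name  www.example.com;      # placeholder domain
    root         /var/www/site-a;
}

server {
    listen       80;
    server_name  static.example.com;   # placeholder domain
    root         /var/www/site-b;
}
```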

Module http_upstream

This module balances load from clients across back-end servers using simple scheduling algorithms. upstream takes the name of the load-balancing pool, and the back-end realservers are written inside the { } block as host:port options;. If there is only a single back end, it can also be written directly in proxy_pass.
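A sketch of an upstream block using the common per-server options (the addresses are placeholders; weight, max_fails, fail_timeout, and backup are standard ngx_http_upstream_module parameters):

```nginx
upstream backend {
    #ip_hash;                       # uncomment to pin each client IP to one server
    server 10.0.0.11:8080 weight=2 max_fails=2 fail_timeout=30s;
    server 10.0.0.12:8080 weight=1 max_fails=2 fail_timeout=30s;
    server 10.0.0.13:8080 backup;   # only used when the others are unavailable
}
```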

2.2.4 Location
In the HTTP service, a series of configuration items that correspond to certain URLs.

    • root /var/www/html: defines the server's default site root for this location. If the location URL matches a subdirectory or file, root usually does not need changing; it is generally placed in the server block or under location /.
    • index index.jsp index.html index.htm: defines the default file names served under the path; typically used together with root.
    • proxy_pass http://backend: forwards requests to the server list defined in the backend upstream — that is, the reverse proxy, paired with the upstream load balancer. You can also write proxy_pass http://ip:port directly.
    • proxy_redirect off;
    • proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
Take these four settings as given for now; digging deeper, each one involves quite complicated behavior that deserves a separate article.
The rules for writing location matches are particularly crucial and fundamental; see the reference articles summarizing Nginx location configuration and the writing of rewrite rules.
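Since location matching comes up repeatedly here, a compact sketch of the modifier precedence may help (= exact match, ^~ prefix that skips regex checks, ~ / ~* case-sensitive/insensitive regex, then plain prefix):

```nginx
server {
    location = / {                    # exact match for "/" only
        # highest priority
    }
    location ^~ /static/ {            # prefix match; regex locations are skipped for it
        # serve static assets
    }
    location ~* \.(gif|jpg|png)$ {    # case-insensitive regular expression
        # matched before plain prefixes
    }
    location / {                      # plain prefix; catch-all fallback
        # lowest priority
    }
}
```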

2.3 Other

2.3.1 Access Control Allow/deny

Nginx's access-control module is installed by default, and its syntax is very simple: you can have multiple allow and deny lines, allowing or denying access by an IP or IP range. Rules are checked in order, and matching stops at the first rule that is satisfied. For example:

Location/nginx-status {
stub_status on;
Access_log off;
# auth_basic "Nginxstatus";
# auth_basic_user_file/usr/local/nginx-1.6/htpasswd;
Deny all;

We can also use htpasswd from the httpd-tools package to set a login password for the access path:

# htpasswd -c htpasswd admin
New password:
Re-type new password:
Adding password for user admin
# htpasswd htpasswd admin   # change admin's password
# htpasswd htpasswd sean    # add another authenticated user

This generates a password file, encrypted with crypt by default. Uncomment the two auth_basic lines in the nginx-status location above and restart Nginx for it to take effect.
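Because rules are evaluated in order with the first match winning, a typical pattern is to allow specific ranges first and deny everyone else (the subnet below is a placeholder):

```nginx
location /nginx-status {
    stub_status on;
    allow 192.168.1.0/24;   # hypothetical internal subnet
    allow 127.0.0.1;
    deny  all;              # everyone else is refused
}
```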

2.3.2 Lists Directories AutoIndex

By default, Nginx does not allow listing an entire directory. If you need this feature, open nginx.conf and add autoindex on; in the location, server, or http section; it is best to add the other two parameters as well:

    • autoindex_exact_size off; — the default is on, showing the exact file size in bytes. When off, approximate sizes are shown in KB, MB, or GB.
    • autoindex_localtime on; — the default is off, and file times are shown in GMT. When on, file times are shown in the server's local time.

location /images {
    autoindex on;
    autoindex_exact_size off;
    autoindex_localtime on;
}