Nginx Installation and Configuration

Source: Internet
Author: User
Tags: epoll, sendfile, nginx server, nginx reverse proxy


Installing Nginx

Download the latest Nginx source package and install the build dependencies:

yum -y install gcc gcc-c++ make libtool zlib zlib-devel openssl openssl-devel pcre pcre-devel


Nginx Plug-in module

nginx_upstream_check_module-0.3.0.tar.gz -- checks the health of the backend servers. nginx-goodies-nginx-sticky-module-ng-bd312d586752.tar.gz -- provides session stickiness for load balancing (after extracting under /usr/local/src, it is recommended to rename the directory to nginx-sticky-module-ng-1.2.5). Combining it with the upstream_check module requires an additional patch; see the article on Nginx load-balancer configuration in practice.


Configuring and installing Nginx

./configure --prefix=/usr/local/nginx-1.6 --with-pcre \
  --with-http_stub_status_module --with-http_ssl_module \
  --with-http_gzip_static_module --with-http_realip_module \
  --add-module=../nginx_upstream_check_module-0.3.0

make

make install


Description of the compilation options:

--with-http_ssl_module: enables the HTTPS protocol module. Not built by default; requires openssl and openssl-devel to be installed

--with-http_stub_status_module: Used to monitor the current state of Nginx

--with-http_realip_module: lets Nginx take the client IP address from a request header set by a proxy (for example X-Real-IP or X-Forwarded-For), so that the backend servers can log the original client's IP address

--add-module=path: adds a third-party external module, such as nginx-sticky-module-ng or a cache module. Nginx must be recompiled every time a new module is added (Tengine can load new modules without recompiling)


./configure \
  --prefix=/usr \
  --sbin-path=/usr/sbin/nginx \
  --conf-path=/etc/nginx/nginx.conf \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --pid-path=/var/run/nginx/nginx.pid \
  --lock-path=/var/lock/nginx.lock \
  --user=nginx \
  --group=nginx \
  --with-http_ssl_module \
  --with-http_stub_status_module \
  --with-http_gzip_static_module \
  --http-client-body-temp-path=/var/tmp/nginx/client/ \
  --http-proxy-temp-path=/var/tmp/nginx/proxy/ \
  --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ \
  --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi \
  --with-pcre=../pcre-7.8 \
  --with-zlib=../zlib-1.2.3


Starting and stopping Nginx

## Check that the configuration file is correct
# /usr/local/nginx-1.6/sbin/nginx -t

# ./sbin/nginx -V  # shows the compile options

## Start and stop

# ./sbin/nginx  # uses conf/nginx.conf by default; -c specifies another configuration file

# ./sbin/nginx -s stop

or pkill nginx

## Reload; does not change the configuration file specified at startup

# ./sbin/nginx -s reload

or kill -HUP `cat /usr/local/nginx/logs/nginx.pid`


Download an Nginx init script to /etc/init.d/, modify the paths inside it, and make it executable.


The disadvantage of installing via yum is that third-party modules cannot be added later.




Configuration of Nginx


Main configuration options for the nginx.conf configuration file:

The Nginx configuration file is divided into four main parts:

Main (global setting),

Server (Host settings),

Upstream (upstream server setup, mainly reverse proxy, load balancer related configuration) and

location (settings for specific URL matches), and each section contains several directives.

The instructions set in the main section will affect the settings of all other parts;

The instructions in the Server section are primarily used to specify the virtual host domain name, IP, and port;

The upstream instruction is used to set up a series of backend servers, set up the reverse proxy and load balance of the back-end server;

The location section is used to match page locations (for example, root directory "/", "/images", and so on).

The relationship between them: server inherits from main, and location inherits from server; upstream neither inherits directives nor passes them on. It has its own dedicated directives, which are not used elsewhere.
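A minimal skeleton (host names and addresses are illustrative) showing how the four parts nest:

```nginx
# top-level (main) section: global settings
user  nginx;
worker_processes  2;

events {
    worker_connections  1024;
}

http {
    # upstream: list of backend servers for load balancing
    upstream backend {
        server 192.168.10.100:8080;
    }

    # server: one virtual host
    server {
        listen       80;
        server_name  example.com;

        # location: settings for a specific URL match
        location / {
            proxy_pass http://backend;
        }
    }
}
```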


nginx.conf example with a reverse proxy at the front end: Nginx serves static files such as js and png itself, and forwards dynamic requests (such as jsp) to a backend Tomcat server.

user  www www;
worker_processes  2;
error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
pid        logs/nginx.pid;

events {
    use epoll;
    worker_connections  2048;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;

    # gzip compression settings
    gzip on;
    gzip_min_length 1k;
    gzip_buffers    4 16k;
    gzip_http_version 1.0;
    gzip_comp_level 6;
    gzip_types text/html text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
    gzip_vary on;

    # http_proxy settings
    client_max_body_size      10m;
    client_body_buffer_size   128k;
    proxy_connect_timeout     75;
    proxy_send_timeout        75;
    proxy_read_timeout        75;
    proxy_buffer_size         4k;
    proxy_buffers             4 32k;
    proxy_busy_buffers_size   64k;
    proxy_temp_file_write_size  64k;
    proxy_temp_path           /usr/local/nginx/proxy_temp 1 2;

    # backend server list for load balancing
    upstream backend {
        #ip_hash;
        server 192.168.10.100:8080 max_fails=2 fail_timeout=30s;
        server 192.168.10.101:8080 max_fails=2 fail_timeout=30s;
    }

    # the important virtual host configuration
    server {
        listen       80;
        server_name  itoatest.example.com;
        root    /apps/oaapp;
        charset utf-8;
        access_log  logs/host.access.log  main;

        # load balancing + reverse proxy for everything under /
        location / {
            root   /apps/oaapp;
            index  index.jsp index.html index.htm;
            proxy_pass      http://backend;
            proxy_redirect  off;
            # so the backend web servers can get the real client IP via X-Forwarded-For
            proxy_set_header  Host             $host;
            proxy_set_header  X-Real-IP        $remote_addr;
            proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        }

        # static files: handled by Nginx, not passed to the backend Tomcat
        location ~* /download/ {
            root /apps/oa/fs;
        }
        location ~ .*\.(gif|jpg|jpeg|bmp|png|ico|txt|js|css)$ {
            root /apps/oaapp;
            expires 7d;
        }
        location /nginx_status {
            stub_status on;
            access_log off;
            allow 192.168.10.0/24;
            deny all;
        }
        location ~ ^/(WEB-INF)/ {
            deny all;
        }

        #error_page  404  /404.html;
        # redirect server error pages to the static page /50x.html
        error_page  500 502 503 504  /50x.html;
        location = /50x.html {
            root html;
        }
    }
    ## other virtual hosts: additional server blocks go here
}


Description of common directives


Main global configuration

Parameters that affect Nginx at runtime but are unrelated to a specific service (such as the HTTP service or the mail proxy), for example the number of worker processes and the identity they run as.

worker_processes 2

Set in the top-level main section of the configuration file. The number of worker processes; the master process receives requests and dispatches them to the workers for processing. The value is a plain number: it can be set to the number of CPU cores (grep -c ^processor /proc/cpuinfo), or to auto. If SSL and gzip are enabled, setting it to as much as twice the number of logical CPUs can reduce I/O waits. If the server runs other services besides Nginx, consider lowering it.
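A quick way to read the core count on Linux, sketched below; `nproc` from coreutils is shown as an alternative:

```shell
# count logical CPUs from /proc/cpuinfo (Linux only)
cores=$(grep -c ^processor /proc/cpuinfo)
echo "worker_processes  ${cores};"

# nproc reports the number of processing units available
nproc
```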

worker_cpu_affinity

Also set in the main section. Under high concurrency, pinning worker processes to CPUs reduces the performance loss from rebuilding state such as registers when processes migrate between CPU cores. For example: worker_cpu_affinity 0001 0010 0100 1000; (quad core).

worker_connections 2048

Set in the events section. The maximum number of connections each worker process can handle concurrently (including all connections to clients and to backend proxied servers). When Nginx acts as a reverse proxy, the maximum number of clients = worker_processes * worker_connections / 4, so here the maximum number of client connections is 1024. This can be raised to 8192 if the situation calls for it, but it must not exceed worker_rlimit_nofile below. When Nginx acts as a plain HTTP server, divide by 2 instead of 4.
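The formula above can be checked with shell arithmetic, using the values from this article's example configuration:

```shell
worker_processes=2
worker_connections=2048

# reverse proxy: each client connection also consumes a backend connection,
# so the article's formula divides by 4
echo $(( worker_processes * worker_connections / 4 ))   # 1024

# plain HTTP server: divide by 2 instead
echo $(( worker_processes * worker_connections / 2 ))   # 2048
```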

worker_rlimit_nofile 10240

Set in the main section. There is no default; it can be raised up to the operating system's limit of 65535.

use epoll

Set in the events section. On Linux, Nginx uses the epoll event model by default, which is largely why it is so efficient on Linux. On FreeBSD and OpenBSD, Nginx uses the similarly efficient kqueue model. select is used only when the operating system supports neither of these efficient models.



HTTP Server


Some configuration parameters related to providing the HTTP service. For example: whether to use keepalive, whether to use gzip compression, and so on.

sendfile on

Enables efficient file transfer: the sendfile directive specifies whether Nginx calls the sendfile() system call to output files, which avoids context switches between user space and kernel space. Set it to on for normal applications; for disk-I/O-heavy workloads such as file downloads, it can be set to off to balance disk and network I/O processing speed and reduce system load.

keepalive_timeout 65: keep-alive timeout in seconds. This parameter is very sensitive: it interacts with the browser type, the backend servers' timeout settings, and the operating system settings, and could fill an article on its own. Keep-alive connections save the cost of re-establishing connections when requesting many small files, but if a large file upload does not complete within 65s, it will fail. If the timeout is set too long and there are many users, keeping connections open for a long time ties up resources.

send_timeout: the timeout for responding to the client. It is limited to the time between two write operations; if the client is not reading anything within this time, Nginx closes the connection.

client_max_body_size 10m

The maximum request body size allowed from a client. If larger files are uploaded, raise this limit.

client_body_buffer_size 128k

The buffer size used by the proxy to buffer client request bodies

Module http_proxy:

This module implements Nginx's reverse-proxy functionality, including caching (covered in a separate article)

proxy_connect_timeout 60

The timeout for Nginx to establish a connection with a backend server (proxy connect timeout)

proxy_read_timeout 60

After a connection succeeds, the timeout between two successive read operations from the backend server (proxy read timeout)

proxy_buffer_size 4k

Sets the buffer size used by the proxy server (Nginx) to read and hold the response headers from the backend realserver; the default is the same size as one buffer in proxy_buffers, and in practice this directive can be set smaller

proxy_buffers 4 32k

The number and size of buffers Nginx uses to cache the response from the backend realserver for a single connection; the average web page is under 32k, hence this setting

proxy_busy_buffers_size 64k

Buffer size under high load (typically proxy_buffers * 2)

proxy_max_temp_file_size

When proxy_buffers cannot hold the backend server's response, part of it is saved to a temporary file on disk; this directive sets the maximum temporary file size (default 1024m). It is unrelated to proxy_cache. Data beyond this value is passed synchronously from the upstream server instead of being buffered to disk. Set to 0 to disable temporary files.

proxy_temp_file_write_size 64k

Limits the amount of data written to a temporary file at one time when buffering a proxied response to disk. proxy_temp_path (which can be set at compile time) specifies the directory the files are written to.

proxy_pass and proxy_redirect: see the location section.

Module http_gzip:

gzip on: enables gzip compression of output, reducing network transfer.

gzip_min_length 1k: sets the minimum response size eligible for compression, taken from the Content-Length header. The default is 20. Setting it to more than 1k is recommended; compressing very small responses can actually make them larger.

gzip_buffers 4 16k: sets how many buffers of what size are allocated to hold the gzip-compressed result stream. 4 16k means four 16k buffers, i.e. enough memory for four times 16k of compressed output.

gzip_http_version 1.0: identifies the HTTP protocol version. Early browsers did not support gzip compression and would show garbled output, so this option was added for backward compatibility. If Nginx sits behind a reverse proxy and you still want gzip enabled, set this to 1.0, because the proxied communication may use HTTP/1.0.

gzip_comp_level 6: the gzip compression level. 1 gives the lowest compression ratio but the fastest processing; 9 gives the highest ratio but the slowest processing (faster transfer at the cost of more CPU)

gzip_types: the MIME types to compress. The text/html type is always compressed, whether or not it is listed.

gzip_proxied any: when Nginx acts as a reverse proxy, controls whether responses for proxied requests are compressed; a request is treated as proxied based on the presence of the Via header.

gzip_vary on: adds a Vary: Accept-Encoding header to the response, allowing front-end cache servers to cache the gzip-compressed page, for example letting Squid cache Nginx's compressed data.
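Putting the directives above together, a typical gzip block in the http section looks like this (values taken from this article's example configuration):

```nginx
http {
    gzip              on;
    gzip_min_length   1k;
    gzip_buffers      4 16k;
    gzip_http_version 1.0;
    gzip_comp_level   6;
    gzip_types        text/plain text/css application/json application/javascript application/x-javascript application/xml;
    gzip_vary         on;
}
```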



Server virtual Host


The HTTP service supports several virtual hosts. Each virtual host corresponds to a server configuration block that contains that host's settings. Several server blocks can also be created when proxying the mail service. Servers are distinguished by the address or port they listen on.

Listen

The listening port, default 80; ports below 1024 require starting as root. Forms such as listen *:80 and listen 127.0.0.1:80 are allowed.

server_name

Server names, such as localhost or www.example.com; regular-expression matching is also supported.
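A sketch of the listen/server_name forms mentioned above, including a wildcard and a regular-expression match (all names are illustrative):

```nginx
server {
    listen 80;
    # exact names, a wildcard, and a regular expression (a regex starts with ~)
    server_name localhost www.example.com *.example.org ~^www\d+\.example\.net$;
}
```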

Module http_upstream

This module implements load balancing from client IPs to backend servers using simple scheduling algorithms. upstream is followed by the name of the load-balancer group; the backend realservers are listed inside {} in the form host:port options;. If there is only one backend to proxy, it can also be written directly in proxy_pass.
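A minimal sketch of the two ways to point proxy_pass at a backend (addresses are illustrative, taken from this article's example):

```nginx
upstream backend {
    server 192.168.10.100:8080 max_fails=2 fail_timeout=30s;
    server 192.168.10.101:8080 max_fails=2 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;              # via the upstream group
        # proxy_pass http://192.168.10.100:8080;  # or a single backend directly
    }
}
```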



Location


A series of configuration items that correspond to some specific URLs in the HTTP service.

root /var/www/html

Defines the default document root for the server. If a location matches a subdirectory or file and defines its own root, that one takes effect instead; root is generally placed inside the server directive or within location blocks.

index index.jsp index.html index.htm

Defines the files served by default for the path, typically placed after root

proxy_pass http://backend

Forwards the request to the server list defined in backend; this is the reverse proxy, paired with the upstream load balancer. You can also write proxy_pass http://ip:port directly.

proxy_redirect off;

proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

These four are set aside for now; each one involves a complex topic that deserves an article of its own.

Writing location matching rules is particularly critical and fundamental; see the article summarizing Nginx location configuration and rewrite rules.




Access Control Allow/deny

The Nginx access-control module is installed by default, and its syntax is very simple: there can be multiple allow and deny directives, allowing or denying access for an IP or IP range, and matching stops at the first rule that applies.

location /nginx-status {
    stub_status on;
    access_log off;
    #auth_basic "NginxStatus";
    #auth_basic_user_file /usr/local/nginx-1.6/htpasswd;
    allow 192.168.10.100;
    allow 172.29.73.0/24;
    deny all;
}

Use the htpasswd tool (from the httpd-tools package) to set a login password for the protected path:

# htpasswd -c htpasswd admin
New password:
Re-type new password:
Adding password for user admin
# htpasswd htpasswd admin    # modify admin's password
# htpasswd htpasswd sean     # add another authenticated user



List Directory AutoIndex


By default, Nginx does not allow listing a whole directory. To enable this, open nginx.conf and add autoindex on; inside a location, server, or http block; two other parameters are usually added as well:

autoindex_exact_size off; The default is on, showing exact file sizes in bytes. When changed to off, approximate sizes are shown in KB, MB, or GB

autoindex_localtime on;

The default is off, showing file times in GMT. When on, file times are shown in the server's local time

location /images {
    root /var/www/nginx-default/images;
    autoindex on;
    autoindex_exact_size off;
    autoindex_localtime on;
}

