Nginx Server Setup and Basic Configuration


Nginx ("engine x") is a high-performance HTTP server and reverse proxy. It was originally developed to solve the C10k problem.

The Nginx architecture leverages features of modern operating systems to implement a high-performance HTTP server. On Linux, for example, Nginx uses mechanisms such as epoll, sendfile, file AIO, and direct I/O to stay both efficient and very light on resources; the project claims that maintaining 10,000 inactive HTTP keep-alive connections requires only about 2.5 MB of memory.

Nginx runs multiple processes on demand: a master process and several worker processes, plus a cache loader process and a cache manager process when caching is configured. Each process contains only one thread, and interprocess communication is implemented mainly through shared memory. The master process runs as root, while the workers, cache loader, and cache manager should all run as an unprivileged user.

1. Install Nginx
The EPEL repository for CentOS 6 includes an Nginx RPM package, but its version is fairly old. If you need a newer version, you can use the official RPM package, or compile and install from the source package.

You can also use one of the enhanced distributions built on Nginx, such as Taobao's Tengine or OpenResty; both are good choices.

1.1 Common compilation parameters

--prefix=PATH: specify the Nginx installation directory
--conf-path=PATH: specify the path of the nginx.conf configuration file
--user=NAME: user that the Nginx worker processes run as
--with-pcre: enable PCRE regular expression support
--with-http_ssl_module: enable SSL support
--with-http_stub_status_module: for monitoring Nginx status
--with-http_realip_module: allow changing the client IP address to the one given in a client request header
--with-file-aio: enable file AIO
--add-module=PATH: add a third-party external module
A complete example set of configure options:

--prefix=/usr/local/nginx \
--conf-path=/etc/nginx/nginx.conf \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--http-client-body-temp-path=/var/tmp/nginx/client_body \
--http-proxy-temp-path=/var/tmp/nginx/proxy \
--http-fastcgi-temp-path=/var/tmp/nginx/fastcgi \
--http-uwsgi-temp-path=/var/tmp/nginx/uwsgi \
--pid-path=/var/run/nginx.pid \
--lock-path=/var/lock/nginx \
--user=nginx \
--group=nginx \
--with-file-aio \
--with-http_ssl_module \
--with-http_realip_module \
--with-http_sub_module \
--with-http_gzip_static_module \
--with-http_stub_status_module \
--with-pcre
1.2 Startup and shutdown of Nginx

Start Nginx:

# nginx


Stop Nginx:

# nginx -s stop


Reread the configuration file:

# nginx -s reload
# pkill -HUP nginx


Reopen the log files:

# nginx -s reopen
# pkill -USR1 nginx


You can also extract the /etc/init.d/nginx file from the Nginx RPM package and, after adjusting the paths in it, use it as a service script:

# service nginx {start|stop|status|restart|reload|configtest}


2. nginx.conf configuration file

The Nginx configuration file is divided into four main parts: main (global settings), http (general HTTP settings), server (virtual host settings), and location (matching URL paths). There are other configuration segments as well, such as events and upstream.

2.1 General Settings

user nginx
Specifies the user and group that the Nginx worker processes run as

worker_rlimit_nofile #
Specifies the maximum number of files that all worker processes may open

worker_cpu_affinity
Sets the CPU affinity of the worker processes to avoid the performance cost of processes migrating between CPUs, e.g. worker_cpu_affinity 0001 0010 0100 1000; (quad core)

worker_processes 4
The number of worker processes, which can be set to the number of CPUs; if SSL and gzip are enabled, this value can be increased appropriately

worker_connections 1000
The maximum number of concurrent connections a single worker process can accept; placed in the events segment

error_log logs/error.log info
The path and log level of the error log

use epoll
Use the epoll event model; placed in the events section
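
Put together, these directives form the skeleton of a minimal nginx.conf; this is only a sketch, and the paths, worker count, and affinity mask are placeholders to be adjusted for your machine:

```nginx
# Global (main) settings
user                nginx;
worker_processes    4;
worker_cpu_affinity 0001 0010 0100 1000;   # pin each worker to one core (quad core)
worker_rlimit_nofile 65535;                # max open files across all workers
error_log           logs/error.log info;

events {
    use epoll;                 # epoll event model (Linux)
    worker_connections 1000;   # max concurrent connections per worker
}

http {
    # server/location blocks go here
}
```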

2.2 HTTP Server

server {}:
Defines a virtual host

listen 80;
Defines the address and port to listen on; by default Nginx listens on all addresses of the machine

server_name NAME [...];
Defines the virtual host name; multiple names may be given, and regular expressions or wildcards may be used

sendfile on
Enables sendfile for faster responses to clients

keepalive_timeout 65
Keep-alive connection timeout, in seconds

send_timeout
Specifies the timeout for responding to clients

client_max_body_size 10m
Maximum request body size allowed from clients

root PATH
Sets the root directory on the file system that requested URLs are mapped to

location [= | ~ | ~* | ^~] URI {...}
Sets a URI matching path
=: exact match
~: regular expression match, case-sensitive
~*: regular expression match, case-insensitive
^~: prefix match on the beginning of the URI, without using regular expressions
Priority:
= > exact location path > ^~ > ~ > ~* > location prefix path > location /
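
The priority order can be illustrated with a small set of location blocks (the paths here are hypothetical examples):

```nginx
location = /logo.png  { root /web/img; }   # exact match: wins for /logo.png only
location ^~ /static/  { root /web; }       # prefix match, regexes skipped: e.g. /static/js/app.js
location ~* \.(jpg|png)$ { root /web/pics; } # case-insensitive regex: e.g. /photos/cat.JPG
location /            { root /web/htdocs; }  # catch-all prefix: everything else
```

A request for /static/banner.png matches both the ^~ prefix and the regex, but the ^~ block wins because regex locations are skipped once a ^~ prefix matches.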

allow and deny
IP-based access control. For example, to allow access only from clients on the 192.168.0.0/24 network:

allow 192.168.0.0/24;
deny all;
stub_status on
Enables the status display; it can only be used inside a location. To expose a status page:

location /status {
    stub_status on;
    allow 172.16.0.0/16;
    deny all;
}


rewrite <REGEX> <REPL> <FLAG>
Rewrites URLs; several flags are available.
For example:

rewrite ^/images/(.*\.jpg)$ /imgs/$1 break;

Available flags:
- last: after this rewrite completes, continue matching against the other rewrite rules
- break: stop matching rewrite rules after this rewrite completes
- redirect: returns a 302 (temporary) redirect; the client issues a new request for the redirected URL
- permanent: returns a 301 (permanent) redirect; the client issues a new request for the redirected URL
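
The difference between last and break can be seen in a small hypothetical ruleset (the /download and /mirror paths are made up for illustration):

```nginx
location /download/ {
    # `last` restarts location matching with the rewritten URI, so
    # /download/a.iso is matched again, this time hitting /mirror/ below.
    rewrite ^/download/(.*)$ /mirror/$1 last;
}

location /mirror/ {
    # `break` stops rewrite processing and serves the rewritten URI from
    # this location directly, with no further location matching.
    rewrite ^/mirror/(.*)$ /data/$1 break;
    root /web;   # /mirror/a.iso is served from /web/data/a.iso
}
```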

An example server configuration:

server {
    listen 80;
    server_name www.example.com;
    root /web/htdocs;

    location / {
        index index.html index.htm;
    }

    location /status {
        stub_status on;
        allow 10.0.0.0/8;
        deny all;
        access_log off;
    }
}


2.3 SSL Configuration

Enable an SSL virtual host

server {
    listen 443;
    server_name example.com;

    root /apps/www;
    index index.html index.htm;

    ssl on;
    ssl_certificate /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

#   ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
#   ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
#   ssl_prefer_server_ciphers on;
}

Here ssl_certificate is the server certificate file and ssl_certificate_key is the private key file.

If you want to force HTTP requests to be redirected to HTTPS, you can do this:

server {
    listen 80;
    server_name example.me;

    return https://$server_name$request_uri;
}


2.4 Nginx as a load-balancing reverse proxy

When Nginx acts as a reverse proxy with more than one back-end host, you can use upstream to define a pool of back-end hosts and then reference the pool name directly in the proxy configuration. Within upstream you can define the load-balancing algorithm, weights, health-check parameters, and so on.

For example:

upstream backend {
    server 172.16.0.1:80 weight=1 max_fails=3 fail_timeout=10;
    server 172.16.0.2:80 weight=1 max_fails=3 fail_timeout=10;
}


By default, the round-robin scheduling algorithm is used, and failed hosts are detected and restored to the pool once they become healthy again.

Nginx can also use these algorithms:

ip_hash: hashes the source address; mainly used to maintain sessions
least_conn: schedules based on the least number of active connections
sticky: cookie-based session binding. On the client's first visit, Nginx inserts routing information into a cookie, or selects the value of a cookie field as the key; each subsequent request is then scheduled based on this information.
sticky supports three modes: cookie, route, and learn.

For example, scheduling based on a cookie name:

upstream backend {
    server backend1.example.com;
    server backend2.example.com;

    sticky cookie srv_id expires=1h domain=.example.com path=/;
}


Use this host group in a reverse proxy:

location / {
    proxy_pass http://backend;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}


proxy_pass URL specifies the back-end host to proxy to. The URL can use the "http" or "https" scheme, and the host part can be a domain name, an IP address, or an upstream pool name.

- If the proxied URL includes a URI, such as http://127.0.0.1/remote, requests are passed to that URI directly, regardless of the request's own URI
- If the proxied URL has no URI, such as http://127.0.0.1, the URI requested by the client is passed through to the specified host
- If the location uses pattern (regex) matching, the matched URL is passed through to the proxied URL unchanged
- If a URI rewrite is applied in the location, proxy_pass processes the rewritten result

proxy_set_header HEADER VALUE modifies the headers of the forwarded request
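
The first two rules can be compared side by side (the addresses and paths are hypothetical):

```nginx
# With a URI on proxy_pass: the matching /app/ prefix is replaced,
# so a request for /app/foo reaches the back end as /remote/foo.
location /app/ {
    proxy_pass http://127.0.0.1/remote/;
}

# Without a URI: the client's URI is passed through unchanged,
# so a request for /app2/foo reaches the back end as /app2/foo.
location /app2/ {
    proxy_pass http://127.0.0.1;
}
```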

2.5 Cache-Related settings when reverse proxy

proxy_cache_path PATH [levels=levels] keys_zone=name:size

Defines the on-disk cache path. Nginx stores the cache as key-value pairs: keys_zone specifies the name and size of the shared memory zone holding the keys, while the corresponding values are stored under the specified path. levels specifies the number of directory levels and the number of characters per level in the cache path. This directive can only be defined in the http segment.

Such as:

proxy_cache_path /var/cache/nginx/proxy levels=1:2 keys_zone=one:10m;


proxy_cache_valid [code ...] time specifies how long to cache content for different response codes.

Such as:

proxy_cache_valid 302 10m;
proxy_cache_valid 404  1m;
proxy_cache_valid any  1m;


proxy_cache_methods METHOD ... defines which request methods' results may be cached, such as:

proxy_cache_methods GET HEAD;


proxy_cache NAME specifies using a predefined cache zone for caching
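
Combined, the caching directives above might be used like this; this is a sketch in which the zone name `one` follows the earlier example and `backend` is the upstream pool from section 2.4:

```nginx
http {
    # 10 MB of keys in shared memory; cached bodies under /var/cache/nginx/proxy
    proxy_cache_path /var/cache/nginx/proxy levels=1:2 keys_zone=one:10m;

    server {
        location / {
            proxy_pass          http://backend;
            proxy_cache         one;            # use the zone defined above
            proxy_cache_methods GET HEAD;       # cache only GET/HEAD results
            proxy_cache_valid   200 302 10m;    # cache these responses for 10 min
            proxy_cache_valid   any 1m;         # everything else for 1 min
        }
    }
}
```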

2.6 FastCGI proxy settings

When using FastCGI, the proxy is configured much like proxy_pass, and a FastCGI cache can also be used, configured similarly to proxy_cache.

location ~ \.php$ {
    root          /web/htdocs;
    fastcgi_pass  127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include       fastcgi_params;
}


3. Some commonly used built-in variables

$arg_name: the "name" parameter in the request URI
$args: all parameters of the request URI; same as $query_string
$uri: the URI of the current request, without parameters
$request_uri: the requested URI, with full parameters
$host: the Host header of the HTTP request; if there is no Host header, the name of the virtual host handling the request is used instead
$hostname: the hostname of the machine the Nginx service is running on
$remote_addr: client IP
$remote_port: client port
$remote_user: the user name entered by the client during user authentication
$request_filename: the local file path that the URI in the user request maps to after conversion by the local root or alias
$request_method: request method
$server_addr: server address
$server_name: server name
$server_port: server port
$server_protocol: the protocol the server uses to respond to the client, e.g. HTTP/1.1, HTTP/1.0
$scheme: the scheme used in the request, e.g. the "https" in https://www.magedu.com/
$http_name: a specified header in the request; e.g. $http_host matches the Host header of the request
$sent_http_name: a specified header in the response; e.g. $sent_http_content_type matches the Content-Type header of the response
$status: response status code
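
These variables are handy in a custom access-log format, for example; this is a sketch of a common "combined"-style format using several of the variables above (plus $time_local and $body_bytes_sent):

```nginx
http {
    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request_method $request_uri $server_protocol" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent"';

    access_log /var/log/nginx/access.log main;
}
```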
