Nginx configuration instructions

Source: Internet
Author: User
Tags: sendfile

 

# User and group
user www;
# Worker processes: adjust according to the hardware. Some say one per CPU core; I think a few more is fine.
worker_processes 5;
# Error log
error_log logs/error.log;
# PID file location
pid logs/nginx.pid;
worker_rlimit_nofile 8192;

events {
# Maximum number of connections per worker process. Tune it together with worker_processes above according to the hardware: make it as large as possible, but do not drive the CPU to 100%.
worker_connections 4096;
}

http {
include conf/mime.types;
# Reverse proxy configuration; open proxy.conf to see it
include /etc/nginx/proxy.conf;
# FastCGI configuration; open fastcgi.conf to see it
include /etc/nginx/fastcgi.conf;

default_type application/octet-stream;
# Log format
log_format main '$remote_addr - $remote_user [$time_local] $status '
'"$request" $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
# Access log
access_log logs/access.log main;
sendfile on;
tcp_nopush on;
# Adjust according to the actual situation; increase it if you serve many server names.
server_names_hash_bucket_size 128; # this seems to be required for some vhosts

# This is a FastCGI example. If you use FastCGI, read it closely.
server { # PHP/FastCGI
listen 80;
# Domain names; there can be several
server_name domain1.com www.domain1.com;
# Per-server access log; unlike the http-level one above, this lower-level setting overrides it
access_log logs/domain1.access.log main;
root html;

location / {
index index.html index.htm index.php;
}

# All requests ending in .php are sent to port 1025 over FastCGI
# The fastcgi.conf included above matters here; if you do not include it, put its directives inside this block.
location ~ \.php$ {
fastcgi_pass 127.0.0.1:1025;
}
}

# This is a reverse proxy example.
server { # simple reverse-proxy
listen 80;
server_name domain2.com www.domain2.com;
access_log logs/domain2.access.log main;

# Serve static files directly from nginx
location ~ ^/(images|javascript|js|css|flash|media|static)/ {
root /var/www/virtual/big.server.com/htdocs;
# Static files expire after 30 days. Use a larger value if they rarely change, a smaller one if they are updated frequently.
expires 30d;
}

# Forward requests to the backend web server. The difference from FastCGI is that a reverse proxy has a web server behind it, while FastCGI has a listening FastCGI process behind it; the protocols differ accordingly.
location / {
proxy_pass http://127.0.0.1:8080;
}
}

# Upstream load balancing. weight is the relative weight, which you can set according to each machine's capacity; reportedly nginx can also adjust according to backend response time. This requires several web servers in the background.

upstream big_server_com {
server 127.0.0.3:8000 weight=5;
server 127.0.0.3:8001 weight=5;
server 192.168.0.1:8000;
server 192.168.0.1:8001;
}

server {
listen 80;
server_name big.server.com;
access_log logs/big.server.access.log main;

location / {
proxy_pass http://big_server_com;
}
}
} # end of http block

--------------

After installation, nginx consists of only one program file and does not provide separate control programs; the nginx process is controlled through command-line parameters and system signals. Nginx's parameters include the following:

-c: use the specified configuration file instead of the nginx.conf file in the conf directory.

-t: test whether the configuration file is correct. Nginx must reload the configuration file when it restarts, so this option is very useful for detecting syntax errors in the configuration file beforehand.

-v: display the nginx version number.

-V: display the nginx version number, compilation information, and the parameters used at compile time.

For example, to test whether a configuration file is correct, we can use the following command:

sbin/nginx -t -c conf/nginx2.conf

 

Controlling nginx through process signals

Nginx supports the signals in the following table:

Signal       Effect
TERM, INT    Fast shutdown: the process exits immediately, cutting off pending requests
QUIT         Graceful shutdown: the process exits after finishing the requests currently being processed
HUP          Reload the configuration: start new worker processes with the new configuration and gracefully shut down the old ones; requests are not interrupted
USR1         Reopen the log files, used for log rotation (for example, generating a new log file every day)
USR2         Upgrade the executable on the fly (smooth binary upgrade)
WINCH        Gracefully shut down the worker processes

There are two ways to send these signals to control nginx. The first is to read the current nginx master process ID from nginx.pid in the logs directory, then run kill -XXX <pid>, where XXX is one of the signal names listed in the table above. If your system runs only one nginx process, you can also use the killall command; for example, running killall -s HUP nginx makes nginx reload its configuration.
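
The two steps (read the pid file, send the signal) can be sketched as a small helper. This is a minimal sketch assuming the default prefix layout, with nginx.pid under $PREFIX/logs; adjust PREFIX for your installation.

```shell
# Hypothetical helper: sends a signal from the table above to the
# nginx master process named in the pid file.
PREFIX=${PREFIX:-/usr/local/nginx}

nginx_signal() {
    # $1 is a signal name (HUP, USR1, QUIT, ...)
    kill -s "$1" "$(cat "$PREFIX/logs/nginx.pid")"
}

# Usage (commented out so this file can be sourced safely):
# nginx_signal HUP    # reload the configuration
# nginx_signal USR1   # reopen log files for rotation
```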

Configure nginx

Let's look at a real-world configuration file:

user nobody;
worker_processes 4;             # number of worker processes, usually the same as the number of CPU cores
# error_log logs/error.log;
# error_log logs/error.log notice;
# error_log logs/error.log info;
# pid logs/nginx.pid;

events {
    use epoll;                  # epoll is the best event model on Linux
    worker_connections 2048;    # maximum number of simultaneous connections allowed per worker
}

http {
    include mime.types;
    default_type application/octet-stream;

    # log_format main '$remote_addr - $remote_user [$time_local] $request '
    #                 '"$status" $body_bytes_sent "$http_referer" '
    #                 '"$http_user_agent" "$http_x_forwarded_for"';
    # access_log off;
    access_log logs/access.log; # log file name

    sendfile on;
    # tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;

    include gzip.conf;

    # configuration of all backend servers in the cluster
    upstream tomcats {
        server 192.168.0.11:8080 weight=10;
        server 192.168.0.11:8081 weight=10;
        server 192.168.0.12:8080 weight=10;
        server 192.168.0.12:8081 weight=10;
        server 192.168.0.13:8080 weight=10;
        server 192.168.0.13:8081 weight=10;
    }

    server {
        listen 80;              # HTTP port
        server_name localhost;
        charset utf-8;
        # access_log logs/host.access.log main;

        # nginx status monitoring
        location ~ ^/NginxStatus/ {
            stub_status on;
            access_log off;
        }

        location ~ ^/(WEB-INF)/ {
            deny all;
        }

        location ~ \.(htm|html|asp|php|gif|jpg|jpeg|png|bmp|ico|rar|css|js|zip|java|jar|txt|flv|swf|mid|doc|ppt|xls|pdf|mp3|wma)$ {
            root /opt/webapp;
            expires 24h;
        }

        location / {
            proxy_pass http://tomcats;  # reverse proxy
            include proxy.conf;
        }

        error_page 404 /html/404.html;
        # redirect server error pages to the static page /50x.html
        # error_page 502 503 /html/502.html;
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

 

Nginx monitoring

The above is the configuration of a real website; the comment lines explain the directives. In this configuration we first define a location ~ ^/NginxStatus/; by visiting http://localhost/NginxStatus/ you can monitor nginx's running status. The output looks like this:

Active connections: 70
server accepts handled requests
 14553819 14553819 19239266
Reading: 0 Writing: 3 Waiting: 67

 

NginxStatus shows the following:

  • Active connections: the number of active connections nginx is currently handling.
  • server accepts handled requests: nginx accepted 14553819 connections in total, successfully completed 14553819 handshakes (confirming none failed in between), and handled 19239266 requests in total (about 1.3 requests per handshake).
  • Reading: the number of connections on which nginx is reading request header data from clients.
  • Writing: the number of connections on which nginx is sending response data back to clients.
  • Waiting: with keep-alive enabled, this equals active - (reading + writing), i.e. connections on which nginx has finished processing and is waiting for the next request.
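
The "requests per handshake" figure in the second bullet can be reproduced directly from the counters above:

```shell
# Average requests per handled connection from the NginxStatus counters
# (19239266 requests over 14553819 connections).
awk 'BEGIN { printf "%.2f\n", 19239266 / 14553819 }'
# prints 1.32
```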

Static file handling

Using regular expressions, we can have nginx recognize various kinds of static files. For example, handling all requests under the images path can be written as:

location ~ ^/images/ {
    root /opt/webapp/images;
}

 

The following configuration defines handling for requests for several static file types.

location ~ \.(htm|html|gif|jpg|jpeg|png|bmp|ico|css|js|txt)$ {
    root /opt/webapp;
    expires 24h;
}

 

For files such as images, static HTML pages, JS scripts, and CSS stylesheets, we want nginx to serve them directly to the browser, which greatly speeds up page loading. To handle these files we specify their storage path with the root directive. Because such files change infrequently, we also use the expires directive to control their caching in the browser and cut unnecessary requests. The expires directive controls the "Expires" and "Cache-Control" headers in the HTTP response (and thereby the page cache). You can write expires in the following forms:

expires 1 January, 1970, 00:00:01 GMT;
expires 60s;
expires 30m;
expires 24h;
expires 1d;
expires max;
expires off;

 

Dynamic page request handling

Nginx itself does not support currently popular dynamic page technologies such as JSP, ASP, PHP, and Perl, but it can send requests through a reverse proxy to a backend server, such as Tomcat, Apache, or IIS, to handle dynamic page requests. In the configuration example above, we first defined some static files to be served directly by nginx; all other requests are forwarded to the backend server through the proxy_pass directive (Tomcat in the example above). The simplest proxy_pass usage is as follows:

location / {
    proxy_pass        http://localhost:8080;
    proxy_set_header  X-Real-IP  $remote_addr;
}

 

Here we did not use a cluster; the request is sent directly to the Tomcat service running on port 8080 to handle JSP and servlet requests.

When traffic becomes very large and several application servers are needed to handle dynamic pages together, we need a cluster architecture. Nginx defines a server cluster with the upstream directive; in the complete example at the beginning we defined a cluster named tomcats, which contains six Tomcat services across three servers. The proxy_pass directive is then written as follows:

location / {
    proxy_pass        http://tomcats;
    proxy_set_header  X-Real-IP  $remote_addr;
}

 

In nginx's cluster configuration, nginx distributes requests across the cluster nodes using a simple weighted distribution rule. When a node fails, and when it later comes back, nginx tracks its state in real time, so users are not affected by the failed node.
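
Beyond plain weights, the stock upstream module also lets you tune this failover behaviour per server. A hedged sketch, reusing the tomcats cluster from the example above (the values are illustrative, not a recommendation):

```nginx
# max_fails / fail_timeout control when a peer is considered down;
# a "backup" server receives traffic only when the primaries are unavailable.
upstream tomcats {
    server 192.168.0.11:8080 weight=10 max_fails=3 fail_timeout=30s;
    server 192.168.0.12:8080 weight=10 max_fails=3 fail_timeout=30s;
    server 192.168.0.13:8080 backup;
}
```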

Summary

The entire nginx distribution is only a bit over five hundred KB, but small as it is, it has everything. The function modules officially provided by nginx cover all kinds of configuration needs, such as compression, hotlink protection, clustering, FastCGI, streaming media serving, memcached support, URL rewriting, and so on. What's more, nginx offers performance that Apache and other HTTP servers cannot match; you can even improve a website's response speed simply by putting nginx in front of it, without changing the original website architecture.

This article only briefly introduces the installation of nginx and the configuration and use of its common basic features. For more information about nginx, please read the references at the end of the article. Here I am very grateful to my friend Chen Lei (chanix@msn.com), who has been maintaining the nginx Chinese Wiki (http://wiki.codemongers.com/NginxChs) and who first introduced me to this excellent piece of software.

If your website runs on Linux and does not depend on some very specific feature that nginx cannot provide, give nginx a try.
