Nginx Study Notes 2

Source: Internet
Author: User

First of all, this article draws on the IBM developerWorks article about using nginx to speed up website access.

What kind of web server nginx is has been covered in a previous article.

Control nginx by Signal

Nginx supports the following signals:

TERM, INT — shut down quickly
QUIT — shut down gracefully
HUP — reload the configuration file
USR1 — reopen the log files
USR2 — upgrade the executable on the fly
WINCH — gracefully shut down worker processes

There are two ways to send these signals to nginx. The first is to read the ID of the running master process from the nginx.pid file in the logs directory, then run kill -XXX <pid>, where XXX is one of the signal names listed above. If only one nginx process is running on your system, you can also use the killall command; for example, killall -s HUP nginx reloads the nginx configuration.

About the configuration file

This section walks through nginx's main configuration file, nginx.conf. (The more configuration files you read, the better! ^_^)

user nobody;  # owner of the worker processes. No user was specified when installing from source, so assign a low-privilege user such as nginx or www-data; the same advice applies to Apache.
worker_processes 4;  # number of worker processes, usually equal to the number of CPU cores

# error_log logs/error.log;
# error_log logs/error.log notice;
# error_log logs/error.log info;
# pid logs/nginx.pid;

events {
    use epoll;                  # the best-performing event model on Linux
    worker_connections 2048;    # maximum number of concurrent connections per worker process
}

http {
    include mime.types;
    default_type application/octet-stream;

    # log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                 '$status $body_bytes_sent "$http_referer" '
    #                 '"$http_user_agent" "$http_x_forwarded_for"';

    # access_log off;
    access_log logs/access.log;  # log file name

    sendfile on;
    # tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;

    include gzip.conf;

    # configuration of all back-end servers in the cluster; here a Tomcat
    # cluster serves as the back-end web server
    upstream tomcats {
        server 192.168.0.11:8080 weight=10;
        server 192.168.0.11:8081 weight=10;
        server 192.168.0.12:8080 weight=10;
        server 192.168.0.12:8081 weight=10;
        server 192.168.0.13:8080 weight=10;
        server 192.168.0.13:8081 weight=10;
    }

    server {
        listen 80;              # HTTP port
        server_name localhost;
        charset utf-8;          # only specify this when necessary; front-end developers normally declare the charset in the page itself
        # access_log logs/host.access.log main;

        location ~ ^/NginxStatus/ {
            stub_status on;     # nginx status-monitoring configuration
            access_log off;
        }

        location ~ ^/(WEB-INF)/ {
            deny all;
        }

        # regular expression matching static file extensions
        location ~ \.(htm|html|asp|php|gif|jpg|jpeg|png|bmp|ico|rar|css|js|zip|java|jar|txt|flv|swf|mid|doc|ppt|xls|pdf|mp3|wma)$ {
            root /opt/webapp;
            expires 24h;
        }

        location / {
            proxy_pass http://tomcats;  # reverse proxy to the cluster
            include proxy.conf;
        }

        error_page 404 /html/404.html;
        # redirect server error pages to the static page /50x.html
        # error_page 502 503 /html/502.html;
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

Nginx monitoring

The above is a configuration example from an actual website; the inline comments explain each directive. In this configuration we first define a location ~ ^/NginxStatus/, so that nginx's runtime information can be monitored at http://localhost/NginxStatus/. The output looks like this:

Active connections: 70
Server accepts handled requests
14553819 14553819 19239266
Reading: 0 writing: 3 waiting: 67

The fields displayed by NginxStatus mean the following:

Active connections -- the number of connections nginx is currently handling.

Server accepts handled requests -- nginx accepted 14553819 connections in total, successfully handled 14553819 of them (proving none failed along the way), and served 19239266 requests in total (an average of about 1.3 requests per connection).

Reading -- the number of connections on which nginx is currently reading the request header from the client.

Writing -- the number of connections on which nginx is currently writing the response back to the client.

Waiting -- when keep-alive is enabled, this equals active - (reading + writing): idle keep-alive connections that nginx has finished serving and that are waiting for the next request.
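The arithmetic behind these field descriptions can be checked with a small parser. This is an illustrative sketch, not part of nginx; it simply reads the four-line stub_status report shown above.

```python
def parse_stub_status(text):
    """Parse the four-line stub_status report into a dict of counters."""
    lines = [ln.strip() for ln in text.strip().splitlines()]
    active = int(lines[0].split(":")[1])                    # "Active connections: N"
    accepts, handled, requests = (int(n) for n in lines[2].split())
    tokens = lines[3].split()                               # "Reading: a Writing: b Waiting: c"
    reading, writing, waiting = int(tokens[1]), int(tokens[3]), int(tokens[5])
    return {"active": active, "accepts": accepts, "handled": handled,
            "requests": requests, "reading": reading,
            "writing": writing, "waiting": waiting}
```

Feeding it the sample output above yields requests / handled ≈ 1.3 and confirms waiting = active - (reading + writing).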

Static File Processing

location ~ ^/images/ {
    root /opt/webapp/images;
}

The following configuration defines how requests for several file types are handled.

location ~ \.(htm|html|gif|jpg|jpeg|png|bmp|ico|css|js|txt)$ {
    root /opt/webapp;
    expires 24h;
}

For static files such as HTML pages, JS scripts, and CSS stylesheets, we want nginx to serve them directly to the browser, which greatly speeds up web browsing. For such files we use the root directive to specify where they are stored on disk, and, because such files rarely change, the expires directive to control how long browsers cache them, reducing unnecessary requests. The expires directive sets the Expires and Cache-Control headers in the HTTP response (which control page caching). It can be written in the following formats:

expires 60s;
expires 30m;
expires 24h;
expires 1d;
expires epoch;   # sends "Expires: Thu, 01 Jan 1970 00:00:01 GMT"
expires max;
expires off;
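What a relative expires value means to a browser can be illustrated with a short sketch. This is not nginx code; it only models the idea that a cached copy stays fresh until max-age seconds have elapsed since it was fetched.

```python
def max_age_seconds(expires):
    """Convert an expires value like '60s', '30m', '24h' or '1d' to seconds."""
    units = {"s": 1, "m": 60, "h": 3600, "d": 86400}
    return int(expires[:-1]) * units[expires[-1]]

def is_fresh(age_seconds, expires="24h"):
    """True while a cached response can be reused without a new request."""
    return age_seconds < max_age_seconds(expires)
```

For example, with expires 24h a copy fetched an hour ago is still fresh, while one fetched 25 hours ago must be re-requested.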

Dynamic File Processing

Nginx cannot itself execute popular dynamic pages such as JSP, ASP, PHP, and Perl, but it can forward such requests via reverse proxy to back-end servers, for example Tomcat, Apache, or IIS, which can process the dynamic page requests. In the preceding configuration example, after defining the static file requests that nginx handles directly, all remaining requests are sent to the back-end server with the proxy_pass directive (Tomcat in that example). The simplest use of proxy_pass looks like this:

location / {
    proxy_pass http://localhost:8080;
    proxy_set_header X-Real-IP $remote_addr;
}

Here, instead of using a cluster, requests are sent directly to the Tomcat service listening on port 8080, which handles requests such as JSPs and servlets.
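The proxy_set_header X-Real-IP line matters because, behind a reverse proxy, the TCP peer the back end sees is nginx itself; the back end must read the forwarded header to learn the client's real address. A hypothetical back-end helper (the function name is illustrative, not a real API) makes the point:

```python
def client_ip(headers, peer_addr):
    """Resolve the real client address behind a reverse proxy.

    Prefer the X-Real-IP header that nginx set; fall back to the
    socket peer address for direct (unproxied) connections.
    """
    return headers.get("X-Real-IP", peer_addr)
```

Without the header, every request would appear to come from nginx's own address.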

When page traffic is very heavy, multiple application servers are often needed to serve the dynamic pages, and a cluster architecture is required. Nginx defines a server cluster with the upstream directive. In the complete example earlier, we defined a cluster named tomcats containing six Tomcat services spread across three servers. The proxy_pass directive is then written as follows:

location / {
    proxy_pass http://tomcats;
    proxy_set_header X-Real-IP $remote_addr;
}

With this cluster configuration, nginx distributes requests across the nodes using a simple round-robin rule (weighted here by the weight parameter). If a node fails, or recovers and comes back online, nginx detects the change in status promptly so that user access is not affected.
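The distribution idea can be illustrated with a toy weighted round-robin dispatcher. This is a hypothetical sketch, not nginx's actual scheduler (nginx uses a smoother weighted round-robin), but the proportions are the same: a server with weight 10 receives ten slots per rotation.

```python
def build_rotation(servers):
    """servers: list of (address, weight) -> flat rotation list.

    Each address appears in the rotation once per unit of weight.
    """
    rotation = []
    for addr, weight in servers:
        rotation.extend([addr] * weight)
    return rotation

def dispatch(rotation, n):
    """Return the back end chosen for the n-th request (0-based)."""
    return rotation[n % len(rotation)]
```

With equal weights, as in the tomcats upstream above, this degenerates into plain round-robin over the six Tomcat instances.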

 
