The main points of the optimization of Nginx as reverse proxy

Source: Internet
Author: User
Tags: epoll, vps

Original address: http://my.oschina.net/hyperichq/blog/405421

Common optimization essentials

When Nginx is used for a reverse proxy, each client will use two connections:

One to serve the client's request, and the other to reach the backend;

If the machine has a two-core CPU, for example:

```
$ grep ^proces /proc/cpuinfo | wc -l
2
```

Then, you can start with the following configuration:

```nginx
# One worker per CPU core.
worker_processes  2;

events {
    worker_connections  8096;
    multi_accept        on;
    use                 epoll;
}

worker_rlimit_nofile 40000;

http {
    sendfile           on;
    tcp_nopush         on;
    tcp_nodelay        on;
    keepalive_timeout  15;
}
```
Standard proxy configuration

The following is a basic reverse proxy configuration template that forwards all requests to the specified back-end app.

For example, requests to http://your.ip:80/ are forwarded to the private server at http://127.0.0.1:4433/:

```nginx
# One worker per CPU core.
worker_processes  2;

# Event handling.
events {
    worker_connections  8096;
    multi_accept        on;
    use                 epoll;
}

http {
    # Basic reverse proxy server.
    upstream backend {
        server 127.0.0.1:4433;
    }

    # *:80 -> 127.0.0.1:4433
    server {
        listen       80;
        server_name  example.com;

        # Send all traffic to the back end.
        location / {
            proxy_pass        http://backend;
            proxy_redirect    off;
            proxy_set_header  X-Forwarded-For $remote_addr;
        }
    }
}
```

Below, we will optimize on this basis.

Buffer control

If buffering is disabled, Nginx passes the backend's response to the client as soon as it is received;

Nginx does not wait to read the entire response from the proxied server.

The maximum amount of data Nginx will read from the server at one time is controlled by proxy_buffer_size.

```nginx
proxy_buffering    off;
proxy_buffer_size  128k;
proxy_buffers      100 128k;
```
Cache and Expiration control

The configuration above forwards all requests to the back-end app. To keep static requests from adding load on the back-end application, we can configure Nginx to cache responses that do not change.

This means Nginx does not forward those requests to the back end at all.

In the following example, *.html and *.gif files (among others) are cached for 30 minutes:

```nginx
http {
    #
    # The path we'll cache to.
    #
    proxy_cache_path /tmp/cache levels=1:2 keys_zone=cache:60m max_size=1G;
}

            # Send all traffic to the back end.
            location / {
                proxy_pass        http://backend;
                proxy_redirect    off;
                proxy_set_header  X-Forwarded-For $remote_addr;

                location ~* \.(html|css|jpg|gif|ico|js)$ {
                    proxy_cache        cache;
                    proxy_cache_key    $host$uri$is_args$args;
                    proxy_cache_valid  200 301 302 30m;
                    expires            30m;
                    proxy_pass         http://backend;
                }
            }
```

Here, we cache responses to /tmp/cache and limit the cache size to 1G. At the same time, only valid response data is cached, for example:

```nginx
proxy_cache_valid  200 301 302 30m;
```

Any response whose status code is not HTTP 200, 301, or 302 will not be cached.

For applications such as WordPress, cookies and cache-expiration times need careful handling; caching only static resources avoids such problems.
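One common way to handle this, sketched below, is to skip the cache whenever a WordPress login or comment cookie is present, so logged-in users always reach the back end. The cookie names follow WordPress conventions; the $skip_cache variable and the upstream name "backend" are illustrative choices, not part of the original configuration:

```nginx
location / {
    set $skip_cache 0;
    if ($http_cookie ~* "comment_author|wordpress_logged_in|wp-postpass") {
        set $skip_cache 1;
    }

    proxy_cache         cache;
    proxy_cache_bypass  $skip_cache;   # do not serve these requests from cache
    proxy_no_cache      $skip_cache;   # do not store these responses in cache
    proxy_pass          http://backend;
}
```

Anonymous visitors still get cached pages; anyone carrying a matching cookie bypasses the cache in both directions.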

Verify

The effect of an optimized configuration must be verified in practice, so it is recommended to deploy a monitoring tool that covers the following:

Nginx: the open-source version exposes only the following 7 metrics:

connections, accepts, handled, requests, reading, writing, waiting
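These counters come from Nginx's stub_status module (available when built with --with-http_stub_status_module). A minimal sketch exposing them on a local-only endpoint; the /nginx_status path and the listen port are arbitrary choices:

```nginx
server {
    listen 127.0.0.1:8080;

    location /nginx_status {
        stub_status;          # on nginx older than 1.7.5, use: stub_status on;
        access_log  off;
        allow       127.0.0.1;  # restrict to local access only
        deny        all;
    }
}
```

Requesting http://127.0.0.1:8080/nginx_status then prints the active-connection count, the accepts/handled/requests counters, and the reading/writing/waiting gauges.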

To make the statistics easier to analyze, Hyperic extends these to 10 metrics by adding three derived ones, the number of accepts, handled connections, and requests per minute:

accepts per minute, handled per minute, requests per minute
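These derived metrics are just rate computations over successive counter samples. A minimal sketch of the idea (the parsing logic and function names here are my own, not Hyperic's):

```python
def parse_stub_status(text):
    """Extract the accepts/handled/requests counters from stub_status output."""
    for line in text.splitlines():
        parts = line.split()
        # The counter line is three bare integers: "accepts handled requests".
        if len(parts) == 3 and all(p.isdigit() for p in parts):
            accepts, handled, requests = map(int, parts)
            return {"accepts": accepts, "handled": handled, "requests": requests}
    raise ValueError("no counter line found in stub_status output")


def per_minute_rates(sample_a, sample_b, interval_seconds):
    """Derive per-minute rates from two samples taken interval_seconds apart."""
    a = parse_stub_status(sample_a)
    b = parse_stub_status(sample_b)
    return {k: (b[k] - a[k]) * 60.0 / interval_seconds for k in a}


# Two samples one minute apart (format matches stub_status output).
t0 = ("Active connections: 3\n"
      "server accepts handled requests\n"
      " 100 100 250\n"
      "Reading: 0 Writing: 1 Waiting: 2")
t1 = ("Active connections: 4\n"
      "server accepts handled requests\n"
      " 160 160 400\n"
      "Reading: 0 Writing: 1 Waiting: 3")

print(per_minute_rates(t0, t1, 60))
# -> {'accepts': 60.0, 'handled': 60.0, 'requests': 150.0}
```

Because the counters are cumulative since the last restart, any monitoring tool must difference two readings before reporting a rate.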

From the operating-system point of view, monitoring of the Nginx process should cover its CPU utilization and memory consumption, along with overall CPU utilization, swap usage, and other system-level metrics.

If you are running in a virtual machine, also watch the operating system's steal time (st) metric to determine whether the host is oversold or overloaded.

Oversold: a host running too many VPS instances. If all the VPS instances demand their full resources at the same time, the server becomes inaccessible; in severe cases this can paralyze hardware and lose data. Overselling is hard to detect directly, but it can sometimes be seen through the st metric.

Reference resources:

http://tweaked.io/guide/nginx-proxying/

Network management software Hyperic HQ monitoring and management Nginx

Hyperic Monitoring Nginx1.6 configuration process


