NGINX Reverse Proxy


Address: http://nginx.com/resources/admin-guide/reverse-proxy/



This section describes the basic configuration of a proxy server. You will learn how to:

  • Pass a request from NGINX to proxied servers over different protocols
  • Modify client request headers that are sent to the proxied server
  • Configure buffering of responses coming from the proxied servers

    Proxying is typically used to distribute the load among several servers, seamlessly show content from different websites, or pass requests for processing to application servers over protocols other than HTTP.


      Passing a Request to a Proxied Server

When NGINX proxies a request, it sends the request to a specified proxied server, fetches the response, and sends it back to the client. It is possible to proxy requests to an HTTP server (another NGINX server or any other server) or a non-HTTP server (which can run an application developed with a specific framework, such as PHP or Python) using a specified protocol. Supported protocols include FastCGI, uwsgi, SCGI, and memcached.


To pass a request to an HTTP proxied server, the proxy_pass directive is specified inside a location block. For example:


location /some/path/ {
    proxy_pass http://www.example.com/link/;
}

This example configuration results in passing all requests processed in this location to the proxied server at the specified address. This address can be specified as a domain name or an IP address. The address may also include a port:


location ~ \.php {
    proxy_pass http://127.0.0.1:8000;
}

Note that in the first example above, the address of the proxied server is followed by a URI, /link/. If the URI is specified along with the address, it replaces the part of the request URI that matches the location parameter. For example, here a request with the /some/path/page.html URI will be proxied to http://www.example.com/link/page.html. If the address is specified without a URI, or it is not possible to determine the part of the URI to be replaced, the full request URI is passed (possibly modified).

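A short sketch contrasting the two cases may help; the /app/ location and the backend address 127.0.0.1:8080 below are illustrative assumptions, not part of the original configuration:

location /some/path/ {
    # URI after the address: /some/path/page.html is proxied to
    # http://www.example.com/link/page.html
    proxy_pass http://www.example.com/link/;
}

location /app/ {
    # No URI after the address: the full request URI is passed unchanged,
    # e.g. /app/page.html is proxied to http://127.0.0.1:8080/app/page.html
    proxy_pass http://127.0.0.1:8080;
}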

To pass a request to a non-HTTP proxied server, the appropriate **_pass directive should be used (see the sketch after this list):

  • fastcgi_pass passes a request to a FastCGI server
  • uwsgi_pass passes a request to a uwsgi server
  • scgi_pass passes a request to an SCGI server
  • memcached_pass passes a request to a memcached server

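As a minimal sketch, assuming a FastCGI backend such as PHP-FPM listening on 127.0.0.1:9000 (the address and the location pattern are illustrative assumptions), a FastCGI proxy location might look like this:

location ~ \.php$ {
    # Assumed FastCGI backend (e.g. PHP-FPM) on 127.0.0.1:9000
    include       fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass  127.0.0.1:9000;
}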

Note that in these cases, the rules for specifying addresses may be different. You may also need to pass additional parameters to the server (see the reference documentation for more detail).


The proxy_pass directive can also point to a named group of servers. In this case, requests are distributed among the servers in the group according to the specified method.

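A minimal sketch of this, assuming a hypothetical group named backend with two illustrative server names and the least_conn balancing method:

upstream backend {
    least_conn;                    # balancing method (illustrative choice)
    server backend1.example.com;   # assumed server names
    server backend2.example.com;
}

server {
    location /some/path/ {
        proxy_pass http://backend;   # requests are distributed within the group
    }
}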

          Passing Request Headers

By default, NGINX redefines two header fields in proxied requests, "Host" and "Connection", and eliminates the header fields whose values are empty strings. "Host" is set to the $proxy_host variable, and "Connection" is set to close.


To change these settings, as well as modify other header fields, use the proxy_set_header directive. This directive can be specified in a location block or higher. It can also be specified in a particular server context or in the http block. For example:


location /some/path/ {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://localhost:8000;
}

In this configuration, the "Host" field is set to the $host variable.


          To prevent a header field from being passed to the proxied server, set it to an empty string as follows:


location /some/path/ {
    proxy_set_header Accept-Encoding "";
    proxy_pass http://localhost:8000;
}


Configuring Buffers

By default NGINX buffers responses from proxied servers. A response is stored in the internal buffers and is not sent to the client until the whole response is received. Buffering helps to optimize performance with slow clients, which can waste proxied server time if the response is passed from NGINX to the client synchronously. However, when buffering is enabled, NGINX allows the proxied server to process responses quickly, while NGINX stores the responses for as much time as the clients need to download them.


          The directive that is responsible for enabling and disabling buffering is proxy_buffering. By default it is set to on and buffering is enabled.


The proxy_buffers directive controls the size and the number of buffers allocated for a request. The first part of the response from a proxied server is stored in a separate buffer, the size of which is set with the proxy_buffer_size directive. This part usually contains a comparatively small response header and can be made smaller than the buffers for the rest of the response.


          In the following example, the default number of buffers is increased and the size of the buffer for the first portion of the response is made smaller than the default.


location /some/path/ {
    proxy_buffers 16 4k;
    proxy_buffer_size 2k;
    proxy_pass http://localhost:8000;
}

If buffering is disabled, the response is sent to the client synchronously while it is being received from the proxied server. This behavior may be desirable for fast interactive clients that need to start receiving the response as soon as possible.


To disable buffering in a specific location, place the proxy_buffering directive in that location with the off parameter, as follows:


location /some/path/ {
    proxy_buffering off;
    proxy_pass http://localhost:8000;
}

In this case, NGINX uses only the buffer configured by proxy_buffer_size to store the current part of a response.

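A small sketch showing the single buffer that remains in use when buffering is off; the 4k size is an illustrative assumption:

location /some/path/ {
    proxy_buffering   off;
    proxy_buffer_size 4k;    # the only buffer used while buffering is off
    proxy_pass        http://localhost:8000;
}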




