Architect Journal -- Nginx reverse proxy, static/dynamic separation, and load balancing

Reverse Proxy

A reverse proxy can be understood as an intermediary between clients and back-end servers: clients send their requests to the proxy, and the proxy distributes them to the servers.
Nginx is often used as a reverse proxy in front of back-end servers, which makes it easy to implement static/dynamic separation and load balancing, greatly improving the overall processing capacity of the service.

Common configurations

location / {
    proxy_pass http://127.0.0.1:8080;
}

Or

upstream abc.com {
    server 127.0.0.1:8080 weight=5;
}
location / {
    proxy_pass http://abc.com;
}

The second configuration uses an upstream block, which provides load balancing and lays the groundwork for static/dynamic separation.
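In practice the proxied location is usually also configured to pass the original host name and client address to the back end. A minimal sketch, reusing the upstream from the second configuration (the proxy_set_header lines are standard nginx directives but are not part of the original examples):

location / {
    # forward the original Host header and client IP to the back end
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://abc.com;
}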

Static/dynamic separation: dynamic content such as PHP or JSP pages must be processed by an application server, so those requests are forwarded to a container such as Tomcat; static content such as images and HTML files is served directly from disk.
For Nginx, static/dynamic separation means that, while reverse proxying, requests for static resources are read directly from a path published by Nginx and never reach the back-end servers.
Note: in this case you must make sure the static files are consistent across the back-end servers, either by synchronizing them automatically between servers or by using shared storage such as NFS or a distributed file system like MFS.

For example:

location ~ .*\.(jpg|jpeg|gif|png|swf|ico)$ {
    root /usr/common/tomcat/webapps;
}

Requests for files with the jpg|jpeg|gif|png|swf|ico suffixes are served directly from this root path and are not forwarded to the load-balanced back end.
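Putting the pieces together, a minimal sketch of a server block that combines static/dynamic separation with the load-balanced upstream (the upstream name abc.com and the paths come from the examples above; the second server 127.0.0.1:8081 and the overall layout are assumptions for illustration):

upstream abc.com {
    server 127.0.0.1:8080 weight=5;
    server 127.0.0.1:8081;
}

server {
    listen 80;

    # static resources are read directly from the path published by Nginx
    location ~ .*\.(jpg|jpeg|gif|png|swf|ico)$ {
        root /usr/common/tomcat/webapps;
    }

    # everything else is dynamic and goes to the back-end containers
    location / {
        proxy_pass http://abc.com;
    }
}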

Nginx implements simple load balancing through the upstream module; its commonly used directives are described below.

ip_hash

Syntax: ip_hash
Default value: none
Context: upstream
This directive distributes requests based on the IP address of the client connection.
The hash key is the client's class C network address. This ensures that requests from a given client are, with high probability, always forwarded to the same server; if that server is unavailable, the request is forwarded to another server.
Weights (weight) cannot be combined with ip_hash to distribute connections. If one of the servers is temporarily unavailable, you must mark it as down, as in the following example:

upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com down;
    server backend4.example.com;
}

Note: even requests that come from the same machine are not guaranteed to reach the same server; the client may sit behind a proxy, so its apparent IP can change, or the server it hashes to may go down.

server

Syntax: server name [parameters]
Default value: none
Context: upstream
Specifies the name of a back-end server and its parameters. The name can be a domain name, an IP address with an optional port, or a UNIX socket; a domain name is resolved to an IP address first.
weight=number: sets the server weight; the default is 1.
max_fails=number: the maximum number of failed requests, within the period set by fail_timeout, before the server is considered unavailable. The default is 1, and 0 disables the check. What counts as a failure is defined by proxy_next_upstream or fastcgi_next_upstream (a 404 does not increase max_fails).
fail_timeout=time: the window within which max_fails failures may occur, and also how long the server is considered unavailable before the next connection attempt; the default is 10 seconds. fail_timeout is not directly related to front-end response time, which is controlled with proxy_connect_timeout and proxy_read_timeout.
down: marks the server as offline; typically used together with ip_hash.
backup: (0.6.7 or later) use this server only when all non-backup servers are down or busy (cannot be used with the ip_hash directive).
Sample Configuration

upstream backend {
    server backend1.example.com      weight=5;
    server 127.0.0.1:8080            max_fails=3 fail_timeout=30s;
    server unix:/tmp/backend3;
}
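For completeness, a minimal sketch showing the down and backup parameters described above (the host names are hypothetical):

upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com max_fails=3 fail_timeout=30s;
    # temporarily removed from rotation
    server backend3.example.com down;
    # only used when all non-backup servers are down or busy
    server backup1.example.com  backup;
}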

Note: if only one upstream server is defined, Nginx sets a built-in variable to 1, which means the max_fails and fail_timeout parameters are not processed.
Result: if Nginx cannot connect to that upstream server, the request is lost.
Workaround: use more than one upstream server.

upstream

Syntax: upstream name { ... }
Default value: none
Context: http
This directive defines a group of servers that can be referenced as a single entity by the proxy_pass and fastcgi_pass directives. The servers may listen on different ports, and servers listening on TCP and UNIX sockets can be mixed in the same group.
Each server can be given a different weight; the default is 1.
Sample Configuration

upstream backend {
    server backend1.example.com weight=5;
    server 127.0.0.1:8080       max_fails=3 fail_timeout=30s;
    server unix:/tmp/backend3;
}

Requests are distributed to the back-end servers in round-robin fashion, with the weights taken into account.
In the example above, out of every 7 requests, 5 are sent to backend1.example.com and the other two servers receive one each. If a server is unavailable, the request is forwarded to the next server, and so on until every server has been tried. If none of the servers passes the check, the client is returned the result from the last server contacted.

Geo and GeoIP modules

These two modules are mainly used for global load balancing: requests from different clients can be routed to different servers, as shown below.

http {
    geo $geo {
        default           default;
        202.103.10.1/24   a;
        179.9.0.3/24      b;
    }

    upstream default.server {
        server 192.168.0.100;
    }
    upstream a.server {
        server 192.168.0.101;
    }
    upstream b.server {
        server 192.168.0.102;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://$geo.server$request_uri;
        }
    }
}
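Here the geo block maps the client address to the value of $geo (default, a, or b), and proxy_pass concatenates that value with ".server" to pick the matching upstream group: a client in 202.103.10.1/24, for example, is proxied to http://a.server.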
