When reprinting, please credit "LIU Da's CSDN blog": http://blog.csdn.net/poechant
For more articles, see the CSDN column "Nginx High-Performance Web Server" or
the backend server development series, "Practical Nginx High-Performance Web Server".
Nginx's HttpUpstreamModule provides simple load balancing across backend servers. The simplest upstream configuration looks like this:
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

server {
    location / {
        proxy_pass http://backend;
    }
}
1. Backend Servers
In an upstream block you can specify backend servers by IP address and port, by domain name, or by UNIX socket. If a domain name resolves to multiple addresses, each of those addresses is used as a backend. For example:
upstream backend {
    server blog.csdn.net/poechant;
    server 145.223.156.89:8090;
    server unix:/tmp/backend3;
}
The first backend is specified by domain name, the second by IP address and port, and the third by UNIX socket.
2. Load Balancing Policy
Nginx provides three load balancing methods: round robin, client IP hash, and specified weights.
By default, nginx uses round robin as the load balancing policy, but that is not always what you want. For example, if a series of requests within some period all come from the same user, Michael, his first request may go to backend2, the next to backend3, then backend1, backend2, backend3, and so on.
In many application scenarios this is not efficient. For such cases, nginx offers a hash based on the client IP addresses of Michael, Jason, David, and all the other assorted users, so that every request from a given client lands on the same backend server. (As an aside, I recently discovered that many websites copy this post without keeping a link to the original, so I am inserting the address "http://blog.csdn.net/poechant" here.) The usage is as follows:
upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}
With this policy, the key of the hash is the class C network portion of the client's IP address, i.e. its first three octets (class C addresses range from 192.0.0.0 to 223.255.255.255; the first three octets identify the network and the fourth identifies the host). For example, clients at 192.168.5.21 and 192.168.5.99 share the same key and are routed to the same backend, while 192.168.6.7 may land on a different one. Every request from a given client is therefore sent to the same backend. Of course, if that backend is unavailable, the request is passed on to another backend.
Next comes a keyword often used together with ip_hash: down. When a server is temporarily out of service, you can mark it with down; a server marked this way will not receive any requests. The details are as follows:
upstream backend {
    server blog.csdn.net/poechant down;
    server 145.223.156.89:8090;
    server unix:/tmp/backend3;
}
You can also assign each server a weight with the weight parameter, as follows:
upstream backend {
    server backend1.example.com;
    server 123.321.123.321:456 weight=4;
}
The default weight is 1. In the example above, the first server keeps the default weight of 1 and the second has a weight of 4, so the first server receives about 20% of the requests and the second about 80%. With weights of 1, 2 and 5, for instance, the servers would receive 1/8, 2/8 and 5/8 of the requests respectively. Note that weight and ip_hash cannot be used at the same time; they are different strategies and conflict with each other.
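As a minimal sketch of that 1/2/5 split (the hostnames here are placeholders of mine, not from the original example):

upstream backend {
    server app1.example.com weight=1;   # receives about 1/8 of the requests
    server app2.example.com weight=2;   # receives about 2/8 of the requests
    server app3.example.com weight=5;   # receives about 5/8 of the requests
}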
3. Retry Policy
For each backend you can specify the maximum number of failed attempts and the period during which the server is then considered unavailable, using the max_fails and fail_timeout parameters. As follows:
upstream backend {
    server backend1.example.com weight=5;
    server 54.244.56.3:8081 max_fails=3 fail_timeout=30s;
}
In the example above, the maximum number of failures is 3, that is, at most 3 attempts, and the timeout is 30 seconds. The default value of max_fails is 1, and that of fail_timeout is 10s. What counts as a failed attempt is defined by the proxy_next_upstream or fastcgi_next_upstream directives. In addition, you can use proxy_connect_timeout and proxy_read_timeout to control how long nginx waits for the upstream to respond.
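As a minimal sketch of how these directives fit together (the specific values and hostnames below are illustrative assumptions, not recommendations from the original post):

upstream backend {
    server backend1.example.com max_fails=3 fail_timeout=30s;
    server backend2.example.com max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        # conditions under which nginx gives up on one backend and tries the next
        proxy_next_upstream error timeout http_500 http_502 http_503;
        # maximum time to establish a connection to the upstream
        proxy_connect_timeout 5s;
        # maximum time between two successive reads from the upstream
        proxy_read_timeout 30s;
    }
}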
Note that when there is only one server in the upstream block, the max_fails and fail_timeout parameters may not work at all. The problem is that nginx will only try the request against that upstream once; if it fails, the request is simply given up :( The workaround is to list your poor, lone server several times in the upstream block, as shown below:
upstream backend {
    server backend.example.com max_fails=3 fail_timeout=30s;
    server backend.example.com max_fails=3 fail_timeout=30s;
    server backend.example.com max_fails=3 fail_timeout=30s;
}
4. Backup Policy
You can use the "backup" keyword from nginx 0.6.7. When all non-Backup machines are down or busy, only backup machines marked by backup are used. It must be noted that backup cannot be used with the ip_hash keyword. Example:
upstream backend {
    server backend1.example.com;
    server backend2.example.com backup;
    server backend3.example.com;
}