Testing the load-balancing and reverse-proxy parameters of the nginx HTTP server


For more information, see http://www.cnblogs.com/xiaochaohuashengmi/archive/2011/03/15/1984976.html.

(1) Clarify the relationship between the max_fails and fail_timeout parameters, the role they play in checking the health of backend servers, and whether their values directly or indirectly affect other directives in the HTTP Proxy module.

(2) Test the behavior of the proxy_next_upstream, proxy_connect_timeout, proxy_read_timeout, and proxy_send_timeout directives in the HTTP Proxy module, their impact on nginx performance, and how responses from the backend servers are handled.

Test Method

This article does not use stress testing. All tests are implemented by manual refresh in the browser. Backend servers are implemented using simple PHP programs.

Test Environment

Nginx Load Balancing/Reverse Proxy Server
System: centos 5.4 64bit
Nginx: 0.7.65
IP: 192.168.108.10

Backend Web Server
System: centos 5.4 64bit
Web Environment: Apache + PHP
Web-1 IP: 192.168.108.163
Web-2 IP: 192.168.108.164

This test mainly targets the HTTP Upstream and HTTP Proxy modules. The parameters of these two modules are first initialized as follows, and are then modified for each individual test:

...
upstream test {
    server 192.168.108.163;
    server 192.168.108.164:80;
}

server {
    listen 80;
    server_name .test.com;
    index index.php index.html index.htm;

    location / {
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504 http_404;

        proxy_connect_timeout 10s;
        proxy_read_timeout 2s;
        # proxy_send_timeout 10s;
        proxy_pass http://test;
    }
}

...

First, let's look at the parameters that follow the server directive. The following excerpt is taken from the nginx wiki.

syntax: server name [parameters]

The parameters include:

· weight = NUMBER - sets the weight of the server. The default is 1.

· max_fails = NUMBER - the number of failed requests to the server, within the period set by fail_timeout, after which the server is considered inoperative. The default is 1; setting it to 0 disables the check. What counts as a failure is defined by proxy_next_upstream or fastcgi_next_upstream (note that http_404 errors do not count towards max_fails).

· fail_timeout = TIME - the period within which max_fails failures must occur for the server to be considered inoperative, and likewise the time for which the server is considered unavailable (before the next connection attempt is made). The default is 10 seconds. fail_timeout has no direct relation to the frontend response time; that is controlled with proxy_connect_timeout and proxy_read_timeout.

· down - marks the server as permanently offline; usually used with ip_hash.

· backup - (0.6.7 or later) the server is used only if all non-backup servers are down or busy.

Understanding the max_fails parameter: per the explanation above, the default max_fails is 1 and the default fail_timeout is 10 seconds. In other words, by default a backend server is allowed one failure within 10 seconds; once the failures exceed that count, the server is considered faulty and marked unavailable, and nginx waits 10 seconds before sending requests to it again. This is effectively a passive health check on the backend servers. If, however, max_fails is set to 0, no health check is performed on that server at all, and the fail_timeout parameter becomes meaningless. What happens then if a backend server has a problem? As noted above, proxy_connect_timeout and proxy_read_timeout can be used for control.
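As a sketch of the two styles described above (the upstream name and addresses follow this article's test setup; the values are illustrative only), an upstream block with explicit passive health-check parameters might look like this:

```nginx
upstream test {
    # Mark this server unavailable for 30s after 3 failures within 30s.
    server 192.168.108.163 max_fails=3 fail_timeout=30s;
    # max_fails=0 disables failure accounting for this server entirely,
    # so fail_timeout has no effect on it.
    server 192.168.108.164 max_fails=0;
}
```

With max_fails=0, error handling for that server is then governed only by the proxy_*_timeout and proxy_next_upstream directives discussed below.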

The following describes the directives in the HTTP Proxy module:

proxy_next_upstream
syntax: proxy_next_upstream [error | timeout | invalid_header | http_500 | http_502 | http_503 | http_504 | http_404 | off]
Determines in which cases the request is passed to the next server. A request is passed on only when nothing has yet been transmitted to the client.

proxy_connect_timeout
The timeout for establishing a connection with the backend server, i.e., for initiating the handshake and waiting for a response to it.

proxy_read_timeout
After the connection succeeds, the time to wait for a response from the backend server; by this point the request has entered the backend's queue for processing (roughly, the time the backend server is given to process the request).

proxy_send_timeout
The timeout for transmitting the request to the backend server; the request data must be sent to the backend within this time.

proxy_pass
Sets the address of the proxied server and the URI to which it is mapped.
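Taken together, the directives above might be combined in a location block like the following sketch (the upstream name test matches the configuration used in this article; the timeout values are illustrative only):

```nginx
location / {
    # Fail over to the next upstream server on connection errors,
    # timeouts, and 5xx responses from the backend.
    proxy_next_upstream error timeout http_500 http_502 http_503 http_504;

    proxy_connect_timeout 10s;  # TCP connect / handshake with the backend
    proxy_send_timeout    10s;  # transmitting the request to the backend
    proxy_read_timeout    30s;  # waiting for the backend to produce a response

    proxy_pass http://test;
}
```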

Start testing

Case 1: the backend program's execution time exceeds or equals the proxy_read_timeout value, and max_fails=0 disables the backend server health check.

Modified nginx configuration:

server 192.168.108.163 max_fails=0;
server 192.168.108.164 max_fails=0;
proxy_next_upstream error timeout;
proxy_read_timeout 2s;
 
Backend web servers:

Web1 test.php:

<?php
header('RS: web1');
$t = 2;
sleep($t);
echo "Sleep {$t}s<br>";
echo "Web-1<br>";
?>

Web2 test.php:

<?php
header('RS: web2');
$t = 5;
sleep($t);
echo "Sleep {$t}s<br>";
echo "Web-2<br>";
?>
Note:

I have two backend web servers, each of whose home page is a test.php program that sleeps for 2 and 5 seconds respectively, i.e., equal to and beyond proxy_read_timeout. max_fails=0 disables the backend server health check. proxy_next_upstream error timeout means that on an error or timeout nginx switches to the next backend server. With this configuration, let's use curl to send requests to nginx and see how it responds.

Test start:

(1) curl -I -w %{time_total}:%{time_connect}:%{time_starttransfer} www.test.com/test.php
HTTP/1.1 504 Gateway Time-out
Server: nginx/0.7.65
Date: Tue, 18 May 2010 02:43:08 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 183
Connection: keep-alive

4.008:0.002:4.007

Note:

Three consecutive requests return the same result: a 504 Gateway Time-out error, which occurs only when there is a problem with the backend servers. Clearly the proxy_read_timeout here is too short: before the backend program has finished executing, nginx gives up waiting and hands the request to the other server defined in upstream. When that server likewise fails to respond within the same 2 seconds, nginx has no servers left and can only return 504 Gateway Time-out. That is why time_total is 4 seconds (2 seconds per server). Checking the access logs of the two web servers shows one access record each with return code 200: nginx did pass the request on, but left without waiting for execution to complete. With three servers, time_total would simply be 6 seconds, as nginx would try all three in turn.

-----------------------------------------------------------------------------------------------------------------------

OK. Having confirmed that my proxy_read_timeout is too short, I set it to 3 seconds and request again via curl:

(2) curl -I -w %{time_total}:%{time_connect}:%{time_starttransfer} www.test.com/test.php
HTTP/1.1 200 OK
Server: nginx/0.7.65
Date: Tue, 18 May 2010 03:07:58 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding
X-Powered-By: PHP/5.1.6
RS: web1

5.042:0.005:5.042

Note: three consecutive requests give the same result. RS: web1 means all three requests were ultimately served by web1, yet web1 itself needs only 2 seconds to return a result. So why, through the nginx proxy, does the response always take program execution time + proxy_read_timeout?

-----------------------------------------------------------------------------------------------------------------------

Set proxy_read_timeout to 4s:

(3) curl -I -w %{time_total}:%{time_connect}:%{time_starttransfer} www.test.com/test.php
HTTP/1.1 200 OK
Server: nginx/0.7.65
Date: Tue, 18 May 2010 03:15:25 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding
X-Powered-By: PHP/5.1.6
RS: web1

6.004:0.000:6.004

The results are the same after three requests. This run takes longer, but it is indeed execution time + proxy_read_timeout. Why 6 seconds, though? By the weights defined in upstream the requests should be distributed evenly, so at least some should take only 2 seconds. On analysis: whichever server returned the previous response to the user, the next request is assigned to the other one. So each request first goes to web2, which sleeps 5 seconds; after proxy_read_timeout (4s) expires, nginx fails over to web1, and the result returned is web1's response: the 4-second wait + web1's 2-second execution = 6 seconds. And so on: nginx naturally assigns the next request to web2 first again... With more backend web servers, you can predict which server the next request will try first from whichever server answered the current request, proceeding in the order defined in upstream (weights being equal).

Conclusion:

(1) In the three tests above, proxy_read_timeout was set to 2s, 3s, and 4s respectively, with the results explained above. Because the backend server health check was disabled (max_fails=0), the only basis for judging a backend server's condition is the proxy_read_timeout parameter. If it is set too small while backend programs regularly run longer than it, nginx wastes the full timeout on every attempt and becomes very inefficient.

(2) In the tests above, the backend servers were healthy but slow to execute, and nginx chose the next server based on the values of proxy_read_timeout and proxy_next_upstream. What if a backend server returns an error outright? If that error code is listed in proxy_next_upstream, nginx moves on to the next server; otherwise the error response is returned to nginx as-is and ultimately presented to the user.
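As a sketch of conclusion (2), which cases trigger failover is controlled entirely by the proxy_next_upstream list (the two variants below are illustrative):

```nginx
# Fail over only on connection errors and timeouts; an HTTP 500 from
# the backend is passed through to the client unchanged.
proxy_next_upstream error timeout;

# Alternatively, also treat backend 5xx responses as failures,
# so nginx retries the request on the next upstream server instead.
# proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
```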

Case 2: enable the backend server health check; the test program's execution time exceeds or equals proxy_read_timeout, or the backend server returns an error directly.

Modified nginx configuration:

server 192.168.108.163 max_fails=1;
server 192.168.108.164 max_fails=1;
proxy_next_upstream error timeout http_500 http_502 http_504;
proxy_read_timeout 2s;
 
Backend web servers:

Web1 test.php:

<?php
header('RS: web1');
$t = 2;
sleep($t);
echo "Sleep {$t}s<br>";
echo "Web-1<br>";
?>

Web2 test.php:

<?php
header('RS: web2');
header('HTTP/1.1 500 Internal Server Error');
# $t = 5;
# sleep($t);
# echo "Sleep {$t}s<br>";  # also commented out: $t is no longer defined
echo "Web-2<br>";
?>
Note:

The backend server health check is enabled.
proxy_read_timeout is 2s (it will be changed during the tests below).
The web1 program still sleeps 2s.
The web2 program is modified to return a 500 error directly.

Test start:

(1) The results of three consecutive tests are as follows:

curl -I -w %{time_total}:%{time_connect}:%{time_starttransfer} www.test.com/test.php
HTTP/1.1 504 Gateway Time-out
Server: nginx/0.7.65
Date: Tue, 18 May 2010 07:01:48 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 183
Connection: keep-alive

2.005:0.001:2.005

curl -I -w %{time_total}:%{time_connect}:%{time_starttransfer} www.test.com/test.php
HTTP/1.1 502 Bad Gateway
Server: nginx/0.7.65
Date: Tue, 18 May 2010 07:01:50 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 173
Connection: keep-alive

0.001:0.001:0.001

curl -I -w %{time_total}:%{time_connect}:%{time_starttransfer} www.test.com/test.php
HTTP/1.1 504 Gateway Time-out
Server: nginx/0.7.65
Date: Tue, 18 May 2010 07:01:57 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 183
Connection: keep-alive

2.005:0.001:2.005

Note:

The 1st request takes 2 seconds: web1 times out, web2 returns a 500 error, and upstream has no further servers, so nginx returns 504 directly and marks both web1 and web2 unavailable. The access logs of the two backend web servers both show the proxied requests from nginx.
The 2nd request takes almost no time and returns a 502 error, meaning no backend server is available to accept the request. The backend access logs show no new entries: both servers have been marked unavailable by nginx, the request is not forwarded to the backend, and a 502 error is returned to the user directly.
The 3rd request behaves the same as the 1st.

(2) Modify proxy_read_timeout to 3s. Results of six consecutive accesses, and the access logs of the two web servers:
curl -I -w %{time_total}:%{time_connect}:%{time_starttransfer} www.test.com/test.php
HTTP/1.1 200 OK
Server: nginx/0.7.65
Date: Tue, 18 May 2010 07:30:15 GMT
Content-Type: text/html; charset=UTF-8
Connection: keep-alive
Vary: Accept-Encoding
X-Powered-By: PHP/5.1.6
RS: web1

2.003:0.001:2.002

Access log

Web1
[18/May/2010:15:30:00
[18/May/2010:15:30:03
[18/May/2010:15:30:05
[18/May/2010:15:30:08
[18/May/2010:15:30:11
[18/May/2010:15:30:13

Web2

[18/May/2010:15:30:00
[18/May/2010:15:30:11

Note:

The access logs show that:
The 1st request goes to web2. Because web2 returns a 500 error, the request is forwarded to web1 and web2 is marked unavailable.
The 2nd through 4th requests all go to web1. By the time the 5th request is made, more than 8 seconds have passed since the 1st request,
exceeding the fail_timeout default of 10 seconds counted from when web2 was marked down, so web2's unavailability period has expired and the 5th request behaves exactly like the 1st.

Conclusion:

(1) The proxy_next_upstream parameter is very useful and can shield users from many errors.
(2) On a busy, large system, we recommend setting max_fails to 3; with only a few backend servers, keep the default.
(3) proxy_read_timeout depends on your own application: it should be neither too large nor too small. For a PHP program, refer to the max_execution_time value in php.ini.
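Combining these conclusions into one configuration might look like the following sketch (the upstream name and addresses follow this article's test setup; the timeout values are illustrative assumptions, not measured recommendations):

```nginx
upstream test {
    # Conclusion (2): allow 3 failures per fail_timeout window on a busy system.
    server 192.168.108.163 max_fails=3 fail_timeout=10s;
    server 192.168.108.164 max_fails=3 fail_timeout=10s;
}

server {
    listen 80;
    server_name .test.com;

    location / {
        # Conclusion (1): fail over on errors, timeouts, and backend 5xx.
        proxy_next_upstream error timeout http_500 http_502 http_503 http_504;

        proxy_connect_timeout 10s;
        # Conclusion (3): keep this slightly above the PHP max_execution_time
        # (default 30s in php.ini) so healthy-but-slow requests can finish.
        proxy_read_timeout 35s;

        proxy_pass http://test;
    }
}
```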
