Nginx proxy function and load balancing

Source: Internet
Author: User
Tags: nginx, load balancing

Preface

Nginx's proxy and load-balancing functions are its most commonly used features. Basic nginx syntax and configuration were covered in the previous article, so this article gets straight to the point: first some configuration notes for the proxy function, then a detailed explanation of load balancing.

Configuration instructions for the Nginx proxy service

1. In the previous article we had the following configuration in the http module: when the proxied server returns a 404 status code, we redirect the 404 page to Baidu.

error_page 404 https://www.baidu.com;    #error page

However, careful readers will find that this configuration by itself does not work.

To make it work, it must be used together with the following configuration:

proxy_intercept_errors on;    #if the proxied server returns a status code of 400 or greater, the error_page configuration takes effect. The default is off.
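Putting the two directives together, a minimal server block might look like the following sketch (the listen port and backend address are illustrative, not from the article). Without proxy_intercept_errors on, nginx passes the backend's own 404 body straight through and error_page never fires:

```nginx
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080;      # hypothetical backend
        proxy_intercept_errors on;             # let nginx handle backend responses >= 400
        error_page 404 https://www.baidu.com;  # redirect 404s to Baidu, as above
    }
}
```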

2. If our proxy should accept only one of the GET and POST request methods:

proxy_method get;    #client request methods supported: post/get

3. Set the supported HTTP protocol version

proxy_http_version 1.0;    #HTTP protocol version nginx uses when proxying: 1.0 or 1.1. The default is 1.0.

4. Suppose your nginx server proxies two web servers with the round-robin load-balancing algorithm, and the web program (IIS) on one machine is shut down, so its site is unreachable. The nginx server will still distribute requests to that unreachable web server, and if the response wait is too long, the client's page sits waiting on the response and the user experience suffers. How do we avoid this? Here is a picture to illustrate the problem.

If web2 in the load-balanced group fails in this way, nginx with an improper configuration will still distribute requests to web2 and wait for web2 to respond; only after the response times out is the request redistributed to web1. The longer that timeout, the longer the user waits.

The following configuration is one of the solutions.

proxy_connect_timeout 1;    #timeout for nginx to establish a connection with the proxied server. The default is 60 seconds.
proxy_read_timeout 1;    #timeout waiting for a response after nginx sends a read request to the proxied server group. The default is 60 seconds.
proxy_send_timeout 1;    #timeout waiting for a response after nginx sends a write request to the proxied server group. The default is 60 seconds.
proxy_ignore_client_abort on;    #whether nginx aborts the request to the proxied server when the client disconnects. The default is off.

5. If you use the upstream directive to configure a group of proxied servers, requests are distributed among them according to the configured load-balancing rules, and you can use the following directive to configure the failure conditions under which a request is passed on to the next server in the group.

proxy_next_upstream timeout;    #for the server group set in the reverse-proxy upstream, the failure conditions returned by the proxied server: error|timeout|invalid_header|http_500|http_502|http_503|http_504|http_404|off

error: an error occurred while establishing a connection to the proxied server, sending it a request, or reading its response.

timeout: a timeout occurred while establishing a connection, sending a request, or reading the response from the proxied server.

invalid_header: the proxied server returned an abnormal response header.

off: do not pass the request on to the next proxied server.

http_404, http_500, ...: the proxied server returned the corresponding status code (404, 500, 502, and so on).
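As a sketch of how proxy_next_upstream fits into a failover setup together with the timeout directives above (the upstream name, addresses, and timeout values here are illustrative, not from the article):

```nginx
upstream backend {
    server 192.168.10.121:3333 max_fails=2 fail_timeout=10s;
    server 192.168.10.122:3333 max_fails=2 fail_timeout=10s;
}
server {
    listen 80;
    location / {
        proxy_pass http://backend;
        # retry the next server in the group on connect errors,
        # timeouts, or a 502 from the current one
        proxy_next_upstream error timeout http_502;
        proxy_connect_timeout 2s;   # fail over quickly on dead backends
    }
}
```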

6. If you want the backend to see the client's real IP over HTTP, rather than the proxy server's IP address, make the following settings.

proxy_set_header Host $host;    #as long as the domain the user typed in the browser is bound to the VIP, with the RS behind that VIP, $host is the domain and port in the accessed URL, e.g. www.taobao.com
proxy_set_header X-Real-IP $remote_addr;    #assign the source IP $remote_addr (from the established HTTP connection) to the X-Real-IP header, so the code can read the source IP from X-Real-IP
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;    #when nginx acts as a proxy, record the list of IPs the request has passed through (the client and each proxy machine), separated by ","; in code, take the first entry of X-Forwarded-For as the source IP

For more on X-Forwarded-For and X-Real-IP, I recommend a fellow blogger's article on the X-Forwarded-For header in HTTP requests; that blogger has a whole series of articles on the HTTP protocol that is worth following.
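As an illustration of how a backend might consume these headers, here is a minimal Python sketch (the helper name and fallback logic are my own, not from the article). Note that X-Forwarded-For is client-controllable, so trust these headers only when your own proxy sets them:

```python
def client_ip(headers):
    """Recover the original client IP from proxy-set headers.

    Prefers X-Real-IP (a single address set via proxy_set_header);
    otherwise takes the first entry of the comma-separated
    X-Forwarded-For list, which is the original source IP.
    """
    real_ip = headers.get("X-Real-IP", "").strip()
    if real_ip:
        return real_ip
    forwarded = headers.get("X-Forwarded-For", "")
    first = forwarded.split(",")[0].strip()
    return first or None

# Example: a request that passed through two proxies
hdrs = {"X-Forwarded-For": "203.0.113.7, 10.0.0.2, 10.0.0.3"}
print(client_ip(hdrs))  # -> 203.0.113.7
```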

7. Below is the proxy-related section of one of my configuration files, for reference only.

include mime.types;    #mapping table of file extensions to file types
default_type application/octet-stream;    #default file type; the default is text/plain
#access_log off;    #disable the service log
log_format myFormat '$remote_addr–$remote_user [$time_local] $request $status $body_bytes_sent $http_referer $http_user_agent $http_x_forwarded_for';    #custom log format
access_log log/access.log myFormat;    #combined is the default log format
sendfile on;    #allow sendfile file transfer; the default is off; can appear in http, server, and location blocks
sendfile_max_chunk 100k;    #each worker process may not transfer more than this per call; the default is 0, i.e. no upper limit
keepalive_timeout 65;    #connection timeout; the default is 75s; can appear in http, server, and location blocks
proxy_connect_timeout 1;    #timeout for nginx to establish a connection with the proxied server; default 60 seconds
proxy_read_timeout 1;    #timeout waiting for a response after nginx sends a read request to the proxied server group; default 60 seconds
proxy_send_timeout 1;    #timeout waiting for a response after nginx sends a write request to the proxied server group; default 60 seconds
proxy_http_version 1.0;    #HTTP protocol version nginx uses when proxying: 1.0 or 1.1; the default is 1.0
#proxy_method get;    #client request methods supported: post/get
proxy_ignore_client_abort on;    #whether nginx aborts the request to the proxied server when the client disconnects; the default is off
proxy_ignore_headers "Expires" "Set-Cookie";    #header fields in the proxied response that nginx does not process; several can be set here, separated by spaces
proxy_intercept_errors on;    #if the proxied server returns a status code of 400 or greater, the error_page configuration takes effect; the default is off
proxy_headers_hash_max_size 1024;    #upper limit on the hash-table capacity for HTTP message headers; the default is 512 characters
proxy_headers_hash_bucket_size 128;    #hash-table bucket size nginx allocates for HTTP message headers; the default is 64 characters
proxy_next_upstream timeout;    #for the server group set in the reverse-proxy upstream, the failure conditions: error|timeout|invalid_header|http_500|http_502|http_503|http_504|http_404|off
#proxy_ssl_session_reuse on;    #the default is on; if "SSL3_GET_FINISHED:digest check failed" appears in the error log, set this directive to off
Nginx Load Balancing detailed

In the previous article I said that nginx has several load-balancing algorithms. In this section I will explain their configuration in detail.

First, the upstream configuration: it defines a group of proxied server addresses, after which the load-balancing algorithm is configured. There are two ways to write the proxied server addresses.

upstream mysvr {
    server 192.168.10.121:3333;
    server 192.168.10.122:3333;
}
server {
    ....
    location ~* ^.+$ {
        proxy_pass http://mysvr;    #forward requests to the server list defined in mysvr
    }
}

upstream mysvr {
    server http://192.168.10.121:3333;
    server http://192.168.10.122:3333;
}
server {
    ....
    location ~* ^.+$ {
        proxy_pass mysvr;    #forward requests to the server list defined in mysvr
    }
}

Now, let's get hands-on.

1. Hot standby: if you have 2 servers, the second server provides service only when the first has an accident. Request-processing order: AAAAAA, then A suddenly goes down, BBBBBBBBBBBBBB ...

upstream mysvr {
    server 127.0.0.1:7878;
    server 192.168.10.121:3333 backup;    #hot standby
}

2. Round robin (polling): nginx's default; each server's weight defaults to 1. Request-processing order: ABABABABAB ....

upstream mysvr {
    server 127.0.0.1:7878;
    server 192.168.10.121:3333;
}

3. Weighted round robin: distributes different numbers of requests to different servers according to the configured weights. If not set, the weight defaults to 1. The request order for the servers below is: ABBABBABBABBABB ....

upstream mysvr {
    server 127.0.0.1:7878 weight=1;
    server 192.168.10.121:3333 weight=2;
}
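To see why the order comes out ABBABB..., here is a small Python sketch of a naive weighted round robin (a simple weight-expansion scheme for illustration only; nginx itself uses a smoothed algorithm that interleaves picks differently):

```python
from itertools import cycle

def weighted_round_robin(servers):
    """Yield backends in proportion to weight by repeating each
    server 'weight' times in the rotation (naive expansion)."""
    expanded = [addr for addr, weight in servers for _ in range(weight)]
    return cycle(expanded)

picks = weighted_round_robin([("127.0.0.1:7878", 1),
                              ("192.168.10.121:3333", 2)])
order = [next(picks) for _ in range(6)]
print(order)  # A, B, B, A, B, B -- matching the ABBABB... pattern above
```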

4. ip_hash: nginx directs requests from the same client IP to the same server.

upstream mysvr {
    server 127.0.0.1:7878;
    server 192.168.10.121:3333;
    ip_hash;
}
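The idea behind ip_hash can be sketched in a few lines of Python (a simplified model of my own: nginx actually hashes only the first three octets of an IPv4 address, so clients in the same /24 map to the same backend; the hash function here is just an example):

```python
import hashlib

SERVERS = ["127.0.0.1:7878", "192.168.10.121:3333"]

def pick_server(client_ip, servers=SERVERS):
    """Map a client IP to a backend: the same IP always lands on
    the same server, which keeps sessions sticky."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# Repeated requests from one client always hit the same backend
assert pick_server("203.0.113.7") == pick_server("203.0.113.7")
```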

5. If the above 4 balancing algorithms are still not very clear to you, please look at the picture accompanying this section; it may make them easier to understand.

Do you feel that nginx load-balancer configuration is especially simple and powerful? But it's not over yet; there is a bonus below, so read on.

Here is an explanation of several state parameters for the nginx load-balancer configuration.

    • down, indicates that the current server temporarily does not participate in load balancing.

    • backup, a reserved backup machine. It receives requests only when all the other non-backup machines fail or are busy, so it is under the lightest load.

    • max_fails, the number of failed requests allowed; the default is 1. When the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned.

    • fail_timeout, the time to pause the service after max_fails failures. max_fails is used together with fail_timeout.

upstream mysvr {
    server 127.0.0.1:7878 weight=2 max_fails=2 fail_timeout=2;
    server 192.168.10.121:3333 weight=1 max_fails=2 fail_timeout=1;
}

That about covers nginx's built-in load-balancing algorithms. If you would like a more in-depth understanding of nginx load balancing, there are third-party modules listed by the nginx project that you can look into.

Summary

If you use these techniques during development, or run into problems when trying to use them, you are welcome to join the group in the upper-left corner so we can discuss and study together. To be continued.

