Nginx proxy function and load balancing explained in detail

Preface

Nginx's proxy and load-balancing features are its most commonly used. Basic Nginx syntax and configuration were covered in the previous article, so this article gets straight to the point: first the configuration of the proxy feature, then a detailed explanation of load balancing.

Configuration instructions for the Nginx proxy service

1. In the previous article we had the following configuration in the http block: when the proxied server returns a status code of 404, the 404 page is redirected to Baidu.

error_page 404 https://www.baidu.com;    # error page

However, careful readers will notice that this configuration alone does not take effect.

To make it work, it has to be combined with the following directive:

proxy_intercept_errors on;    # if the proxied server returns a status code of 400 or greater, the error_page setting above takes effect; the default is off
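To make item 1 concrete, here is a minimal sketch of the two directives working together in a proxying server block (the upstream name mysvr is illustrative and assumed to be defined elsewhere):

    server {
        listen 80;
        proxy_intercept_errors on;              # let Nginx act on backend status codes of 400 and above
        error_page 404 https://www.baidu.com;   # the intercepted 404 is then redirected to Baidu

        location / {
            proxy_pass http://mysvr;            # mysvr: an upstream group defined elsewhere (illustrative)
        }
    }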

2. If you want the requests forwarded by the proxy to use only one of the GET or POST methods:

proxy_method get;    # the method used for requests forwarded to the proxied server; POST or GET

3. Set the HTTP protocol version used for proxying:

proxy_http_version 1.0;    # HTTP protocol version Nginx uses for its proxy service, 1.0 or 1.1; the default is 1.0
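Version 1.1 mainly matters if you want keepalive connections to the backend; a minimal sketch, with an illustrative upstream name and address:

    upstream mysvr {
        server 192.168.10.121:3333;
        keepalive 16;                          # keep up to 16 idle connections to the backend open
    }
    server {
        location / {
            proxy_http_version 1.1;            # keepalive to the upstream requires HTTP/1.1
            proxy_set_header Connection "";    # clear the Connection header so "close" is not forwarded
            proxy_pass http://mysvr;
        }
    }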

4. Suppose your Nginx server proxies two web servers and the load-balancing algorithm is round robin. When IIS on one of the web machines is shut down, i.e. that site becomes unreachable, Nginx will still distribute requests to the unreachable web server. If the response timeout is too long, the client's page keeps waiting for a response and the user experience suffers. How do we avoid this situation? Here I will illustrate the problem with a picture.

If this happens during load balancing, Nginx first distributes a request to web1, but with improper configuration it will still distribute the next request to web2 and wait for web2's response until the response times out, and only then is the request passed from web2 over to web1. If that timeout is long, the user has to wait that much longer.

The following configuration is one of the solutions.

proxy_connect_timeout 1;    # timeout for Nginx to establish a connection with the proxied server; default 60 seconds
proxy_read_timeout 1;    # timeout waiting for a response after Nginx sends a read request to the proxied server group; default 60 seconds
proxy_send_timeout 1;    # timeout waiting for a response after Nginx sends a write request to the proxied server group; default 60 seconds
proxy_ignore_client_abort on;    # whether Nginx aborts the request to the proxied server when the client disconnects; default off
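Putting item 4 together, a minimal sketch of the scenario above: two backends behind the default round robin, with short timeouts so that an unreachable server is given up on quickly (the addresses reuse the examples from this article):

    upstream mysvr {
        server 192.168.10.121:3333;    # web1
        server 192.168.10.122:3333;    # web2, possibly down
    }
    server {
        location / {
            proxy_connect_timeout 1;              # give up on an unreachable backend after 1 second
            proxy_read_timeout    1;
            proxy_send_timeout    1;
            proxy_next_upstream   error timeout;  # on an error or timeout, try the other server
            proxy_pass http://mysvr;
        }
    }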

5. If a group of servers is configured as proxied servers with the upstream directive, access to them follows the configured load-balancing rules, and the following directive lets you configure which exceptional conditions cause the request to be passed on to the next server in the group (a short sketch follows the condition list below).

proxy_next_upstream timeout;    # for the server group defined in the reverse-proxy upstream block, the conditions returned by the proxied server on failure that pass the request to the next server: error|timeout|invalid_header|http_500|http_502|http_503|http_504|http_404|off

error: an error occurred while establishing a connection with the proxied server, sending a request to it, or reading its response.

timeout: a timeout occurred while establishing a connection with the proxied server, sending a request to it, or reading its response.

invalid_header: the proxied server returned an empty or invalid response header.

off: do not pass the request on to the next server.

http_500, http_502, ...: the proxied server returned a response with the corresponding status code (500, 502, 503, 504, 404, etc.).
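For example, to retry the next server not only on timeouts but also on connection errors and 502/504 responses, one could write the following (a sketch; mysvr is the upstream group from the examples above):

    location / {
        proxy_next_upstream error timeout http_502 http_504;    # conditions that send the request to the next server
        proxy_pass http://mysvr;
    }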

6. If you want to obtain the client's real IP over HTTP instead of the proxy server's IP address, add the following settings.

proxy_set_header Host $host;    # as long as the domain name the user visits in the browser is bound to the VIP, and the VIP has real servers (RS) behind it, $host is the host and port from the URL the user accessed, e.g. www.taobao.com:80
proxy_set_header X-Real-IP $remote_addr;    # assign the source IP of the HTTP connection ($remote_addr) to the X-Real-IP header, so application code can read X-Real-IP to obtain the source IP
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;    # when Nginx acts as a proxy, record the chain of client and proxy IPs passed through, separated by commas; code can take the first comma-separated entry of X-Forwarded-For (e.g. with awk -F',' '{print $1}') as the source IP
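A minimal sketch of these three headers set in a proxying location; how the backend reads them depends on the application framework, so the closing comment is only indicative:

    location / {
        proxy_set_header Host            $host;                        # host and port the client actually requested
        proxy_set_header X-Real-IP       $remote_addr;                 # the client's own IP
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;   # client IP plus any intermediate proxy IPs, comma separated
        proxy_pass http://mysvr;                                       # mysvr: the upstream group from above
    }
    # On the backend, read the X-Real-IP header, or take the first
    # comma-separated entry of X-Forwarded-For, to get the source IP.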

Regarding X-Forwarded-For and X-Real-IP, I recommend a fellow blogger's article on the X-Forwarded-For header in HTTP requests; that blogger has a whole series of articles explaining the HTTP protocol and is well worth following.

7. The following is the proxy-related section of one of my configuration files, for reference only.

    include mime.types;                       # map of file extensions to MIME types
    default_type application/octet-stream;    # default file type; the default is text/plain
    #access_log off;                          # disable the access log
    log_format myFormat '$remote_addr - $remote_user [$time_local] $request $status $body_bytes_sent $http_referer $http_user_agent $http_x_forwarded_for';    # custom log format
    access_log log/access.log myFormat;       # combined is the default log format
    sendfile on;                              # allow transferring files with sendfile; default off; valid in http, server and location blocks
    sendfile_max_chunk 100k;                  # each worker process may not transfer more than this per call; default 0, i.e. no upper bound
    keepalive_timeout 65;                     # connection timeout; default 75s; valid in http, server and location blocks
    proxy_connect_timeout 1;                  # timeout for establishing a connection with the proxied server; default 60 seconds
    proxy_read_timeout 1;                     # timeout waiting for a response after sending a read request to the proxied server group; default 60 seconds
    proxy_send_timeout 1;                     # timeout waiting for a response after sending a write request to the proxied server group; default 60 seconds
    proxy_http_version 1.0;                   # HTTP protocol version used for proxying, 1.0 or 1.1; the default is 1.0
    #proxy_method get;                        # request method used towards the proxied server: POST/GET
    proxy_ignore_client_abort on;             # whether to abort the request to the proxied server when the client disconnects; default off
    proxy_ignore_headers "Expires" "Set-Cookie";    # header fields in the proxied response that Nginx should not process; separate multiple fields with spaces
    proxy_intercept_errors on;                # if the proxied server returns a status code of 400 or greater, the error_page setting takes effect; default off
    proxy_headers_hash_max_size 1024;         # maximum size of the hash tables holding HTTP header names; default 512 characters
    proxy_headers_hash_bucket_size 128;       # bucket size of the hash tables holding HTTP header names; default 64 characters
    proxy_next_upstream timeout;              # for the server group in the reverse-proxy upstream, the failure conditions that pass the request to the next server: error|timeout|invalid_header|http_500|http_502|http_503|http_504|http_404|off
    #proxy_ssl_session_reuse on;              # default on; set to off if "SSL3_GET_FINISHED:digest check failed" appears in the error log
Nginx load balancing in detail

In the previous article I described which load-balancing algorithms Nginx offers; in this section I will explain in detail how to configure them.

First, let's talk about the upstream configuration. It defines a group of proxied server addresses and then the load-balancing algorithm. There are two ways of writing the proxied server address, shown below. (Note that in current Nginx versions the scheme belongs in proxy_pass rather than in the upstream server entries, so the first form is the one to rely on.)

upstream mysvr { 
      server 192.168.10.121:3333;
      server 192.168.10.122:3333;
    }
 server {
        ....
        location ~* ^.+$ {
           proxy_pass http://mysvr;    # forward requests to the list of servers defined in mysvr
        }
    }

upstream mysvr { 
      server http://192.168.10.121:3333;
      server http://192.168.10.122:3333;
    }
 server {
        ....
        location ~* ^.+$ {
           proxy_pass mysvr;    # forward requests to the list of servers defined in mysvr
        }
    }

Now for a bit of hands-on practice.

1. Hot standby: if you have two servers, the second server is brought into service only when the first one fails. Request handling order: AAAAAA..., then A suddenly goes down, BBBBBB...

upstream mysvr { 
      server 127.0.0.1:7878; 
      server 192.168.10.121:3333 backup;    # hot standby
    }

2. Round robin: Nginx uses round robin by default, and the weight defaults to 1. Request handling order: ABABABAB...

upstream mysvr { 
      server 127.0.0.1:7878;
      server 192.168.10.121:3333;
    }

3. Weighted round robin: requests are distributed to the servers in proportion to the configured weights; when not set, a weight defaults to 1. Request handling order for the following configuration: ABBABBABBABB...

upstream mysvr { 
      server 127.0.0.1:7878 weight=1;
      server 192.168.10.121:3333 weight=2;
    }

4. ip_hash: Nginx routes requests from the same client IP to the same server.

upstream mysvr { 
      server 127.0.0.1:7878; 
      server 192.168.10.121:3333;
      ip_hash;
    }

5. If the four balancing algorithms above are still not entirely clear, please have a look at my previous article, which includes pictures and may be easier to follow.

Do you feel by now that Nginx's load-balancing configuration is remarkably simple and powerful? We are not done yet, so let's keep going.

Next, several state parameters of the load-balancing configuration are explained.

down: the current server temporarily does not participate in load balancing.

backup: a reserved backup machine. It receives requests only when all other non-backup machines fail or are busy, so it carries the lightest load.

max_fails: the number of failed requests allowed; the default is 1. When the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned.

fail_timeout: the time for which the server is suspended after max_fails failures. max_fails can be used together with fail_timeout, as in the example below.

upstream mysvr { 
      server 127.0.0.1:7878 weight=2 max_fails=2 fail_timeout=2;    # illustrative max_fails/fail_timeout values
      server 192.168.10.121:3333 weight=1 max_fails=2 fail_timeout=1;
    }
