Nginx load balancing cluster solution: healthcheck_nginx_upstreams (Part 1)

Source: Internet
Author: User

This article originates from the Internet; the original author can no longer be identified. It is reproduced here to record the installation process and configuration of healthcheck_nginx_upstreams. After the first installation, healthcheck_nginx_upstreams did not run correctly; after reading through its source code and debugging, it now runs normally.


Before the fix, the error log showed messages such as:

    *26 no live upstreams while connecting to upstream


Nginx is a free, open-source, high-performance HTTP server and reverse proxy; it can also act as an IMAP/POP3 proxy. Its high performance, stability, rich feature set, simple configuration, and low resource consumption have made it popular with operations engineers. Unlike traditional servers, Nginx does not rely on threads to handle requests; it uses a more scalable, asynchronous event-driven architecture. This architecture consumes fewer resources and, more importantly, can sustain a large request load. Even if you do not need to handle thousands of requests, you can still benefit from Nginx's high performance, small memory footprint, and rich features.

A reverse proxy accepts connection requests from the Internet, forwards them to servers on the internal network, and returns the results to the client that requested the connection; to the outside, the proxy itself appears to be the server. This mode of operation is similar to the LVS-NAT model. A reverse proxy can also be understood as web server acceleration: a high-speed web cache is placed between a busy web server and the external network to reduce the load on the real web server. All external requests to the site pass through the reverse proxy, which receives each client request, fetches the content from the origin server, returns it to the user, and saves a copy locally; later requests for the same content can be served directly from the local cache, reducing pressure on the backend web servers and improving response time. Nginx therefore also provides a caching function.
Reverse proxy workflow:

1) The user issues a request via a domain name, which resolves to the reverse proxy server's IP address.
2) The reverse proxy server receives the user's request.
3) The reverse proxy looks in its local cache for the requested content; if found, it returns the content directly to the user.
4) If the content is not in the local cache, the reverse proxy requests it from the backend server and sends it to the user; if the content is cacheable, it is also stored in the proxy's local cache.

Advantages of a reverse proxy:

1) It hides the real website servers, improving their security.
2) It conserves scarce IP addresses: the backend servers can use private IP addresses to communicate with the proxy.
3) It speeds up website access and reduces the load on the web servers.
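The workflow above can be sketched as a minimal Nginx reverse-proxy configuration with a local cache; the cache path, zone name, server name, and backend address below are illustrative, not taken from any particular deployment:

```nginx
http {
    # Local cache used by steps 3 and 4 of the workflow.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=webcache:10m max_size=1g;

    server {
        listen 80;
        server_name www.example.com;               # the name the user's domain resolves to

        location / {
            proxy_pass http://172.23.136.148:80;   # fetch content from the backend server
            proxy_cache webcache;                  # serve repeat requests from the cache
            proxy_cache_valid 200 302 10m;         # keep successful responses for 10 minutes
        }
    }
}
```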
(i) Scheduling algorithms

Nginx's upstream directive declares the group of backend servers used by proxy_pass and fastcgi_pass; combined with the reverse proxy function, this implements load balancing. Nginx supports several scheduling algorithms:

1. Round robin (default): each request is assigned to a different backend server in turn; if a backend server is down, it is skipped and the request goes to the next server. No per-connection state needs to be recorded, so this is stateless scheduling.

2. weight: round robin with weights. The weight is proportional to the share of traffic and reflects the backend server's capacity; a more powerful backend can be given a larger share of requests. Example: backend server 172.23.136.148 has 2 x E5520 CPUs and 8 GB of memory; backend server 172.23.136.149 has 2 x Xeon 2.80 GHz CPUs and 4 GB of memory. To have 20 of every 30 requests reaching the front end handled by 172.23.136.148 and the remaining 10 by 172.23.136.149, configure:

    upstream web_pool {
        server 172.23.136.148 weight=10;
        server 172.23.136.149 weight=5;
    }

3. ip_hash: each request is assigned according to a hash of the client IP. When a new request arrives, its client IP is hashed; all subsequent requests whose client IP hashes to the same value are assigned to the same backend server. This solves the session-persistence problem, but it can lead to uneven distribution and cannot guarantee balanced load. Example:

    upstream web_pool {
        ip_hash;
        server 172.23.136.148:80;
        server 172.23.136.149:80;
    }

4. fair (third party): requests are assigned according to the backend servers' response times; the server with the shortest response time gets priority.
    upstream web_pool {
        server 172.23.136.148;
        server 172.23.136.149;
        fair;
    }

5. url_hash (third party): requests are assigned according to a hash of the requested URL, so each URL is directed to the same backend server; this is more efficient when the backends are caches. Example: add a hash statement in the upstream block; the server statements must not carry weight or similar parameters; hash_method selects the hash algorithm:

    upstream web_pool {
        server squid1:3128;
        server squid2:3128;
        hash $request_uri;
        hash_method crc32;
    }

The state of each server can be set with the following parameters:

1. down: the server temporarily does not participate in load balancing.
2. weight: defaults to 1; the larger the weight, the larger the share of load (not usable together with ip_hash).
3. max_fails: the number of failed requests allowed, default 1; set to 0 to disable the check. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
4. fail_timeout: the length of the pause after max_fails failures.
5. backup: a standby machine; requests are assigned to it only when all other non-backup machines are down or busy, so it carries the lightest load.

Nginx supports several upstream groups at the same time, for use by different servers.

(ii) Directive usage

1. upstream declares a group of servers that can be referenced by proxy_pass and fastcgi_pass. The servers can listen on different ports, can be UNIX sockets, and can be given different weights. For example:

    upstream web_pool {
        server coolinuz.9966.org weight=5;
        server 172.23.136.148:8080 max_fails=3 fail_timeout=30s;
        server unix:/tmp/backend3;
    }

2. server syntax: server name [parameters]; name can be an FQDN, a host address, a port, or a UNIX socket. If the FQDN resolves to multiple addresses, each address is used.
3. proxy_pass syntax: proxy_pass URL; specifies the address of the proxied backend server, and the URL or address and port to which requests are mapped.

4. proxy_set_header syntax: proxy_set_header header value; redefines or adds request header fields that are passed to the proxied server. For example:

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

Note: $proxy_add_x_forwarded_for takes the "X-Forwarded-For" field from the client request header and appends $remote_addr to it, separated by a comma; if the request carries no "X-Forwarded-For" header, $proxy_add_x_forwarded_for equals $remote_addr.

By the way, Nginx's built-in variables:

$args: the parameters in the request line.
$is_args: "?" if $args is set, otherwise "".
$content_length: the "Content-Length" field of the request header.
$content_type: the "Content-Type" field of the request header.
$document_root: the root directory set by the root directive for the current request.
$document_uri: the same as $uri.
$host: the "Host" field of the request header; if the request has no Host line, it equals the configured server name.
$limit_rate: the connection rate limit.
$request_method: the request method, such as "GET" or "POST".
$remote_addr: the client address.
$remote_port: the client port number.
$remote_user: the client user name, from authentication.
$request_filename: the file path name of the current request.
$request_body_file: the name of the temporary file holding the client request body.
$request_uri: the requested URI, with parameters.
$query_string: the same as $args.
$scheme: the protocol in use, such as http or https; e.g. rewrite ^(.+)$ $scheme://example.com$1 redirect;
$server_protocol: the protocol version of the request, "HTTP/1.0" or "HTTP/1.1".
$server_addr: the server address; if no address is specified with listen, using this variable triggers a system call to obtain it (wasting resources).
$server_name: the name of the server the request arrived at.
$server_port: the port of the server the request arrived at.
$uri: the requested URI, which may differ from the original value, e.g. after a redirect.
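Several of these variables can be combined, for example, into a custom access-log format; the format name "proxylog" and the log path below are illustrative:

```nginx
http {
    # Custom access-log format built from the built-in variables listed above.
    log_format proxylog '$remote_addr - $remote_user '
                        '"$request_method $request_uri $server_protocol" '
                        'host=$host scheme=$scheme args=$args';
    access_log /var/log/nginx/access.log proxylog;
}
```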
5. proxy_read_timeout syntax: proxy_read_timeout time; sets how long Nginx waits for a response from the backend server after the connection has been established.

6. proxy_send_timeout syntax: proxy_send_timeout time; sets the timeout for transmitting a request to the backend server. The whole transfer need not complete within this time; the limit applies only between two successive write operations. If the backend server accepts no new data within this time, Nginx closes the connection.

7. proxy_connect_timeout syntax: proxy_connect_timeout time; sets the timeout for establishing a connection to the backend server.

8. proxy_buffers syntax: proxy_buffers the_number is_size; sets the number and size of the buffers; by default one buffer is the same size as a memory page.

9. proxy_buffer_size syntax: proxy_buffer_size buffer_size; the proxy buffer that holds the header of the backend's response.

10. proxy_busy_buffers_size syntax: proxy_busy_buffers_size size; used when the system load is high and the buffers are insufficient, allowing a larger portion of proxy_buffers to be busy.

11. proxy_temp_file_write_size syntax: proxy_temp_file_write_size size; the size of data written to the cache temp file at a time.
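Taken together, the upstream and proxy_* directives above combine into a single reverse-proxy configuration. A minimal sketch follows; the addresses reuse the earlier examples, while the server name, timeouts, and buffer sizes are illustrative values, not recommendations:

```nginx
http {
    upstream web_pool {
        server 172.23.136.148:80 weight=10 max_fails=3 fail_timeout=30s;
        server 172.23.136.149:80 weight=5;
    }

    server {
        listen 80;
        server_name www.example.com;            # illustrative name

        location / {
            proxy_pass http://web_pool;         # reference the upstream group by name

            # Pass the original Host and the real client address to the backend.
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            # Timeouts and buffers from directives 5-11 above.
            proxy_connect_timeout 30s;
            proxy_send_timeout    60s;
            proxy_read_timeout    60s;
            proxy_buffer_size     4k;
            proxy_buffers         8 4k;
            proxy_busy_buffers_size    16k;
            proxy_temp_file_write_size 64k;
        }
    }
}
```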

(iii) Installing and configuring the third-party module that adds health-state detection of the backend web servers in an upstream group.

Module download address: https://github.com/cep21/healthcheck_nginx_upstreams
Module name: ngx_http_healthcheck_module

Installation and configuration:

1. First unpack the healthcheck module to some path, here assumed to be /tmp/healthcheck_nginx_upstreams:

    # tar xvf cep21-healthcheck_nginx_upstreams-16d6ae7.tar.gz -C /tmp/healthcheck_nginx_upstreams

2. Patch Nginx. First unpack nginx and enter the nginx source directory:

    # tar xf nginx-1.0.11.tar.gz
    # cd nginx-1.0.11
    # patch -p1 < /tmp/healthcheck_nginx_upstreams/nginx.patch

Then compile nginx, adding an option similar to the following when running configure:

    --add-module=/tmp/healthcheck_nginx_upstreams

So, here is the command:
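Once nginx has been rebuilt with the module compiled in, the health checks themselves are configured inside the upstream block, with a status page exposed via a location. A minimal sketch only: the directive names follow the module's README, while the probe request, timings, and paths are assumptions:

```nginx
upstream web_pool {
    server 172.23.136.148:80;
    server 172.23.136.149:80;

    healthcheck_enabled;                       # turn on active health checks for this group
    healthcheck_delay 3000;                    # ms between checks of the same backend
    healthcheck_timeout 1000;                  # ms before a single check counts as failed
    healthcheck_failcount 2;                   # consecutive results needed to change state
    healthcheck_send "GET /health HTTP/1.0"
                     "Host: www.example.com";  # request used to probe each backend
}

server {
    listen 80;

    location /stat {
        healthcheck_status;                    # report the health state of the backends
    }
    location / {
        proxy_pass http://web_pool;
    }
}
```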
