This article was found on the Internet and its original author can no longer be traced. Its purpose is to record the installation process and related configuration of healthcheck_nginx_upstreams. After a successful install, healthcheck_nginx_upstreams did not run properly at first; after reading the source code and debugging, it could be made to work. The error reported was as follows:
*26 no live upstreams while connecting to upstream
Nginx is a free, open-source, high-performance HTTP server and reverse proxy; it can also act as a proxy for IMAP and POP3 servers. Its high performance, stability, rich feature set, simple configuration, and low resource consumption have made it a favorite of operations staff.
Unlike traditional servers, nginx does not rely on threads to handle requests; instead it uses a more scalable, event-driven (asynchronous) architecture. This architecture consumes fewer resources and, more importantly, can withstand a heavy request load. Even if you do not expect to handle thousands of requests, you can still benefit from nginx's high performance, small memory footprint, and rich features.

Nginx as a reverse proxy: a reverse proxy server accepts connection requests from the Internet, forwards them to servers on the internal network, and returns the results to the client that made the request. Externally, the proxy appears to be the server itself; this mode of operation is similar to the LVS-NAT model. A reverse proxy can also be understood as web server acceleration: a high-speed web buffer placed between a busy web server and the external network to reduce the load on the actual web server. All requests from the external network must pass through it. The reverse proxy receives each client request, fetches the content from the origin server, returns it to the user, and saves it locally, so that a later request for the same content can be answered directly from the local cache, reducing the pressure on the backend web server and improving response time. Nginx therefore also provides a cache function.

Workflow of a reverse proxy:
1) the user sends a request via a domain name, which resolves to the IP address of the reverse proxy server;
2) the reverse proxy server receives the user's request;
3) the reverse proxy server looks for the requested content in its local cache and, if found, returns it directly to the user;
4) if the requested content is not available locally, the reverse proxy server requests it from the backend server on the user's behalf, returns it to the user, and, if the content is cacheable, caches a copy locally.

Benefits of a reverse proxy:
1) it hides the real web servers from the outside, improving their security;
2) it saves scarce public IP addresses, since backend servers can use private addresses to communicate with the proxy;
3) it speeds up website access and reduces the load on the web servers.
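The workflow above can be sketched as a minimal caching reverse-proxy configuration. This is only an illustration: the cache path, the zone name web_cache, the cache sizes, and the reuse of backend 172.23.136.148 from the examples later in this article are assumptions, not part of the original setup.

```nginx
http {
    # cache storage: path, in-memory key zone, and size limits are example values
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=web_cache:10m max_size=1g;

    server {
        listen 80;

        location / {
            proxy_pass http://172.23.136.148:80;   # origin (backend) web server
            proxy_cache web_cache;                 # answer repeat requests from the local cache
            proxy_cache_valid 200 302 10m;         # cache successful responses for 10 minutes
        }
    }
}
```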
(1) Scheduling algorithms. The nginx upstream directive defines the group of backend servers used by proxy_pass and fastcgi_pass, i.e. it works with nginx's reverse proxy function, so the two can be combined to achieve load balancing. Nginx supports several scheduling algorithms:

1. Round robin (the default): requests are distributed to the backend servers one by one in order of arrival. If a backend server is down, it is skipped and the request is assigned to the next monitored server. The algorithm does not need to record the state of current connections, so it is stateless scheduling.

2. weight: round robin with weights. The weight is proportional to the share of requests a server receives and reflects the capacity of the backend server; a more powerful backend can be given most of the requests. Example: backend server 172.23.136.148 has two E5520 CPUs and 8 GB of memory; backend server 172.23.136.149 has two 2.80 GHz Xeon CPUs and 4 GB of memory. To send 20 of every 30 incoming requests to 172.23.136.148 and the remaining 10 to 172.23.136.149:

upstream web_pool {
    server 172.23.136.148 weight=10;
    server 172.23.136.149 weight=5;
}

3. ip_hash: each request is assigned according to a hash of the client IP address, so requests whose client IP hashes to the same value always go to the same backend server. This scheduling algorithm solves the session-stickiness problem, but it can lead to uneven distribution and therefore cannot guarantee balanced load. Example:

upstream web_pool {
    ip_hash;
    server 172.23.136.148:80;
    server 172.23.136.149:80;
}

4. fair (third party): requests are assigned according to the response times of the backend servers, with shorter response times given priority.

upstream web_pool {
    server 172.23.136.148;
    server 172.23.136.149;
    fair;
}

5. url_hash (third party): requests are assigned according to a hash of the requested URL, so each URL is always directed to the same backend server; this is effective when the backend servers are caches. Add a hash statement to the upstream block; other parameters such as weight cannot be written in the server statements. hash_method selects the hash algorithm:

upstream web_pool {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}

The status of each server can be set as follows:
1. down: the server does not currently participate in the load; used with ip_hash.
2. weight: defaults to 1; the larger the weight, the larger the share of the load.
3. max_fails: the number of failed requests allowed, 1 by default; setting it to 0 disables the check. When the limit is exceeded, the error defined by the proxy_next_upstream module is returned.
4. fail_timeout: how long the server is paused after the number of failures defined by max_fails.
5. backup: a backup machine; requests are sent to it only when all other non-backup machines are down or busy, so it carries the least load.

Nginx supports configuring multiple groups of load-balanced servers for different purposes.

(2) Directives
1. upstream declares a group of servers that can be referenced by proxy_pass and fastcgi_pass. The servers can listen on different ports or on UNIX sockets, and can be given different weights. Example:

upstream web_pool {
    server coolinuz.9966.org weight=5;
    server 172.23.136.148:8080 max_fails=3 fail_timeout=30s;
    server unix:/tmp/backend3;
}

2. server. Syntax: server name [parameters]; — the name can be an FQDN, a host address, a port, or a UNIX socket. If an FQDN resolves to multiple addresses, each address is used.
3. proxy_pass. Syntax: proxy_pass URL;
This directive specifies the address (and optionally the port) or URL of the backend server to which requests are forwarded.
4. proxy_set_header. Syntax: proxy_set_header header value; — redefines or adds request headers that are passed to the proxied server. Example:

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

Note: $proxy_add_x_forwarded_for takes the "X-Forwarded-For" header from the client request and appends $remote_addr to it, separated by a comma. If the request has no "X-Forwarded-For" header, $proxy_add_x_forwarded_for equals $remote_addr. While we are at it, here are nginx's built-in variables:
$args: the request parameters; $is_args: "?" if $args is set, otherwise "";
$content_length: the "Content-Length" header of the request;
$content_type: the "Content-Type" header of the request;
$document_root: the root directory path set by the root directive for the current request;
$document_uri: same as $uri;
$host: the "Host" header of the request, or, if the request has no Host line, the configured server name;
$limit_rate: the connection rate limit;
$request_method: the request method, e.g. "GET" or "POST";
$remote_addr: the client address;
$remote_port: the client port number;
$remote_user: the client user name, used for authentication;
$request_filename: the file path name of the current request;
$request_body_file: the temporary file holding the client request body;
$request_uri: the request URI with parameters;
$query_string: same as $args;
$scheme: the protocol in use, e.g. http or https, as in: rewrite ^(.+)$ $scheme://example.com$1 redirect;
$server_protocol: the protocol version of the request, "HTTP/1.0" or "HTTP/1.1";
$server_addr: the server address; if no address is specified with listen, using this variable triggers a system call to obtain it (wasting resources);
$server_name: the name of the server the request arrived at;
$server_port: the port of the server the request arrived at;
$uri: the URI of the request, which may differ from the initial value, e.g. after a redirect.
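For illustration, several of these variables can be combined in a log_format directive. The format name proxy_log and the log file path are made-up examples, and $time_local is a further built-in variable not listed above.

```nginx
http {
    # custom access-log format built from the built-in variables above
    log_format proxy_log '$remote_addr - $remote_user [$time_local] '
                         '"$request_method $uri" scheme=$scheme '
                         'host=$host args=$args';

    server {
        listen 80;
        access_log /var/log/nginx/proxy.access.log proxy_log;
    }
}
```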
5. proxy_read_timeout. Syntax: proxy_read_timeout time; — sets how long nginx waits for a response from the backend server once the connection is established.
6. proxy_send_timeout. Syntax: proxy_send_timeout time; — the timeout for transmitting the request to the backend server. It does not limit the whole transmission, only the interval between two write operations; if the backend server accepts no new data within this time, nginx closes the connection.
7. proxy_connect_timeout. Syntax: proxy_connect_timeout time; — the timeout for establishing a connection to the backend server.
8. proxy_buffers. Syntax: proxy_buffers the_number is_size; — sets the number and size of the buffers. By default, the size of one buffer equals the page size.
9. proxy_buffer_size. Syntax: proxy_buffer_size buffer_size; — the proxy buffer used to hold the header of the upstream response.
10. proxy_busy_buffers_size. Syntax: proxy_busy_buffers_size size; — when the system load is heavy and the buffers are insufficient, larger proxy_buffers can be requested.
11. proxy_temp_file_write_size. Syntax: proxy_temp_file_write_size size; — specifies the size of the cached temporary file.
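A sketch of how these timeout and buffer directives might be combined in one location block. The web_pool upstream name and the connect timeout follow the examples in this article; the other values are illustrative assumptions, not tuned recommendations.

```nginx
location / {
    proxy_pass http://web_pool;

    # timeouts, in seconds: connecting, between two writes, waiting for the response
    proxy_connect_timeout 3;
    proxy_send_timeout 30;
    proxy_read_timeout 30;

    # one 4k buffer for response headers, 8 x 4k buffers for the body
    proxy_buffer_size 4k;
    proxy_buffers 8 4k;
    proxy_busy_buffers_size 8k;
    proxy_temp_file_write_size 64k;
}
```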
(3) Installing and configuring the third-party module that checks the health of the backend web servers in an upstream.
1. Unpack the module:

# tar -xvf cep21-healthcheck_nginx_upstreams-16d6ae7.tar.gz -C /tmp/healthcheck_nginx_upstreams

2. Patch nginx. First unpack nginx and enter the nginx source directory:

# tar xf nginx-1.3.4.tar.gz
# cd nginx-1.0.11
# patch -p1 < /tmp/healthcheck_nginx_upstreams/nginx.patch

Then compile nginx. When running configure, add an option like --add-module=/tmp/healthcheck_nginx_upstreams, i.e. use the following command:
# ./configure \
  --prefix=/usr/local/nginx \
  --sbin-path=/usr/sbin/nginx \
  --conf-path=/etc/nginx.conf \
  --lock-path=/var/lock/nginx.lock \
  --user=nginx \
  --group=nginx \
  --with-http_ssl_module \
  --with-http_flv_module \
  --with-http_stub_status_module \
  --with-http_gzip_static_module \
  --http-proxy-temp-path=/var/tmp/nginx/proxy/ \
  --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ \
  --with-pcre \
  --add-module=/tmp/healthcheck_nginx_upstreams
# make && make install
Using ngx_http_healthcheck_module:
1. The directives supported by this module are:
healthcheck_enabled: enables the module;
healthcheck_delay: interval between two checks of the same backend server, in milliseconds; default 1000;
healthcheck_timeout: timeout of one health check, in milliseconds; default 2000;
healthcheck_failcount: the number of successes or failures after which a backend server is enabled or disabled;
healthcheck_send: the probe request sent to check a backend server's health, e.g. healthcheck_send "GET /health HTTP/1.0" 'Host: coolinuz.9966.org';
healthcheck_expected: the response content expected from the backend server; if not set, a status code of 200 from the backend counts as healthy;
healthcheck_buffer: size of the buffer used for health checks;
healthcheck_status: outputs check information in a way similar to stub_status, used as follows:

location /stat {
    healthcheck_status;
}
(4) Configuration and implementation. Configuration code:
http {
    upstream web_pool {
        server 172.23.136.148:80 weight=10;
        server 172.23.136.149:80 weight=5;
        healthcheck_enabled;
        healthcheck_delay 1000;
        healthcheck_timeout 1000;
        healthcheck_failcount 2;
        healthcheck_send "GET /.health HTTP/1.0";
    }
    server {
        listen 80;
        location / {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://web_pool;
            proxy_connect_timeout 3;
        }
        location /stat {
            healthcheck_status;
        }
    }
}
The "proxy_set_header" directives are set because, with a reverse proxy, it is nginx rather than the client that connects to the backend server: after a request passes through the proxy, the source IP in the packet headers seen by the backend web server is the proxy server's address, which makes any per-client IP statistics gathered by the backend program meaningless. Likewise, when the backend web server hosts multiple name-based virtual hosts, the Host header must be added to carry the requested domain name, so that the backend web server can identify which virtual host should handle the proxied request.
(5) Summary. As can be seen from the above, nginx configuration is actually fairly simple compared with other web server software, yet its features are remarkably powerful and rich. Through nginx's reverse proxy and its flexible regular-expression matching, a site can separate dynamic and static content: requests for dynamic pages such as PHP go to the PHP web servers, while cacheable pages, images, JavaScript, CSS, and Flash go to cache servers such as Squid or to file servers. Combined with nginx's high performance on static content and its high concurrency, this is why nginx as a front-end load-balancing proxy server has become the solution of choice for more and more architects.