How to configure Nginx load balancing with upstream in a Linux environment


I. Server preparation (four servers)

1. Front-end servers:
192.168.1.112 — primary front-end server; the test domain nginx.21yunwei.com points here via a hosts entry (see the sketch after this list).
192.168.1.113 — standby front-end server.
Back-end web server pool web_pools:
192.168.1.102
192.168.1.103
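On the test client, the domain is pointed at the primary front-end server with a hosts entry — a minimal sketch, assuming a Linux client (adjust the path on other systems):

echo "192.168.1.112 nginx.21yunwei.com" >> /etc/hosts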

2. Environment: CentOS 6 on all four servers.

Install Nginx on the front-end server. Environment setup is not covered here; refer to the article "Linux Installation Nginx Environment Configuration" for deploying the Nginx environment.
On the back-end web server pool, install Apache with yum install httpd -y and shut down the firewall; otherwise the front-end server cannot reach the web pool.
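A minimal setup sketch for each back-end node (192.168.1.102 and 192.168.1.103), assuming CentOS 6 service tooling; the per-node index page is only a convenience so the load balancer's rotation is visible during testing:

yum install httpd -y
echo "web node $(hostname)" > /var/www/html/index.html   # distinct page per node, for testing only
service httpd start
service iptables stop    # CentOS 6 firewall; otherwise the front end cannot reach the pool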

II. Nginx load balancing configuration

1. Modify the Nginx configuration file.

egrep -v "#|^$" nginx.conf.default > nginx.conf
This strips the comment lines and blank lines from the default configuration and writes the result to nginx.conf, making the file easier to read.
Nginx load balancing is implemented by the ngx_http_upstream_module module; see http://nginx.org/en/docs/http/ngx_http_upstream_module.html for details.
The modified configuration file is as follows:

worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    upstream web_pools {
        server 192.168.1.102:80 weight=5;         # web node server; the greater the weight, the more requests it receives
        server 192.168.1.103:80 weight=5;         # web node server; weight 5 for testing, adjust to the actual environment
        server 192.168.1.104:80 weight=5 backup;  # standby node; used only when the active web nodes are down
    }

    server {
        listen 80;
        server_name nginx.21yunwei.com;
        location / {
            root html;
            index index.html index.htm;
            proxy_pass http://web_pools;
        }
    }
}
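After saving the file, a quick way to check and apply the configuration and watch the rotation from a client — a sketch, assuming each back-end node serves a page identifying itself, as in the setup above:

nginx -t           # check configuration syntax
nginx -s reload    # reload the configuration without dropping connections
for i in $(seq 1 10); do curl -s http://nginx.21yunwei.com/; done   # responses should alternate between nodes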


Nginx upstream module: detailed introduction and scheduling algorithms

I. upstream module introduction
Nginx load balancing relies on the ngx_http_upstream_module module. It works with the proxying directives proxy_pass (used above), fastcgi_pass, and memcached_pass. The principle: when a request arrives at the front end, the proxying directive forwards it to the upstream block whose name it references — here, the web_pools pool we defined — and the request is then distributed to one of the servers defined in that upstream block according to the configured scheduling algorithm.
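For illustration, fastcgi_pass can reference an upstream pool the same way proxy_pass does — a minimal sketch with a hypothetical php_pools pool (not part of the original setup):

upstream php_pools {
    server 192.168.1.102:9000;
    server 192.168.1.103:9000;
}
server {
    listen 80;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass php_pools;    # load-balanced FastCGI back ends
    }
}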
II. upstream syntax
Taking the file defined earlier as an example:

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    upstream web_pools {
        server 192.168.1.102:80 weight=5 max_fails=3 fail_timeout=3;
        server 192.168.1.103:80 weight=5 max_fails=3 fail_timeout=3;
        server 192.168.1.104:80 weight=5 backup;
        keepalive 500;
    }

    server {
        listen 80;
        server_name nginx.21yunwei.com;
        location / {
            root html;
            index index.html index.htm;
            proxy_pass http://web_pools;
        }
    }
}
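One caveat not shown in the file above: with a bare proxy_pass, the back end sees the upstream name as the Host header and the front-end server as the client. A common addition (a sketch, not from the original article) passes both through:

location / {
    proxy_pass http://web_pools;
    proxy_set_header Host $host;                                   # preserve the requested host for back-end virtual hosts
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;   # append the real client IP
}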
III. upstream module parameters
1. upstream must be placed inside the http{} block.
2. The default algorithm is wrr (weighted round-robin).
3. upstream parameter descriptions:
server 192.168.1.102:80 weight=5; — defines a real server, by IP or by domain name. The port may be omitted and defaults to 80; any other port must be written explicitly.
weight: the load weight of this server; the greater the weight, the more requests it receives. Default is 1.
down: marks the server as permanently unavailable.
backup: marks the server as a backup; it is used only when the non-backup servers are down.
max_fails: the maximum number of failed attempts before the server is considered unavailable; default 1, and 0 disables the failure accounting. Configure to the actual requirement, typically 2-3 or higher.
fail_timeout: how long the server is considered unavailable after max_fails failures; default 10s, typically 2-3 seconds. Used together with max_fails.
max_conns: limits the number of concurrent connections to the server; protects the node.
keepalive 300; — the maximum number of idle keepalive connections to upstream servers cached per worker process.
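Note that for proxied HTTP traffic the keepalive directive only takes effect when the connection to the upstream uses HTTP/1.1 with the Connection header cleared — a sketch of the required proxy settings:

upstream web_pools {
    server 192.168.1.102:80 weight=5 max_fails=3 fail_timeout=3;
    server 192.168.1.103:80 weight=5 max_fails=3 fail_timeout=3;
    keepalive 300;    # idle connections cached per worker process
}
server {
    listen 80;
    location / {
        proxy_pass http://web_pools;
        proxy_http_version 1.1;          # keepalive to upstreams requires HTTP/1.1
        proxy_set_header Connection "";  # strip the client's Connection header
    }
}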
IV. upstream scheduling algorithms
Nginx upstream supports five distribution methods. The first three are built into nginx; the last two require third-party modules:
1. rr (round-robin)
Round-robin is upstream's default method: requests are distributed across the back-end servers one by one in order (1:1), and a server that goes down is automatically removed from rotation.
upstream backend {
    server 192.168.1.101:88;
    server 192.168.1.102:88;
}
2. wrr (weighted round-robin)
An enhanced version of round-robin: each server is given a weight, and its share of requests is proportional to that weight. Mainly used when the back-end servers have unequal capacity.
upstream backend {
    server 192.168.1.101 weight=1;
    server 192.168.1.102 weight=2;
    server 192.168.1.103 weight=3;
}
3. ip_hash
Each request is assigned according to a hash of the client IP (the address nginx sees, i.e. the upstream proxy's or the client's IP), so a given visitor always reaches the same back-end server. This addresses session consistency.
upstream backend {
    ip_hash;
    server 192.168.1.101:81;
    server 192.168.1.102:82;
    server 192.168.1.103:83;
}
4. fair (third-party)
As the name suggests, fair distributes requests according to the back-end servers' response times: servers with shorter response times are assigned requests first.
upstream backend {
    server 192.168.1.101;
    server 192.168.1.102;
    fair;
}
5. url_hash (third-party)
Similar to ip_hash, but requests are assigned according to a hash of the requested URL, so each URL always reaches the same back-end server. Mainly used when the back-end servers cache content.
upstream backend {
    server 192.168.1.101;
    server 192.168.1.102;
    hash $request_uri;
    hash_method crc32;
}
hash_method selects the hash algorithm to use. Note that with this third-party module the server lines cannot take the weight parameter.
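As an aside: since nginx 1.7.2 the stock ngx_http_upstream_module provides a built-in hash directive that covers the same use case without a third-party module, and it does allow weights — a minimal sketch:

upstream backend {
    hash $request_uri consistent;   # consistent enables ketama consistent hashing
    server 192.168.1.101 weight=2;
    server 192.168.1.102;
}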
