Nginx as a Server Load Balancer

Source: Internet
Author: User
Tags: nginx, server, load balancing

Nginx implements load balancing through the upstream directive, so its mechanism is relatively simple: it is a layer-7 load balancing implementation based on content and applications. By default, nginx performs health checks on backend servers, but its monitoring capability is weak and limited to port detection. With a small number of backend servers (fewer than 10), its load balancing capability is outstanding. For workloads with many backend nodes, because all requests pass in and out of a single server, request congestion can easily build up and cause connection failures, so the capacity of the backend servers cannot be fully utilized.

 

1: nginx Load Balancing Algorithms

The nginx load balancing module currently supports the scheduling algorithms described below. The latter two (fair and url_hash) are third-party scheduling algorithms.

Round robin (default): requests are distributed to the backend servers one by one in order. If a backend server goes down, it is automatically removed from rotation so that user access is not affected.

weight: specifies a round-robin weight. The higher the weight value, the larger the share of requests assigned to that server. It is mainly used when the backend servers have uneven performance.

ip_hash: each request is assigned according to a hash of the client IP address, so visitors from the same IP address always reach the same backend server. This effectively solves the session-persistence problem of dynamic web applications.

fair: more intelligent than the algorithms above. It balances load based on page size and load time, that is, it assigns requests according to the response time of the backend servers, preferring servers with shorter response times. Nginx does not support fair natively; to use this scheduling algorithm, you must download the nginx upstream_fair module.

url_hash: distributes requests according to a hash of the requested URL, so that each URL is always directed to the same server, which further improves the efficiency of backend cache servers. Nginx does not support url_hash natively; to use this scheduling algorithm, you must install the nginx hash package.
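As a sketch, the built-in algorithms above can be selected in an upstream block like this (the server addresses here are illustrative, not part of the original configuration):

```nginx
# weighted round robin: the second server receives roughly twice as many requests
upstream app_weighted {
    server 192.168.100.100:80 weight=1;
    server 192.168.100.110:80 weight=2;
}

# ip_hash: clients from the same IP address stick to the same backend server
upstream app_sticky {
    ip_hash;
    server 192.168.100.100:80;
    server 192.168.100.110:80;
}
```

The plain round-robin default is used when neither a weight nor a hashing directive is specified.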

In the HTTP upstream module, the server directive specifies the IP address and port of each backend server, and can also set the status of each backend server in load balancer scheduling. Common statuses include:

down: the server temporarily does not participate in load balancing.

backup: a reserved backup machine. It receives requests only when all other non-backup machines fail or are busy, so it is under the least access pressure.

max_fails: the number of failed requests allowed; the default is 1. When the maximum number is exceeded, the error defined by the proxy_next_upstream directive is returned.

fail_timeout: the time for which the server is suspended after max_fails failures. max_fails is used together with fail_timeout.

※ Note: when the load scheduling algorithm is ip_hash, the weight and backup statuses cannot be used on backend servers.
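Put together, these statuses might appear in an upstream block as follows (addresses and values are illustrative):

```nginx
upstream backend_pool {
    # after 3 failures within 30s, suspend this server for 30s
    server 192.168.100.100:80 max_fails=3 fail_timeout=30s;
    server 192.168.100.110:80 max_fails=3 fail_timeout=30s;
    # temporarily removed from load balancing
    server 192.168.100.120:80 down;
    # used only when all non-backup servers are unavailable
    server 192.168.100.130:80 backup;
}
```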

2: nginx Load Balancer Configuration Example

The following is an nginx load balancer configuration example. To focus on the load balancing settings, only the HTTP configuration segment of the nginx configuration file is listed; other configuration is omitted. The load balancer configuration parameters are as follows:

[fanheng ~]# cat /usr/local/nginx/conf/nginx.conf

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    upstream test_one {
        ip_hash;
        server 192.168.100.100:80;
        server 192.168.100.110:80;
    }

    server {
        listen      80;
        server_name www.test_one.com;

        location / {
            root  html;
            index index.php index.html index.htm;
            proxy_pass http://test_one;
            proxy_set_header Host $host;
        }

        error_page 500 502 503 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
[fanheng ~]#

In this configuration, the upstream keyword marks the start of the load balancer configuration. upstream is nginx's HTTP upstream module, which uses a simple scheduling algorithm to balance requests from clients across the backend servers. In the configuration above, the load balancer instance is named test_one; this name can be chosen freely and is referenced directly wherever needed.

In addition, the proxy_set_header directive lets the backend servers obtain the user's host name and real IP address, rather than the proxy's address.
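For example, in addition to the Host header set above, two commonly used headers pass the real client address to the backend. This is a typical pattern, not part of the original configuration:

```nginx
location / {
    proxy_pass http://test_one;
    proxy_set_header Host $host;
    # the client's direct IP address
    proxy_set_header X-Real-IP $remote_addr;
    # the chain of client and intermediate proxy addresses
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

The backend application can then read X-Real-IP (or the first entry of X-Forwarded-For) instead of seeing the load balancer's own address in every request.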


This article is from the "near Zhu Chi" blog; please be sure to keep this source: http://fanheng.blog.51cto.com/974941/1557150

