Several ways to configure load balancing in Nginx

Source: Internet
Author: User
What is load balancing

As traffic to a single server grows, the load on that server increases, and once the traffic exceeds what it can withstand, the server crashes. To avoid crashes and give users a better experience, we spread the load across servers through load balancing.

We can build many servers into a server cluster. When a user visits the site, the request first reaches an intermediary server, which selects a less loaded server in the cluster and forwards the request to it. In this way, every visit keeps the load balanced across the servers in the cluster, sharing the pressure and avoiding crashes.

Load balancing is implemented on the principle of a reverse proxy.

Several common ways of load balancing

1. Round robin (default)
Each request is assigned to a different backend server in turn, in chronological order; if a backend server goes down, it is automatically removed from rotation.

upstream backserver {
    server 192.168.0.14;
    server 192.168.0.15;
}

2. Weight
Specifies the polling probability. The access ratio is proportional to each server's weight; this is used when the backend servers have uneven performance.
Example:

upstream backserver {
    server 192.168.0.14 weight=3;
    server 192.168.0.15 weight=7;
}

The higher the weight, the greater the probability of being selected; in the example above, 30% and 70% respectively.

3. ip_hash
The methods above share a problem: in a load-balanced system, if a user logs in on one server, their next request may be redirected to a different server in the cluster, and the login session is lost. It is clearly inappropriate for a logged-in user to be relocated to another server.

We can use the ip_hash directive to solve this problem: once a client has accessed a server, subsequent requests from that client are automatically routed to the same server by a hashing algorithm.

Each request is assigned according to the hash of the client's IP address, so every visitor consistently reaches the same backend server, which solves the session problem.

upstream backserver {
    ip_hash;
    server 192.168.0.14:88;
    server 192.168.0.15:80;
}

4. fair (third party)
Requests are assigned according to the response time of the backend servers; servers with shorter response times are given priority.

upstream backserver {
    server server1;
    server server2;
    fair;
}

5. url_hash (third party)
Requests are assigned by the hash of the requested URL, so each URL is always directed to the same backend server. This is more efficient when the backend servers are caches.

upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}

The status of each server can be set with:

1. down: the server temporarily does not participate in the load.
2. weight: defaults to 1; the larger the weight, the greater the share of the load.
3. max_fails: the number of failed requests allowed, defaulting to 1. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
4. fail_timeout: the time the server is paused after max_fails failures.
5. backup: the server receives requests only when all the non-backup machines are down or busy, so this machine carries the lightest load.
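The parameters above can be combined in a single upstream block; a minimal sketch, using placeholder IP addresses:

```nginx
upstream backserver {
    # temporarily removed from rotation
    server 192.168.0.10 down;
    # receives twice the share of a weight=1 server
    server 192.168.0.11 weight=2;
    # after 3 failures within fail_timeout, pause this server for 30 seconds
    server 192.168.0.12 max_fails=3 fail_timeout=30s;
    # used only when all the servers above are down or busy
    server 192.168.0.13 backup;
}
```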

A complete configuration example:

#user nobody;
worker_processes 4;

events {
    # maximum number of concurrent connections
    worker_connections 1024;
}

http {
    # list of servers to balance across
    upstream myproject {
        # the ip_hash directive routes the same user to the same server
        ip_hash;
        server 125.219.42.4 fail_timeout=60s;
        server 172.31.2.183;
    }

    server {
        # listening port
        listen 80;

        # root location
        location / {
            # select the upstream server list
            proxy_pass http://myproject;
        }
    }
}
