Distribution methods of nginx upstream


1. Round Robin (default)

Each request is distributed to the backend servers one by one in chronological order. If a backend server goes down, it is automatically removed from the rotation.
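For example, a minimal round-robin configuration looks like the following sketch (the upstream name backend and the addresses are placeholders):

upstream backend {
    server 192.168.1.10;
    server 192.168.1.11;
}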

2. Weight
Specifies the round-robin probability. weight is proportional to the access ratio and is used when backend server performance is uneven.
For example:

upstream backend {
    server 192.168.1.10 weight=10;
    server 192.168.1.11 weight=10;
}


3. ip_hash
Each request is allocated according to the hash of the client IP address, so that each visitor consistently reaches the same backend server, which can solve the session persistence problem.
For example:

upstream resinserver {
    ip_hash;
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}


4. Fair (third-party)
Requests are allocated based on the response time of the backend servers; servers with shorter response times are given priority.

upstream resinserver {
    server server1;
    server server2;
    fair;
}


5. url_hash (third-party)

Requests are allocated based on the hash of the requested URL, so that each URL is directed to the same backend server. This is most effective when the backend servers are caches.

For example, add a hash statement to the upstream block; other parameters such as weight cannot be written in the server statements. hash_method specifies the hash algorithm to use.


upstream resinserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}

TIPS:



upstream resinserver { # defines the addresses and states of the load-balanced devices
    ip_hash;
    server 127.0.0.1:8000 down;
    server 127.0.0.1:8080 weight=2;
    server 127.0.0.1:6801;
    server 127.0.0.1:6802 backup;
}

Then add, in the server block that should use this upstream:

proxy_pass http://resinserver/;
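For context, a minimal server block using this upstream might look like the following sketch (the listen port and location are assumptions for illustration):

server {
    listen 80;

    location / {
        # forward all requests to the resinserver upstream defined above
        proxy_pass http://resinserver/;
    }
}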


The state of each device can be set as follows:
1. down: the current server temporarily does not participate in the load balancing.
2. weight: the default value is 1; the larger the weight, the larger the share of load.
3. max_fails: the number of allowed failed requests, 1 by default. When the maximum number is exceeded, the error defined by proxy_next_upstream is returned (see the sketch after this list).
4. fail_timeout: the pause time after max_fails failures.
5. backup: the backup machine is requested only when all other non-backup machines are down or busy, so this machine is under the least pressure.
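A sketch of how these parameters combine in one upstream; the addresses and values are illustrative assumptions:

upstream resinserver {
    # take this server out of rotation for 30s after 3 failed requests
    server 127.0.0.1:6801 weight=2 max_fails=3 fail_timeout=30s;
    # used only when all non-backup servers are down or busy
    server 127.0.0.1:6802 backup;
}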

Nginx supports defining multiple load-balancing (upstream) groups at the same time, for different server blocks to use, as sketched below.
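A minimal sketch with two independent upstream groups (the group names, addresses, and server_name values are assumptions):

upstream app_pool {
    server 192.168.1.10:8080;
    server 192.168.1.11:8080;
}

upstream static_pool {
    server 192.168.1.20:80;
    server 192.168.1.21:80;
}

server {
    listen 80;
    server_name app.example.com;
    location / {
        proxy_pass http://app_pool;
    }
}

server {
    listen 80;
    server_name static.example.com;
    location / {
        proxy_pass http://static_pool;
    }
}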

Setting client_body_in_file_only to on records the POST data sent by the client into files, which can be used for debugging.
client_body_temp_path sets the directory for those record files; up to three levels of subdirectories can be specified.
location matches URLs and can perform a redirection or a new proxy load-balancing pass.
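A sketch combining these directives (the temp path is an assumption):

server {
    listen 80;

    # record client POST bodies into files for debugging
    client_body_in_file_only on;
    # directory for the recorded bodies, plus optional subdirectory levels
    client_body_temp_path /var/tmp/nginx_client_body 1 2;

    location / {
        proxy_pass http://resinserver/;
    }
}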
