Using Nginx hash strategies rationally for more meaningful load balancing

Preface: Many Web applications and Web interfaces today place a server container, such as Nginx or Apache, at the entrance of the service to listen on a port and forward requests. The server container is critical to the whole Web service: it manages service processes, acts as a proxy, pre-processes requests, and performs load balancing. The focus of today's discussion is how to use Nginx's hash strategies rationally to achieve more meaningful load balancing across a server cluster.
Overview: When a service is backed by a single server, there is no concept of load balancing at all. Load balancing only comes into play when a service is backed by multiple servers (that is, a server cluster). Load balancing, as the name implies, is a policy that prevents one server from being overloaded while others sit idle; it keeps the load on servers providing the same service roughly equal. Put plainly, when a client initiates a request, the load balancer forwards it to one of the upstream servers for processing according to a pre-set policy. As shown in the figure:
Load balancing is a very mature technology. Polling (round-robin) the backend servers, hashing the client's request IP, and assigning weights to backend servers are among the most common strategies, and they will not be repeated in detail here. It is not reasonable to apply a load-balancing strategy blindly: Nginx uses round-robin by default, which is inefficient in some scenarios.
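As a rough illustration of the two default strategies mentioned above, here is a Python sketch (this is not Nginx source code; the server addresses and weights are made up for demonstration) of how round-robin and weighted round-robin distribute requests:

```python
# Illustrative simulation of round-robin and weighted round-robin
# upstream selection. Addresses and weights are hypothetical.
from itertools import cycle

servers = ["192.168.128.1:8080", "192.168.128.2:8080", "192.168.128.3:8080"]

# Plain round-robin: each request simply goes to the next server in turn.
rr = cycle(servers)
picks = [next(rr) for _ in range(6)]   # two full cycles over three servers

# Weighted round-robin: over one full cycle, a server with weight 2
# receives twice as many requests as a server with weight 1.
weights = {"192.168.128.1:8080": 2, "192.168.128.2:8080": 1}
schedule = [s for s, w in weights.items() for _ in range(w)]
```

Neither policy considers who the client is or what resource is requested, which is exactly the limitation the hash strategies below address.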
More meaningful load balancing: Today's focus is two common hash-based load-balancing strategies and their corresponding usage scenarios.

1. ip_hash (hash the client's request IP, then select a backend server by the hash value): When a specific URL path on your server is accessed repeatedly by the same user, a round-robin policy spreads that user's requests across every server, which is clearly inefficient (for example, multiple HTTP connections must be established). Consider an extreme case: a user uploads a file in shards that the server must then merge. If the user's requests arrive at different servers, the shards end up in different servers' directories and can never be merged. Scenarios like these can use the ip_hash strategy provided by Nginx. It guarantees that every request from a given user reaches the same server, while still balancing load across different users. The configuration is as follows:
upstream backend {
    ip_hash;
    server 192.168.128.1:8080;
    server 192.168.128.2:8080;
    server 192.168.128.3:8080 down;
    server 192.168.128.4:8080 down;
}
server {
    listen 8081;
    server_name test.csdn.net;

    root /home/system/test.csdn.net/test;

    location ^~ /upload/upload {
        proxy_pass http://backend;
    }
}
The above is a minimal Nginx service listening on port 8081; requests whose URL matches /upload/upload take the ip_hash strategy. upstream is Nginx's load-balancing module. Here it is configured with the ip_hash policy and four machines, the last two marked with the down keyword, indicating that they are offline.
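To make the idea concrete, here is a hedged Python sketch of ip_hash-style selection (this mimics, but is not, Nginx's internal algorithm; Nginx documents that for IPv4 it hashes the first three octets of the client address, and the addresses below are illustrative):

```python
# Illustrative simulation of ip_hash: requests from the same client IP
# (and its /24 network, as in Nginx's documented IPv4 behavior) always
# reach the same upstream server. Addresses are hypothetical.
import hashlib

# The two servers marked "down" in the config above are excluded.
servers = ["192.168.128.1:8080", "192.168.128.2:8080"]

def pick_server(client_ip: str) -> str:
    # Hash only the first three octets, so clients behind the same
    # class-C network stick to one server.
    network = ".".join(client_ip.split(".")[:3])
    digest = hashlib.md5(network.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same client always hits the same backend, so all upload shards
# land in one server's directory and can be merged there.
assert pick_server("10.0.0.7") == pick_server("10.0.0.7")
# Clients in the same /24 also map to the same backend.
assert pick_server("10.0.0.7") == pick_server("10.0.0.99")
```

The selection is deterministic per client, which is exactly what the shard-merging scenario needs, while different networks still spread across the cluster.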

2. url_hash (hash the request URL, then select a backend server by the hash value): Generally speaking, url_hash is used to improve cache hit rates. An example I encountered: server cluster A needs to serve file downloads, but because the volume of uploaded files is too large to keep on local disk, the files are stored in third-party cloud storage. After cluster A receives a client request, it downloads the file from cloud storage and returns it; to save unnecessary network bandwidth and download time, cluster A keeps a temporary cache (for one month). Because it is a cluster, multiple requests for the same resource may reach different servers, causing unnecessary repeated downloads, a low cache hit ratio, and wasted time and resources. In such a scenario, the url_hash policy is appropriate: identical URLs (that is, requests for the same resource) reach the same machine, so once a resource has been cached, subsequent requests for it can be served from the cache, reducing both bandwidth use and download time. The configuration is as follows:
upstream somestream {
    hash $request_uri;
    server 192.168.244.1:8080;
    server 192.168.244.2:8080;
    server 192.168.244.3:8080;
    server 192.168.244.4:8080;
}
server {
    listen       8081 default;
    server_name  test.csdn.net;
    charset      utf-8;

    location /get {
        proxy_pass http://somestream;
    }
}
The above is likewise a minimal Nginx service listening on port 8081; when the request URL matches /get, the url_hash strategy applies. The upstream module is configured in the same way; hash $request_uri means the backend is chosen by hashing the request URL.
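To illustrate why routing the same URL to the same machine raises the cache hit rate, here is a small Python sketch (an approximation, not Nginx's actual hash function; the file name and addresses are invented):

```python
# Illustrative simulation of url_hash with a per-server cache: identical
# URLs map to one server, so its local cache of the cloud-stored file is
# reused instead of re-downloading. All names are hypothetical.
import hashlib

servers = ["192.168.244.1:8080", "192.168.244.2:8080",
           "192.168.244.3:8080", "192.168.244.4:8080"]
caches = {s: set() for s in servers}   # each server's temporary cache

def fetch(request_uri: str) -> str:
    digest = hashlib.md5(request_uri.encode()).hexdigest()
    server = servers[int(digest, 16) % len(servers)]
    if request_uri in caches[server]:
        return "cache hit on " + server
    caches[server].add(request_uri)    # download from cloud storage once
    return "downloaded via " + server

first = fetch("/get?file=report.pdf")
second = fetch("/get?file=report.pdf")  # same URL -> same server -> hit
```

Under round-robin the second request would likely land on a different server with a cold cache; under url_hash it is guaranteed to reach the server that already holds the file.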
Summary: This article has focused on the use of ip_hash and url_hash and their basic configuration. In addition, Nginx configuration is flexible: different location blocks can use different strategies, making the overall service policy more reasonable. I hope this article brings you some help.

