Summary of Nginx Bandwidth Control Methods



There is an old project that serves file downloads through Squid and uses delay_parameters for bandwidth control. The problem is that I don't want to keep maintaining Squid, so I looked into whether Nginx offers similar functionality.
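For reference, bandwidth shaping of that kind in Squid is configured through delay pools. A minimal sketch of that style of squid.conf (the pool class and rate values here are illustrative, not the old project's actual settings):

delay_pools 1                     # one delay pool
delay_class 1 1                   # pool 1 uses class 1 (a single aggregate bucket)
delay_parameters 1 51200/51200    # refill 50 KB/s, bucket size 50 KB (illustrative values)
delay_access 1 allow all          # apply pool 1 to all clients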


The good news is that Nginx provides limit_rate and limit_rate_after. For example:

location /download/ {
    limit_rate_after 500k;
    limit_rate 50k;
}
This means that once a download has transferred its first 500 KB, the speed of that connection is throttled to 50 KB/s.
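A quick way to observe the effect is to watch curl's transfer meter (the URL below is only a placeholder for a file served from this location): the first 500 KB arrive at full speed, after which the average rate settles toward 50 KB/s.

shell> curl -o /dev/null http://example.com/download/test.bin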

The bad news is that the limit applies per connection. In other words, you can only limit the bandwidth of a single connection, not the total bandwidth. However, the limit_conn module can alleviate the problem to some extent:

limit_conn_zone $server_name zone=servers:10m;

server {
    location /download/ {
        limit_conn servers 1000;
        limit_rate_after 500k;
        limit_rate 50k;
    }
}
By capping the number of concurrent connections with limit_conn, the total bandwidth is limited as well. Unfortunately, this solution is not perfect. Consider the following example: if 1000 users can each download at 50 KB/s at the same time, then, with the total bandwidth unchanged, shouldn't 2000 users be able to download at 25 KB/s at the same time? From a business perspective the answer is certainly yes, but limit_conn and limit_rate are simply not flexible enough to express that kind of logic.

Of course, the problem can still be solved, for example with the third-party limit_speed module, or with the Linux built-in tc command. limit_speed is relatively simple (a sketch follows the tc example below), so let's first take a look at how tc is used:

shell> tc qdisc add dev eth0 root handle 1: htb default 10
shell> tc class add dev eth0 parent 1: classid 1:1 htb rate 10mbit
shell> tc filter add dev eth0 protocol ip parent 1:0 prio 1 \
           u32 match ip dport 80 0xffff flowid 1:1
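As for the third-party limit_speed module mentioned above, here is a minimal sketch, assuming the directives of the widely used nginx_limit_speed_module (limit_speed_zone / limit_speed); the zone key and rate are only illustrative. Unlike limit_rate, it caps the combined bandwidth of all connections that share the same key, so keying the zone on $server_name limits the whole server:

limit_speed_zone downloads $server_name 10m;

server {
    location /download/ {
        limit_speed downloads 50m;   # total rate shared by every connection to this server name
    }
}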

 

This article has introduced several Nginx access-restriction modules. The limit_req module is also excellent; although it has little to do with bandwidth limiting, it is worth understanding. For details, refer to "nginx limit_req speed limit setting".

# Define three buckets keyed by the client's binary IP address, with leak rates of 1-3 req/s.
# Each zone takes 1 MB of shared memory; 1 MB can keep state for roughly 16,000 IP addresses.
limit_req_zone $binary_remote_addr zone=qps1:1m rate=1r/s;
limit_req_zone $binary_remote_addr zone=qps2:1m rate=2r/s;
limit_req_zone $binary_remote_addr zone=qps3:1m rate=3r/s;

server {

    # qps = 1, burst peak = 5, requests are delayed
    # Requests are processed at the bucket's leak rate, qps = 1
    # Concurrent requests within the burst peak of 5 are queued and processed with a delay
    # Requests beyond that limit are rejected with 503
    # As long as the client keeps its concurrency within the burst peak, limit_req_error_log is not triggered
    # Example 1: a burst of 6 concurrent requests: 1 rejected, 1 processed, 4 queued for delayed processing:
    # time   request  refuse  success  delay
    # 00:01  6        1       1        4
    # 00:02  0        0       1        3
    # 00:03  0        0       1        2
    # 00:04  0        0       1        1
    # 00:05  0        0       1        0
    location /delay {
        limit_req zone=qps1 burst=5;
    }

    # qps = 1, burst peak = 5, requests are not delayed
    # With nodelay, the constraint becomes the average qps over a time window = the bucket's leak rate,
    # which allows the instantaneous peak qps to exceed the leak qps
    # So the highest peak qps = (burst + qps - 1) = 5
    # Requests are never delayed: they are either processed immediately or rejected with 503
    # The client avoids triggering limit_req_error_log only by keeping its average request rate within qps
    # Example 2: a burst of 5 concurrent requests every five seconds; because the average qps over the window is 1, the leak rate is still satisfied:
    # time   request  refuse  success
    # 00:01  5        0       5
    # 00:06  5        0       5
    # 00:11  5        0       5
    # Example 3: 5 concurrent requests every second; requests over the limit are rejected because the average qps over the window exceeds 1:
    # time   request  refuse  success
    # 00:01  5        0       5
    # 00:02  5        4       1
    # 00:03  5        4       1

    location /nodelay {
        limit_req zone=qps1 burst=5 nodelay;
    }

}
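To try Example 1 in practice, a load tool such as ApacheBench can fire the burst (the host below is just a placeholder for a server carrying this config). Against /delay, one of the six requests should come back 503 while the rest trickle through at 1 req/s; against /nodelay, up to five are answered immediately and the remainder are rejected:

shell> ab -n 6 -c 6 http://127.0.0.1/delay
shell> ab -n 6 -c 6 http://127.0.0.1/nodelay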
