Reference http://blog.goyiyo.com/archives/1941
Nginx ships two rate-limiting modules: one is limit_conn_zone (formerly limit_zone), the other is limit_req_zone. Both can restrict connections, but what is the difference?
Literally, limit_req_zone's job is to limit a client's request frequency via the token-bucket principle (this module lets you limit the request rate for a given session, or for all requests from a single address).
limit_conn_zone, on the other hand, limits the number of concurrent connections from a client (this module limits the number of simultaneous connections for a given session, or from a single address).
So one limits concurrent connections and the other limits connection frequency. On the surface they look similar, so let's see the actual effect.
I added both directives on my test machine; below is part of my configuration.
Test: first add these two directives in the nginx.conf configuration file:
http {
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    #limit_req_zone $binary_remote_addr zone=req_one:10m rate=1r/s;
Then configure it in the corresponding server block:
server {
    limit_conn  perip 1;
    #limit_req  zone=req_one burst=120;
    listen      192.168.127.129:81;
    server_name www.123.com;
    index       index.html index.htm;
    root        /usr/local/nginx/html/;
}
An explanation of limit_conn_zone $binary_remote_addr zone=perip:10m;
$binary_remote_addr is a variable used in place of $remote_addr (the binary form is shorter, so each state entry uses less memory), and 10m is the size of the shared memory zone that stores session state.
limit_conn perip 1 limits each client to 1 concurrent connection.
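Putting the pieces above together, a cleaned-up minimal sketch of the connection-limit configuration might look like this (the IP, port, and paths are simply the article's test values):

```nginx
http {
    # one shared 10 MB zone keyed by the client's binary address
    limit_conn_zone $binary_remote_addr zone=perip:10m;

    server {
        listen      192.168.127.129:81;
        server_name www.123.com;
        root        /usr/local/nginx/html/;
        index       index.html index.htm;

        # at most 1 concurrent connection per client IP
        limit_conn perip 1;
    }
}
```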
Test the limit_conn_zone module first.
I found another machine to run the test with ab. The command format is:
ab -c 20 -t http://192.168.1.26/
Then check the nginx access log, access.log.
Look for status codes other than 200, such as 503 or 499.
If they appear, the configuration has taken effect.
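To count the status codes in the access log quickly, a small script along these lines can help. It assumes the default "combined" log format, where the status code is the 9th whitespace-separated field; the sample lines below are fabricated for illustration:

```python
from collections import Counter

def count_statuses(lines):
    """Count HTTP status codes from access-log lines in the default
    'combined' format, where the status is the 9th field (index 8)."""
    counts = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) >= 9:
            counts[fields[8]] += 1
    return counts

# Example with two fabricated log lines:
sample = [
    '1.2.3.4 - - [01/Jan/2016:00:00:00 +0800] "GET / HTTP/1.1" 200 612 "-" "ab"',
    '1.2.3.4 - - [01/Jan/2016:00:00:01 +0800] "GET / HTTP/1.1" 503 213 "-" "ab"',
]
print(count_statuses(sample))  # '200' and '503' each appear once
```

The same count can of course be done with awk in one line; the script form just makes the field position explicit.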
Note: looking at the log, it does not seem to hold things to exactly 1 concurrent connection (some readers told me this is because the test file itself is too small; I will retest when I have time). From the log you can see that, apart from a handful of 200s, the responses are basically all 503; most of the concurrent requests got 503.
I kept running ab for a while longer and found another situation.
Look at the current number of TCP connections:
# netstat -n | awk '/^tcp/ {++S[$NF]} END {for (a in S) print a, S[a]}'
########################################################################
Next, test limit_req_zone. The configuration file changes to:
http {
    limit_req_zone $binary_remote_addr zone=req_one:10m rate=1r/s;
}
Then configure it in the corresponding server block:
server {
    limit_req   zone=req_one burst=120;
    listen      192.168.127.129:81;
    server_name www.123.com;
    index       index.html index.htm;
    root        /usr/local/nginx/html/;
}
Then restart nginx.
A brief explanation: rate=1r/s means each address may issue only one request per second, following the token-bucket principle (one reader pointed out that nginx's actual mechanism is closer to the leaky-bucket principle). burst=120 means the bucket holds 120 tokens in total, and only 1 token is added per second;
once the 120 tokens are used up, the excess requests get a 503.
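The token-bucket behaviour described above can be sketched in a few lines. This is the classic textbook token bucket, not nginx's exact implementation (which, as noted, is closer to a leaky bucket with a queue); the numbers mirror rate=1r/s and burst=120:

```python
def simulate(arrivals, rate=1.0, burst=120):
    """Classic token bucket: the bucket starts full with `burst` tokens,
    refills at `rate` tokens/second, and a request is rejected (a 503 in
    nginx terms) when no token is available.
    arrivals: sorted request timestamps in seconds.
    Returns a list of (timestamp, accepted) tuples."""
    tokens = float(burst)   # bucket starts full
    last = 0.0
    results = []
    for t in arrivals:
        tokens = min(burst, tokens + (t - last) * rate)  # refill
        last = t
        if tokens >= 1:
            tokens -= 1
            results.append((t, True))    # request passes
        else:
            results.append((t, False))   # request would get 503
    return results

# 150 requests arriving in the same instant: the first 120 pass,
# the remaining 30 are rejected.
res = simulate([0.0] * 150)
print(sum(1 for _, ok in res if ok))  # 120
```

With only 1 token added per second, a burst of this size then needs up to two minutes before the bucket is full again.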
Test it.
ab -c 100 -t http://192.168.1.26/
After the test finishes, check your access.log again to see whether there are 503s or 499s; more precisely, whether there is anything other than the expected 200s.
Look at the current number of TCP connections:
netstat -n | awk '/^tcp/ {++S[$NF]} END {for (a in S) print a, S[a]}'
TIME_WAIT 51
FIN_WAIT1 5
ESTABLISHED 155
SYN_RECV 12
Although this makes nginx process only one request per second, many requests still sit in the queue waiting to be handled, and those waiting requests tie up TCP connections, as the output of the command above shows. With rate=1r/s and burst=120, a full queue takes up to 120 seconds to drain.
What if we change it to this?
limit_req zone=req_one burst=120 nodelay;
With nodelay, requests that exceed the burst size return 503 immediately instead of being queued.
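As a sketch, the nodelay variant of the server block could look like the following. The limit_req_status directive (available since nginx 1.3.15) is an optional extra not used in the article's test; it lets you return a different code, such as 429, instead of the default 503:

```nginx
server {
    listen      192.168.127.129:81;
    server_name www.123.com;
    root        /usr/local/nginx/html/;

    # reject, rather than queue, anything beyond the 120-request burst
    limit_req zone=req_one burst=120 nodelay;

    # optional (nginx >= 1.3.15): use 429 instead of the default 503
    limit_req_status 429;
}
```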
View the current TCP connection again
# netstat -n | awk '/^tcp/ {++S[$NF]} END {for (a in S) print a, S[a]}'
TIME_WAIT
FIN_WAIT1
SYN_SENT 7
FIN_WAIT2 1
ESTABLISHED
SYN_RECV 37
The number of connections is clearly lower than before.
From these tests I found that neither module imposes an absolute limit, but both already play a significant role in reducing concurrency and limiting connections. Which one to use in production, or whether to combine the two, depends on your own needs.
This article is from the "Drifting Away" blog; please keep this source link: http://825536458.blog.51cto.com/4417836/1811302