Client
  |
  v
[LVS]            LVS concurrent access volume: 4 million
  |
  v
Nginx
  |
  +---------------+---------------+
  |               |               |
squid           apache          tomcat
LB-Nginx addresses:
Client:
    CIP: 202.106.0.56
Director-Nginx:
    VIP: 202.106.0.254
    DIP: 192.168.0.1
Director configuration:
1. Install nginx:
cd /etc/nginx/conf.d
vim rip.conf

upstream squids {                      # RealServers (squid pool)
    server 192.168.0.254:3128;
    server 192.168.0.208:3128;
    server 192.168.0.39:3128;
}
upstream apaches {
    server 192.168.0.14:80;
    server 192.168.0.231:80;
    server 192.168.0.49:80;
}
upstream tomcats {
    server 192.168.0.119:8080;
    server 192.168.0.188:8080;
    server 192.168.0.194:8080;
}
2. vim /etc/nginx/nginx.conf

location / {
    root  /usr/share/nginx/html;
    index index.html index.htm;

    if ($request_uri ~* ".*\.html$") {
        proxy_pass http://squids;
    }
    if ($request_uri ~* ".*\.php$") {
        proxy_pass http://apaches;
    }
    if ($request_uri ~* ".*\.jsp$") {
        proxy_pass http://tomcats;
    }
}
RealServer configuration (squid):
vim squid.conf

http_access allow all
http_port 3128 vhost
cache_peer 192.168.0.x parent 80 0 no-query originserver
# cache_peer 192.168.0.xx parent 80 0 no-query originserver round-robin weight=n
RealServer configuration (apache):
1. Configure the IP address.
2. Prepare an index.html and a test.php.

RealServer configuration (tomcat):
Prepare an index.jsp.
Supplement:
Routing by client IP in Nginx with the geo module. Here, $liu defaults to 1 for every client except 192.168.0.247, so requests from all other addresses are proxied to the apaches upstream:

geo $liu {
    default           1;
    192.168.0.247/32  0;
}
if ($liu) {
    proxy_pass http://apaches;
}
Test:
The default open-file limit is 1024.
ulimit -n            # view the current limit
ulimit -n 10000      # raise it for the current session
To make the change permanent:
vim /etc/security/limits.conf
*    hard    nofile    10000
ab -n 10000 -c 5000 http://localhost/
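The soft/hard limit distinction behind ulimit and limits.conf can be illustrated with Python's resource module (a Linux illustration, not part of the original setup): an unprivileged process may raise its soft limit up to the hard limit, while raising the hard limit itself requires root or limits.conf.

```python
import resource

# Current soft/hard limits on open file descriptors (RLIMIT_NOFILE).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# An unprivileged process may raise its soft limit up to the hard limit.
if hard == resource.RLIM_INFINITY:
    target = 10000
else:
    target = min(10000, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))

new_soft, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
```

This only affects the current process and its children, which is why the limits.conf entry above is needed for a permanent change.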
Based on my own practice, here are three Nginx anti-leeching (hotlink protection) methods to save your bandwidth.
I. General anti-leech protection:

location ~* \.(gif|jpg|png|swf|flv)$ {
    valid_referers none blocked www.ingnix.com;
    if ($invalid_referer) {
        rewrite ^/ http://www.ingnix.com/return.html;
        # return 404;
    }
}
Line 1: gif|jpg|png|swf|flv enables anti-leech protection for gif, jpg, png, swf, and flv files.
Line 2: valid_referers accepts requests with no Referer, with a scheme-stripped Referer, or coming from www.ingnix.com.
The if {} block means that any request whose Referer is not on that whitelist is redirected to http://www.ingnix.com/return.html; of course, you can also simply return 404.
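The valid_referers whitelist above covers three cases: none (no Referer header), blocked (a Referer present but stripped of its scheme by a firewall or proxy), and an explicit host. A simplified Python sketch of that decision, for illustration only (not nginx's actual implementation):

```python
def referer_allowed(referer, allowed_hosts):
    """Approximate nginx valid_referers semantics (simplified sketch)."""
    if not referer:
        return True  # 'none': request carries no Referer header at all
    low = referer.lower()
    if not (low.startswith("http://") or low.startswith("https://")):
        return True  # 'blocked': Referer present but scheme was stripped
    host = low.split("/")[2].split(":")[0]  # crude host extraction
    return host in allowed_hosts            # explicit host whitelist
```

Requests failing all three checks would hit the $invalid_referer branch and be redirected or rejected.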
II. Anti-leech protection for an image directory:

location /images/ {
    alias /data/images/;
    valid_referers none blocked server_names *.xok.la xok.la;
    if ($invalid_referer) { return 403; }
}
III. Using the third-party module ngx_http_accesskey_module to implement Nginx anti-leech protection.
The implementation is as follows:
1. Download the NginxHttpAccessKeyModule file nginx-accesskey-2.0.3.tar.gz.
2. Unzip it, find the config file under nginx-accesskey-2.0.3, and edit it: replace "$HTTP_ACCESSKEY_MODULE" with "ngx_http_accesskey_module".
3. Recompile nginx with the following parameters:
./configure --add-module=path/to/nginx-accesskey
4. Modify the nginx conf file and add the following lines:

location /download {
    accesskey            on;
    accesskey_hashmethod md5;
    accesskey_arg        "key";
    accesskey_signature  "mypass$remote_addr";
}
Where:
accesskey is the module switch;
accesskey_hashmethod is the hash method, md5 or sha-1;
accesskey_arg is the name of the key parameter in the URL;
accesskey_signature is the value to hash, here a string made up of mypass and the client IP.
Access test script download.php:

<?php
$ipkey = md5("mypass" . $_SERVER['REMOTE_ADDR']);
$output_add_key = "<a href=\"http://www.inginx.com/download/G3200507120520LM.rar?key=" . $ipkey . "\">download_add_key</a><br />";
$output_org_url = "<a href=\"http://www.inginx.com/download/G3200507120520LM.rar\">download_org_path</a><br />";
echo $output_add_key;
echo $output_org_url;
?>
The first link, download_add_key, downloads normally; the second link, download_org_path, returns a 403 Forbidden error.
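The key the module expects is simply the MD5 of the accesskey_signature string with $remote_addr substituted, which is exactly what download.php computes. The same computation in Python, using the CIP from the diagram above as an illustrative client address:

```python
import hashlib

def access_key(secret, client_ip):
    # accesskey_signature "mypass$remote_addr" -> md5(secret + client_ip)
    return hashlib.md5((secret + client_ip).encode()).hexdigest()

key = access_key("mypass", "202.106.0.56")
url = "http://www.inginx.com/download/G3200507120520LM.rar?key=" + key
```

A request carrying this key in its "key" argument is served; any other key value (or none) is rejected, because the server recomputes the hash from the requester's own IP.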
Nginx 502 error: "upstream sent too big header while reading response header from upstream"

View the error log:
sudo gedit /var/log/nginx/error.log
upstream sent too big header while reading response header from upstream
Searching for this error, the explanations online are nearly all the same: the headers (usually carried by cookies) are too large, so set:

fastcgi_buffer_size 128k;
fastcgi_buffers 8 128k;

Going a step further: fastcgi_buffers 32 32k is better than fastcgi_buffers 8 128k, because buffer memory is allocated and released in whole units, so keep the per-unit size (the k number) as small as practical.
In addition, if you use nginx for load balancing, modifying the fastcgi_* parameters is useless; you need to configure the buffers on the proxy (forwarding) side, for example:
location @to_other {
    proxy_buffer_size       128k;
    proxy_buffers           32 32k;
    proxy_busy_buffers_size 128k;
    add_header X-Static transfer;
    proxy_redirect off;
    proxy_set_header Host            $host;
    proxy_set_header X-Real-IP       $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://backend;       # request forwarding
}
The three proxy_buffer* lines are the ones that matter.
fastcgi_* applies when nginx itself answers client requests through FastCGI; proxy_* applies when nginx forwards requests to another backend. If a header exceeds the default buffer size of 1k, the "upstream sent too big header" error above is triggered.
Other search results can be ignored.
location ~ \.php$ {
    fastcgi_buffer_size 128k;
    fastcgi_buffers 32 32k;
    include /etc/nginx/fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME /host/web/$fastcgi_script_name;
}
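The earlier advice that 32 buffers of 32k beat 8 buffers of 128k is not about total memory: both configurations reserve the same 1 MB per active request. The gain is granularity, since a small response wastes at most one small unit instead of one large one. A quick check of the arithmetic:

```python
# Total buffer space per request is count * size in both cases (KB).
total_8x128 = 8 * 128
total_32x32 = 32 * 32

# A 40 KB response occupies two 32 KB buffers (64 KB) in the fine-grained
# configuration, but one whole 128 KB buffer in the coarse one.
used_fine   = ((40 + 31) // 32) * 32
used_coarse = ((40 + 127) // 128) * 128
```

So for mostly small responses, the smaller unit size leaves more buffer memory free for concurrent requests.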
Related article: Nginx + FastCGI 502 errors have many causes.
For example, our website recently hit frequent 502 errors and could not serve pages. Most such problems are caused by php-cgi timing out without returning anything, or by dead processes.
We consulted Zhang Yan's write-up on fixing 502 errors and asked our system administrator; our nginx had already been tuned to the limit, and these changes had been made earlier, yet the errors appeared again.
After analysis, nginx's error log was opened, and the message "upstream sent too big header while reading response header from upstream" was found.
After checking, the problem came from the nginx buffer settings; the pages on our site are probably too large. Following the fix described above, the buffer size settings were added and the 502 problem was completely solved. Later the system administrator trimmed the parameters, keeping only two: the client header buffer and the fastcgi buffer size.

This article is from the "Linux cultivation path" blog.