Nginx Series Links:
Nginx High-Performance Web Server Series (1): Introduction and Installation
www.cnblogs.com/maxtgood/p/9597596.html
Nginx High-Performance Web Server Series (2): Command Management
www.cnblogs.com/maxtgood/p/9597990.html
Nginx High-Performance Web Server Series (3): Version Upgrade
www.cnblogs.com/maxtgood/p/9598113.html
Nginx High-Performance Web Server Series (4): Configuration Files
www.cnblogs.com/maxtgood/p/9598333.html
Nginx High-Performance Web Server Series (5): Real-World Multi-Site Configuration
www.cnblogs.com/maxtgood/p/9598610.html
Nginx High-Performance Web Server Series (6): Load Balancing Configuration + Health Checks
www.cnblogs.com/maxtgood/p/9599068.html
Nginx High-Performance Web Server Series (7): Reverse Proxy
www.cnblogs.com/maxtgood/p/9599335.html
Nginx High-Performance Web Server Series (8): Log Analysis and Log Rotation
www.cnblogs.com/maxtgood/p/9599542.html
Nginx High-Performance Web Server Series (9): Daily Operations and Fault Resolution
www.cnblogs.com/maxtgood/p/9599752.html
Note: Original work. Reproduction is allowed, but please include a hyperlink to the original article, the author information, and this statement. Otherwise, legal liability will be pursued.
Nginx's strengths need no elaboration from me. When I first encountered Nginx I discovered how powerful it is, and I feel it is well worth keeping a record of Nginx's various features and pitfalls.
Friends interested in Nginx are welcome to study along and point out any errors or omissions promptly; you can @ me in the comments.
One: Nginx Load Balancing Configuration
In fact, the idea behind load balancing is simple and clear, but the many explanations of its principles found online can be overwhelming. Here I have drawn a simple, rough schematic, for reference only:
Explanation: as a lightweight, high-performance web server, Nginx is mainly used for two things.
The first is serving HTTP directly (in place of Apache; PHP then requires FastCGI processor support).
The second is load balancing as a reverse proxy server.
Because Nginx excels at handling concurrency, this use is now very common.
Of course, Apache's mod_proxy and mod_cache can also implement reverse proxying and load balancing across multiple app servers, but Apache cannot match Nginx's strength in handling concurrency.
Here is a hands-on load balancing + health check setup.
Edit the upstream section of the configuration file directly: vim /usr/local/nginx/conf/nginx.conf
http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    sendfile           on;
    keepalive_timeout  65;

    upstream worldcup {
        server 10.124.25.28:8001;
        server 10.124.25.29:8001;
    }
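The upstream block above only defines the backend pool; to actually route traffic to it, a server/location block must reference it via proxy_pass. A minimal sketch, where the listen port and location path are my assumptions and not from the original config:

```nginx
    server {
        listen       80;
        server_name  localhost;

        # Forward every request to the worldcup pool defined above;
        # by default nginx distributes requests round-robin.
        location / {
            proxy_pass http://worldcup;
            proxy_set_header Host      $http_host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
```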
As already mentioned in the configuration-file article, pay attention to the keyword following upstream. If you need a single server, on the same port, to load-balance different requests, you must define separate upstream blocks; a practical example is provided below.
A practical example, for reference only, without further explanation:
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request"';
    sendfile           on;
    keepalive_timeout  65;

    upstream keep_one {
        server 192.168.1.1:8080 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.1.2:8080 weight=1 max_fails=2 fail_timeout=30s;
    }
    upstream keep_two {
        server 192.168.1.3:8081 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.1.4:8081 weight=1 max_fails=2 fail_timeout=30s;
    }

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
        location /one {
            root   html;
            index  index.html index.htm;
            proxy_pass http://keep_one/;
            proxy_set_header Host              $http_host;
            proxy_set_header Cookie            $http_cookie;
            proxy_set_header X-Real-IP         $remote_addr;
            proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            client_max_body_size 300m;
        }
        location /two {
            root   html;
            index  index.html index.htm;
            proxy_pass http://keep_two/;
            proxy_set_header Host              $http_host;
            proxy_set_header Cookie            $http_cookie;
            proxy_set_header X-Real-IP         $remote_addr;
            proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            client_max_body_size 300m;
        }
    }
}
This provides two load-balanced paths on the same server, same IP, and same port.
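By default an upstream distributes requests round-robin; as noted later in this article, ip_hash and weight are the common alternative policies. A brief sketch of both, where the addresses are placeholders of my own:

```nginx
# Session-sticky: requests from the same client IP
# always hit the same backend.
upstream sticky_pool {
    ip_hash;
    server 192.168.1.1:8080;
    server 192.168.1.2:8080;
}

# Weighted round-robin: the first backend receives
# roughly three requests for every one to the second.
upstream weighted_pool {
    server 192.168.1.1:8080 weight=3;
    server 192.168.1.2:8080 weight=1;
}
```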
Two: Monitoring and health checking of Nginx load-balanced backends
In fact, the configuration file above already includes health check examples; the following describes two ways of doing health checks.
Method One:
When adding the upstream, append weight=1 max_fails=2 fail_timeout=30s directly after each ip:port;
### For example:
upstream fastdfs_tracker {
    server 192.168.1.1:8080 weight=1 max_fails=2 fail_timeout=30s;
    server 192.168.1.2:8080 weight=1 max_fails=2 fail_timeout=30s;
}
Explanation: weight is the configured weight; if a backend fails max_fails times within fail_timeout, it is temporarily removed from the balancing pool.
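This passive check only counts failed proxied requests; which errors count as a failure can be tuned with the standard proxy_next_upstream directive. A sketch under my own choice of values, using the fastdfs_tracker pool from the example:

```nginx
location / {
    proxy_pass http://fastdfs_tracker;
    # Treat connection errors, timeouts and 5xx responses as
    # failures and retry the request on the next backend.
    proxy_next_upstream error timeout http_500 http_502 http_503;
    proxy_next_upstream_tries 2;
}
```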
Method Two:
When defining the upstream, add a check line at the end.
### For example:
upstream test {
    # Load balancing configuration; the default policy is round-robin,
    # alternatives include ip_hash and weight.
    server 192.168.1.1:8080;
    server 192.168.1.2:8080;
    server 192.168.1.3:8080;
    check interval=3000 rise=2 fall=3 timeout=3000 type=http port=8080;
}
(Note: the check directive is provided by the third-party nginx_upstream_check_module, not by stock Nginx.)
Explanation: interval=3000: check every 3 seconds; rise=2: after 2 successful checks the backend node is marked up; fall=3: after 3 failed checks the backend node is marked down; timeout=3000: 3-second timeout; type=http: the check sends an HTTP request; port=8080: the port to check, which can be omitted and then defaults to the port in server 192.168.1.1:8080.
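For HTTP-type checks, the same module also lets you customize the probe request and expose a status page. A sketch assuming the nginx_upstream_check_module is compiled in; the /status path is an arbitrary choice of mine:

```nginx
upstream test {
    server 192.168.1.1:8080;
    check interval=3000 rise=2 fall=3 timeout=3000 type=http;
    # The probe request sent to each backend, and which
    # response codes count as "alive".
    check_http_send "HEAD / HTTP/1.0\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;
}

server {
    listen 80;
    # Web page showing the up/down state of every checked backend.
    location /status {
        check_status;
    }
}
```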
At this point, the most commonly used Nginx load balancing + health check setup is complete. Later articles in this series will also cover the equally common Nginx reverse proxy.