I. Hello World
1. Prepare the environment
Prepare two unzipped copies of Tomcat. For how to run two Tomcat instances at the same time, see my other article, "Starting Multiple Tomcat Instances on One Machine."
Download the zip version of Nginx from the official Nginx website and unzip it.
Create a simple web project. To visually distinguish which Tomcat is being accessed, mark 8081 and 8082 on the respective pages.
Deploy the project to each Tomcat instance.
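Since both Tomcat copies run on one machine, each copy needs its own ports. A minimal sketch of the relevant conf/server.xml changes for the second copy (the port numbers are assumptions matching the 8081/8082 labels above; the shutdown and AJP ports must also be unique per copy):

```xml
<!-- conf/server.xml of the second Tomcat copy (sketch; adjust to your setup) -->
<Server port="8006" shutdown="SHUTDOWN">      <!-- unique shutdown port -->
  <Service name="Catalina">
    <!-- HTTP port: 8081 for the first copy, 8082 for the second -->
    <Connector port="8082" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    <!-- unique AJP port -->
    <Connector port="8010" protocol="AJP/1.3" redirectPort="8443" />
    ...
  </Service>
</Server>
```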
2. Configure Nginx
Go to the nginx-1.10.1\conf directory and edit the configuration file nginx.conf.
1. Configure the server group by adding an upstream block inside the http{} block. (Be careful not to write localhost here, otherwise access will be slow.)

    upstream nginxDemo {
        server 127.0.0.1:8081;   # server address 1
        server 127.0.0.1:8082;   # server address 2
    }
2. Change the port Nginx listens on from the default 80 to 8080.
    server {
        listen 8080;
        ......
    }
3. In the location{} block, use proxy_pass to configure the reverse proxy address. The "http://" prefix must not be omitted, and the name after it must match the upstream name defined in step 1.
    location / {
        root html;
        index index.html index.htm;
        proxy_pass http://nginxDemo;   # reverse proxy address
    }
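Putting the three changes together, the relevant part of nginx.conf looks roughly like this (a sketch under the assumptions above, not a complete configuration file):

```nginx
http {
    # server group from step 1
    upstream nginxDemo {
        server 127.0.0.1:8081;   # server address 1
        server 127.0.0.1:8082;   # server address 2
    }

    server {
        listen 8080;             # port changed in step 2
        server_name localhost;

        location / {
            root html;
            index index.html index.htm;
            proxy_pass http://nginxDemo;   # reverse proxy from step 3
        }
    }
}
```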
3. Start Nginx and Tomcat and access the page
I am on Windows, so I simply double-click nginx.exe in the nginx-1.10.1 directory.
The nginx.exe process can then be seen in Task Manager.
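If you prefer the command line to double-clicking, Nginx can also be controlled from a prompt opened in the nginx-1.10.1 directory (these are the standard nginx signals; on Linux, run ./nginx instead of start nginx):

```
start nginx          # start Nginx (Windows)
nginx -t             # test the configuration file for syntax errors
nginx -s reload      # reload the configuration without stopping
nginx -s stop        # fast shutdown
```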
Finally, enter http://localhost:8080/nginxDemo/index.jsp in the browser; each visit is routed to the two Tomcat instances in turn. (If refreshing with F5 does not work, try placing the cursor in the address bar and pressing Enter.)
With that, a very simple load balancer is configured. Isn't it simple? o(∩_∩)o
II. Nginx Load Balancing Strategies
1. Round robin (default)
Each web request is assigned to a different backend server in order; if a backend server goes down, it is automatically removed from the rotation.
    upstream nginxDemo {
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
    }
2. Least connections (least_conn)
Web requests are forwarded to the server with the fewest active connections.
    upstream nginxDemo {
        least_conn;
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
    }
3. Weight
Specifies the polling probability: the access ratio is proportional to each server's weight. This is used when the performance of the backend servers is uneven. weight defaults to 1.
    # Servers A and B receive requests at a ratio of 2:1; e.g. of 3 requests,
    # the first two go to A and the third to B. Otherwise the rules are the
    # same as round robin.
    upstream nginxDemo {
        server 127.0.0.1:8081 weight=2;   # server A
        server 127.0.0.1:8082;            # server B
    }
4. ip_hash
Each request is assigned according to a hash of the client IP, so consecutive web requests from the same client are handled by the same server, which solves the session-sharing problem. If that backend server goes down, requests automatically fail over to another server.
    upstream nginxDemo {
        ip_hash;
        server 127.0.0.1:8081 weight=2;   # server A
        server 127.0.0.1:8082;            # server B
    }
Weight-based and ip_hash-based load balancing can be combined, as shown above.
5. url_hash (third party)
url_hash is a third-party Nginx module; Nginx does not support it out of the box, so it has to be patched in.
Nginx distributes requests by hashing the accessed URL, directing each URL to the same backend server, which is most effective when the backend servers are cache servers, file servers, or static servers. The disadvantage is that when a backend server goes down, url_hash does not fail over to another cache server but returns a 503 error to the user.
    upstream nginxDemo {
        server 127.0.0.1:8081;   # server A
        server 127.0.0.1:8082;   # server B
        hash $request_uri;
    }
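Note that since version 1.7.2 the open-source Nginx ships a built-in hash directive in ngx_http_upstream_module, so the third-party module is only needed on older versions. With the consistent parameter it uses ketama consistent hashing, which reduces how many keys are remapped when servers are added or removed:

```nginx
upstream nginxDemo {
    hash $request_uri consistent;   # built-in since Nginx 1.7.2
    server 127.0.0.1:8081;
    server 127.0.0.1:8082;
}
```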
6. fair (third party)
Requests are assigned according to the backend servers' response times; servers with shorter response times are given priority.
    upstream nginxDemo {
        server 127.0.0.1:8081;   # server A
        server 127.0.0.1:8082;   # server B
        fair;
    }
That completes the Nginx + Tomcat load-balancing cluster configuration.