Load balancing with Nginx + Tomcat


Our recent project has to handle significant concurrency, so when designing the architecture we decided to use Nginx to build a Tomcat cluster and Redis to store sessions in a distributed way. Below we share the exploration process step by step.

Although Nginx is small, it is very powerful: it supports reverse proxying, load balancing, data caching, URL rewriting, read/write splitting, dynamic/static content separation, and more. This article covers the load-balancing configuration; a follow-up will combine it with Redis.

Nginx load balancing scheduling method

Nginx load balancing is configured through the upstream module, which mainly supports the following four scheduling algorithms:

1. Round-robin (default): requests are distributed to the backend servers one by one in order of arrival. If a backend server goes down, it is automatically removed from rotation, so user access is unaffected. A weight parameter sets the round-robin weight: the larger the value, the higher the probability that requests are routed to that server. Weights are mainly used when backend server performance is uneven.

2. ip_hash: each request is allocated according to the hash of the client's IP address, so requests from the same IP address are always sent to the same backend server. Pinning clients to a fixed server is a simple way to work around session-sharing problems in web applications.

3. fair: this algorithm distributes load intelligently based on page size and load time, that is, it allocates requests according to backend response times, giving priority to servers that respond fastest. Nginx does not ship with the fair module; to use this algorithm you must download the third-party upstream_fair module, compile it into Nginx, and configure it in the upstream block.

4. url_hash: this algorithm distributes requests based on the hash of the requested URL, so that each URL is always directed to the same backend server, which can further improve the efficiency of backend caches. Nginx does not integrate this module by default; you must install the third-party hash package, compile it into Nginx, and load it.
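As a sketch, the four algorithms above correspond to upstream blocks like the following. The server IPs and pool names are placeholders, and the fair block assumes the third-party upstream_fair module has been compiled in; for url_hash, recent Nginx versions provide a built-in hash directive, while older versions need the third-party module:

```nginx
# Round-robin with weights (built in): .111 receives twice the traffic of .110.
upstream pool_rr {
    server 192.168.1.110:8080 weight=1;
    server 192.168.1.111:8080 weight=2;
}

# ip_hash (built in): the same client IP always reaches the same backend.
upstream pool_iphash {
    ip_hash;
    server 192.168.1.110:8080;
    server 192.168.1.111:8080;
}

# fair (requires the third-party upstream_fair module).
upstream pool_fair {
    fair;
    server 192.168.1.110:8080;
    server 192.168.1.111:8080;
}

# url_hash: Nginx 1.7.2+ ships the built-in "hash" directive shown here;
# older versions need the third-party hash module instead.
upstream pool_urlhash {
    hash $request_uri;
    server 192.168.1.110:8080;
    server 192.168.1.111:8080;
}
```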

Status parameters supported by the Nginx upstream module

In the http upstream block, the server directive specifies the IP address and port of a backend server, and can also set that server's status in load-balancing scheduling. The commonly used status parameters are:

1. down: the server does not currently participate in load balancing.

2. backup: a reserved backup server. Requests are sent to it only when all other non-backup servers fail or are busy, so this server carries the least load.

3. max_fails: the number of failed requests allowed; the default is 1. When this limit is exceeded, the error defined by the proxy_next_upstream directive is returned.

4. fail_timeout: the length of time the server is suspended after max_fails failures. max_fails is used together with fail_timeout.

Note: when the scheduling algorithm is ip_hash, the backend server status parameters cannot include weight or backup.
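As a hedged illustration (the addresses and the pool name tomcat_pool are placeholders), the status parameters above can be combined in a single upstream block like this:

```nginx
upstream tomcat_pool {
    # Marked failed after 2 errors within 30s, then retried after 30s.
    server 192.168.1.110:8080 weight=2 max_fails=2 fail_timeout=30s;
    server 192.168.1.111:8080 weight=1 max_fails=2 fail_timeout=30s;
    # Under maintenance: excluded from scheduling.
    server 192.168.1.112:8080 down;
    # Used only when all non-backup servers are down or busy.
    server 192.168.1.113:8080 backup;
}
```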

Nginx parameter configuration and description

#user  nobody;
worker_processes  2;

error_log  logs/error.log;
error_log  logs/error.log  notice;
error_log  logs/error.log  info;

#pid  logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  logs/access.log  main;

    sendfile  on;
    #tcp_nopush  on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    gzip               on;
    gzip_min_length    1k;
    gzip_buffers       4 16k;
    gzip_http_version  1.0;
    gzip_vary          on;

    upstream andy {
        server 192.168.1.110:8080 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.1.111:8080 weight=1 max_fails=2 fail_timeout=30s;
        ip_hash;
    }

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        location /andy_server {
            proxy_next_upstream  http_502 http_504 error timeout invalid_header;
            proxy_set_header     Host $host;
            proxy_set_header     X-Real-IP $remote_addr;
            proxy_set_header     X-Forwarded-For $proxy_add_x_forwarded_for;
            # the name after proxy_pass must match the name defined in upstream
            proxy_pass           http://andy;
            expires              3d;

            # the following settings can be omitted
            client_max_body_size        10m;
            client_body_buffer_size     128k;
            proxy_connect_timeout       90;
            proxy_send_timeout          90;
            proxy_read_timeout          90;
            proxy_buffer_size           4k;
            proxy_buffers               4 32k;
            proxy_busy_buffers_size     64k;
            proxy_temp_file_write_size  64k;
        }

        error_page  404              /404.html;
        error_page  500 502 503 504  /50x.html;
        location = /50x.html {
            root  html;
        }
    }
}

Note: For detailed configuration explanation, see the previous article.

Nginx load balancing test

Nginx is deployed on 192.168.1.110, and Tomcat is deployed on both 192.168.1.110 and 192.168.1.111.

1. When http://192.168.1.110/andy_server/ is opened, the Nginx cluster uses the default round-robin mode, and each request polls the servers in turn.


This method does not solve the cluster session problem.

2. When ip_hash is used, refreshing always hits the same fixed server.

This method works around the session problem. If the 192.168.1.110 server goes down, Nginx forwards the request to a server that is still up (in testing, after shutting down the 192.168.1.110 server, requests were routed to the 192.168.1.111 server). However, there is still a problem: when the server a client is hashed to goes down and Nginx transfers the request to another server, the session is naturally lost.

3. The remaining two algorithms require installing additional Nginx modules; testing them proceeds the same way as above, so it is not repeated here.

Summary

No matter which load-balancing method is used, session loss can occur. To solve this problem, sessions must be stored separately, whether in a database, a file, or a distributed in-memory store. In the next article, we will test and solve the session problem.

Copyright Disclaimer: This article is an original article by the blogger and cannot be reproduced without the permission of the blogger.

