Nginx + Tomcat Load Balancing

Tags: nginx, load balancing

Nginx Load Balancing

A recent project involved high concurrency, so the framework design called for using Nginx to build a Tomcat cluster and then using Redis for a distributed session store. Below I share my exploration step by step.

Although Nginx is small, it is genuinely powerful: it supports reverse proxying, load balancing, data caching, URL rewriting, read/write splitting, and static/dynamic content separation. The following covers load-balancer configuration, combined with tests using Redis.


Nginx Load Balancing Scheduling Methods

The Nginx load-balancing (upstream) module mainly supports the following four scheduling algorithms:

1. Round robin (the default): each request is assigned to a different backend server in turn. If a backend server goes down, it is automatically removed from rotation, so user access is unaffected. The weight parameter sets the polling weight: the higher the value, the larger the share of requests the server receives. It is mainly used when backend servers have unequal performance.

2. ip_hash: each request is assigned according to a hash of the client IP, so requests from the same IP are pinned to one backend server. This effectively solves the session-sharing problem for web applications.

3. fair: this algorithm makes load-balancing decisions based on page size and load time, i.e. it assigns requests according to backend response time, with shorter response times given priority. Nginx does not ship with the fair module; to use this algorithm, you must download the upstream_fair module and compile it into Nginx.

4. url_hash: this algorithm assigns requests according to a hash of the requested URL, so each URL is directed to the same backend server, which can further improve backend cache efficiency. Nginx does not ship with this module either; to use it, you need to install the Nginx hash package and compile it into Nginx.
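As a rough sketch, the first two algorithms map onto upstream blocks like the following (the upstream names are made up for this illustration, and the backend addresses are the ones used in the test setup later in this article; fair and url_hash would additionally need the third-party modules mentioned above):

```nginx
# Default round robin; the optional weight parameter skews the distribution
upstream tomcat_rr {
    server 192.168.1.110:8080 weight=2;  # receives roughly twice as many requests
    server 192.168.1.111:8080 weight=1;
}

# ip_hash: requests from the same client IP always go to the same backend
upstream tomcat_iphash {
    ip_hash;
    server 192.168.1.110:8080;
    server 192.168.1.111:8080;
}
```

A location block would then point proxy_pass at http://tomcat_rr or http://tomcat_iphash, as the full configuration later in this article does with its own upstream name.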


Status Parameters Supported by the Nginx Upstream Module

In the http upstream block, the server directive specifies the IP address and port of each backend server, and can also set each server's state in load-balancing scheduling. The commonly used state parameters are:

1. down: the server temporarily does not participate in load balancing.

2. backup: a reserve server. It receives requests only when all other non-backup servers are down or busy, so it carries the lightest load.

3. max_fails: the number of failed requests allowed, 1 by default. Once this limit is exceeded, the error defined by the proxy_next_upstream directive is returned.

4. fail_timeout: how long the server is suspended after max_fails failures have occurred. max_fails is used together with fail_timeout.

Note: when the scheduling algorithm is ip_hash, backend servers cannot use the weight and backup parameters.
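To sketch how these parameters combine in one upstream block (the addresses and the extra backup/down servers are illustrative only, and weight and backup cannot appear together with ip_hash):

```nginx
upstream tomcat_cluster {
    server 192.168.1.110:8080 weight=1 max_fails=2 fail_timeout=30s;
    server 192.168.1.111:8080 weight=1 max_fails=2 fail_timeout=30s;
    server 192.168.1.112:8080 backup;  # used only when all non-backup servers fail
    server 192.168.1.113:8080 down;    # temporarily removed from rotation
}
```

With max_fails=2 fail_timeout=30s, a backend that fails twice is suspended for 30 seconds before Nginx tries it again.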


Nginx Parameter Configuration and Description

    

#user  nobody;
worker_processes  2;

error_log  logs/error.log;
error_log  logs/error.log  notice;
error_log  logs/error.log  info;

#pid  logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    gzip  on;
    gzip_min_length  1k;
    gzip_buffers     4 16k;
    gzip_http_version  1.0;
    gzip_vary  on;

    upstream andy {
        server 192.168.1.110:8080 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.1.111:8080 weight=1 max_fails=2 fail_timeout=30s;
        ip_hash;
    }

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        location /andy_server {
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://andy;  # the name here must match the upstream name defined above
            expires 3d;

            # The following settings may be omitted
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
        }

        error_page  404  /404.html;
        error_page  502 503 504  /50x.html;
        location = /50x.html {
            root  html;
        }
    }
}

Note: see the previous article for a detailed explanation of this configuration.


Nginx Load Balancing Test

Nginx is deployed on 192.168.1.110, with Tomcat servers deployed on 192.168.1.110 and 192.168.1.111.

1. Opening http://192.168.1.110/andy_server/ in the default (round-robin) mode, the Nginx cluster polls the servers, hitting a different one on each request.

(Screenshot omitted.)

This method cannot solve the cluster's session problem.

2. With ip_hash, refreshing always hits the same fixed server.

This method solves the session problem. If the 192.168.1.110 server goes down, Nginx transfers the request to a server that is still up (tested: after shutting down 192.168.1.110, the request jumps to 192.168.1.111). But a problem remains: when the server a client is hashed to goes down and Nginx transfers the request to another server, the session is naturally lost.


3. The remaining two algorithms require installing the corresponding Nginx modules and, like the above, were not tested here.

Summary

Whichever load-balancing method is used, the session-loss problem can occur. To solve it, sessions must be stored separately, whether in a database, in files, or on a distributed in-memory server; this is essential for building a cluster. The next article will test and resolve the session problem.



Copyright notice: this is the blogger's original article and may not be reproduced without the blogger's permission.

