Load Balancing between Nginx and tomcat on Nginx server, nginxtomcat

Source: Internet
Author: User
Tags: nginx, server


This article explains how to use Nginx for reverse proxying and load balancing across a tomcat server cluster.

I. Load Balancing between nginx and tomcat

1. Create a file nginx-tomcat.conf in /usr/local/nginx/conf

File Content:

user nobody;
worker_processes 2;

events {
    worker_connections 1024;
}

http {
    # Configure a group of backend servers for upstream.
    # After a request is forwarded to the upstream, nginx sends it to one
    # server according to the balancing policy; i.e. this is the server
    # group used for load balancing.
    upstream tomcats {
        fair;
        server 121.42.41.143:8080;
        server 219.133.55.36;
    }

    server {
        listen      80;
        server_name 121.42.41.143;

        access_log logs/tomcat-nginx.access.log combined;

        # Reverse proxy settings: send all requests for / to tomcat on this machine
        location / {
            # root  html;
            # index index.html index.htm;

            # =========== proxy headers provided by Nginx ===========
            proxy_set_header X-Forwarded-Host   $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
            proxy_pass http://tomcats;
        }
    }
}

2. Use this configuration file to start nginx (stop any running nginx first, e.g. with /usr/local/nginx/sbin/nginx -s stop):

[root@iZ28b4kreuaZ bin]# /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx-tomcat.conf

II. Detailed description of the configuration file

worker_processes 2;

events {
    worker_connections 1024;
}

http {
    include      mime.types;
    default_type application/octet-stream;

    # Configure a group of backend servers for upstream.
    # After a request is forwarded to the upstream, nginx sends it to one
    # server according to the policy; i.e. this is the server group used
    # for load balancing.
    upstream backends {
        # ============ Balancing policies ============
        # (none)   round robin (load determined by weight)
        # ip_hash  binds a user to the server that handled the first request,
        #          via a hash algorithm; all later requests from that user go
        #          to the same server, unless that server fails.
        # ============ Third-party balancing policies ============
        # fair     automatically selects the server with the best response
        #          capability, based on each server's performance.
        # url_hash selects a server based on the url.

        # ============ Server list ============
        server 192.168.1.62:8080;
        server 192.168.1.63;

        # weight: the higher the weight, the greater the load
        #server 192.168.1.64 weight=5;

        # backup: a backup machine, used only when the non-backup machines are down
        server 192.168.1.64 backup;

        # down: marks a server that will not be accessed (for servers under
        # temporary maintenance)
        server 192.168.1.65 down;

        # max_fails:    the server is suspended after this many failures
        # fail_timeout: how long to wait before probing the server again
        server 192.168.1.66 max_fails=2 fail_timeout=60s;
    }

    server {
        listen      80;
        server_name localhost;

        #charset koi8-r;
        #access_log logs/host.access.log main;

        location / {
            root  html;
            index index.html index.htm;
        }

        # Reverse proxy settings: send all /proxy_test/ requests to tomcat
        # on this machine
        location /proxy_test/ {
            proxy_pass http://localhost:8080;
        }

        # Load-balancer settings: send all jsp requests to the server group
        # specified by upstream backends
        location ~ \.jsp$ {
            proxy_pass http://backends;

            # The real client IP
            proxy_set_header X-Real-IP $remote_addr;
            # The Host information from the request header
            proxy_set_header Host $host;
            # The proxy route information; exposing IP addresses here has
            # security risks
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # The protocol the real user used to access the site
            proxy_set_header X-Forwarded-Proto $scheme;

            # proxy_redirect default is the default value.
            # When the backend responds with 302, the Location host set by
            # tomcat is http://192.168.1.62:8080, because the request tomcat
            # received was sent by nginx, and the url nginx requested had host
            # http://192.168.1.62:8080.
            # With "default", nginx automatically replaces the Location host in
            # the response header with the host of the current user request.
            # Many online tutorials set this value to off, which disables the
            # replacement; the user's browser then jumps to
            # http://192.168.1.62:8080 after receiving the 302, directly
            # exposing the backend server to the browser.
            # Do not configure proxy_redirect unless you really need to.
        }

        # An example of url rewriting: when the browser requests /page.go,
        # the url is rewritten to /test/page.jsp
        location ~ \.go$ {
            rewrite ^(.*)\.go$ /test/$1.jsp last;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        #error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

 

III. fair policy installation

Fair policy: automatically selects the server with the best response capability, based on each server's performance. This policy is provided by a third-party module, so the module must be installed first.
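Once the module is compiled in (see the steps below), using it only requires the fair directive inside the upstream block. A minimal sketch, mirroring the server group from section I:

```nginx
# The fair directive comes from the third-party nginx-upstream-fair module;
# without the module compiled in, nginx will reject this configuration.
upstream tomcats {
    fair;
    server 121.42.41.143:8080;
    server 219.133.55.36;
}
```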

Installation Steps

1. Download gnosek-nginx-upstream-fair-a18b409.tar.gz

2. Decompress it: tar zxvf gnosek-nginx-upstream-fair-a18b409.tar.gz

3. Move the extracted directory to /usr/local and rename it nginx-upstream-fair.

4. Add this module to the nginx installation.

A. First go to the nginx-1.8.1 source directory and execute:

[root@iZ28b4kreuaZ nginx-1.8.1]# ./configure --prefix=/usr/local/nginx --add-module=/usr/local/nginx-upstream-fair/

B. Execute make to compile.

C. Enter nginx-1.8.1/objs/ and copy the newly built nginx binary over the original /usr/local/nginx/sbin/nginx binary:

[root@iZ28b4kreuaZ objs]# cp nginx /usr/local/nginx/sbin

D. Start Nginx.

IV. Session sharing in distributed server clusters

Problem: when a user logs on via the tomcat1 server, tomcat1 saves the user's login information; but when the proxy server assigns the user's next request to tomcat2/tomcat3, those servers cannot obtain the login information, and the user is forced to log on again. There are three solutions:

1. Lock each user's requests to the same server, so the session never needs to be shared between servers. This solution is simple, but lacks fault tolerance: once the server fails, the user's requests are allocated to another server, and the user must log on again.

Implementation: set the cluster policy to ip_hash:

upstream tomcats {
    ip_hash;
    # server entries as in the examples above
    server 192.168.1.62:8080;
    server 192.168.1.63:8080;
}

2. Session replication mode: when the session on any server changes, the change is broadcast to the other servers, and each server applies the change on receipt; this way the session exists on all servers at all times. Disadvantage: with many tomcat servers in the cluster, the broadcasts increase the network load and performance is low. Implementation:

A. Configure session broadcast in tomcat's server.xml:

<!-- Network-broadcast-based policy: when a session changes on one node, the
     other nodes synchronize the change by replication. Performance is low
     with many nodes or large data volumes. -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                  address="192.168.6.223" port="8080"/>
    </Channel>
</Cluster>

B. Add the <distributable/> tag to the web.xml of our distributed application.

<distributable/> declares that our application can run in a cluster environment.
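A minimal sketch of where the tag goes in web.xml (the display-name is a placeholder):

```xml
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <display-name>my-clustered-app</display-name>
    <!-- Marks the application as distributable, enabling session replication -->
    <distributable/>
</web-app>
```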

3. Create additional shared storage for session management. Generally a distributed cache technology such as redis or memcached is used; here we use memcached.

A. Install memcached: http://www.cnblogs.com/jalja/p/6121978.html

B. Session sharing principles of memcached

Sticky sharing: each user's requests are routed to the same tomcat, which keeps the session locally; memcached only stores a backup copy for failover.

Non-sticky sharing: no tomcat holds an authoritative copy; the session is loaded from and written back to memcached on each request, so any tomcat can serve the user.

C. Set up tomcat access to memcached (tomcat 7 is used)

1. Copy the jar packages to the tomcat/lib directory. Three kinds of jar packages are needed:

1) spymemcached.jar — the memcached java client

2) memcached-session-manager packages: memcached-session-manager-{version}.jar (the core package) and memcached-session-manager-tc{tomcat-version}-{version}.jar (the tomcat-specific package)

3) A serialization toolkit. Several options are available; if none is set, the JDK's built-in serialization is used. Other options include kryo, javolution, xstream, flexjson, etc. (msm-{tools}-serializer-{version}.jar, plus the packages of the tool itself). Generally, with third-party serialization tools the classes do not need to implement the Serializable interface.

D. Configure the Context and add the Manager MemcachedBackupSessionManager for session processing.
Context configuration lookup order:
1) conf/context.xml — global configuration, applies to all applications
2) conf/[enginename]/[hostname]/context.xml.default — global configuration, applies to all applications under the specified host
3) conf/[enginename]/[hostname]/[contextpath].xml — applies only to the application specified by contextpath
4) the application's META-INF/context.xml — applies only to that application
5) <Context> under <Host> in conf/server.xml — applies to the application specified by the Context's docBase
If you want session management to act only on a specific application, it is best to configure it via mode 3 or 4. If you want it to apply to all applications, use mode 1, 2, or 5.
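As a sketch of mode 3: for an application deployed at context path /shop on the default host, the configuration would live in a file like the following (the engine/host names are tomcat defaults; the /shop path is a hypothetical example):

```xml
<!-- conf/Catalina/localhost/shop.xml : applies only to the /shop application -->
<Context>
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
             memcachedNodes="n1:192.168.1.62:11211,n2:192.168.1.63:11211"
             failoverNodes="n1"/>
</Context>
```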

conf/context.xml configuration:

<?xml version="1.0" encoding="UTF-8"?>

<Context>

    <WatchedResource>WEB-INF/web.xml</WatchedResource>
    <WatchedResource>${catalina.base}/conf/web.xml</WatchedResource>

    <!-- Minimum sticky-session configuration -->
    <!-- className: the manager class name -->
    <!-- memcachedNodes: the memcached server nodes, in the form
         nodeName:host:port. Node names are arbitrary, but must be consistent
         across the tomcats. -->
    <!-- sticky defaults to true, i.e. sticky session mode. -->
    <!-- failoverNodes applies only to sticky mode; "n1" means sessions are
         backed up to n2, and only if n2 is unavailable to n1. -->
    <!-- The other tomcat is configured the other way around. This ensures the
         session backup is saved on the other machine, so that tomcat and its
         session backup cannot crash together when a whole machine goes down. -->
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
             memcachedNodes="n1:192.168.1.62:11211,n2:192.168.1.63:11211"
             failoverNodes="n1"
             />

    <!-- Sticky-mode configuration frequently used in production -->
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
             memcachedNodes="n1:192.168.1.62:11211,n2:192.168.1.63:11211"
             failoverNodes="n1"
             requestUriIgnorePattern=".*\.(jpg|png|css|js)$"
             memcachedProtocol="binary"
             transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
             />

    <!-- Non-sticky-mode configuration frequently used in production -->
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
             memcachedNodes="n1:192.168.1.62:11211,n2:192.168.1.63:11211"
             sticky="false"
             sessionBackupAsync="false"
             lockingMode="auto"
             requestUriIgnorePattern=".*\.(jpg|png|css|js)$"
             memcachedProtocol="binary"
             transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
             />

    <!-- Annotated template:
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
             memcachedNodes="n1:192.168.1.62:11211,n2:192.168.1.63:11211"

             # Sticky mode; defaults to true
             sticky="false"

             # Applies only to sticky mode; "n1" means sessions are backed up
             # to n2, and only if n2 is unavailable to n1
             failoverNodes="n1"

             # Requests matching this pattern are not handled by session
             # management
             requestUriIgnorePattern=".*\.(jpg|png|css|js)$"

             # When, e.g., sessionPath=/ is set in the context, multiple
             # applications under one host may share the same session_id, and
             # writes to memcached could then clash. Add a prefix to
             # distinguish the applications. Possible values: static:name |
             # context | host | webappVersion | context.hash | host.hash |
             # multiple combinations, separated by ","
             storageKeyPrefix="context"

             # Data transfer mode of the memcached protocol. Defaults to text;
             # set binary to improve performance
             memcachedProtocol="binary"

             # Whether session changes are stored asynchronously. Defaults to
             # true, which performs well and suits sticky mode. For non-sticky
             # mode, setting it to false is recommended, to avoid inconsistent
             # data due to the delay
             sessionBackupAsync="false"

             # Applies only to non-sticky mode. To avoid concurrent edit
             # conflicts, the session is locked while being modified. A typical
             # case of concurrent edits is one page firing several requests at
             # the same time, which may hit different backends.
             # auto: no lock for read-only requests, lock for writes
             # uriPattern mode: matches URI + "?" + queryString against Regex
             lockingMode="none|all|auto|uriPattern:Regex"

             # Use a third-party serialization tool to improve serialization
             # performance. Common third-party tools include kryo, javolution
             # and xstream. Add the related jar packages to tomcat/lib
             transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
             />
    -->

</Context>

V. Precautions for cluster environment development

1. Object classes to be serialized must implement Serializable and should declare a serialVersionUID:

private static final long serialVersionUID = 3349238980725146825L;
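As a sketch (the class and field names are hypothetical), an object stored in the session could look like this. The round-trip method mimics what JDK-based session serialization does, and fails at runtime if the class is not serializable:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical session attribute class; any object placed in the session must
// be serializable when JDK serialization is used.
public class LoginUser implements Serializable {
    private static final long serialVersionUID = 3349238980725146825L;

    private final String name;

    public LoginUser(String name) { this.name = name; }

    public String getName() { return name; }

    // Serialize and deserialize the object, as a session manager would.
    public static LoginUser roundTrip(LoginUser u) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(u);
        }
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            return (LoginUser) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        LoginUser copy = roundTrip(new LoginUser("alice"));
        System.out.println(copy.getName()); // prints "alice"
    }
}
```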

2. Obtain the client's real request address. Add the following configuration to the nginx-tomcat.conf:

server {
    location / {
        # Real client IP
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Java code:

public static String getIp(HttpServletRequest request) {
    String remoteIp = request.getRemoteAddr();
    String headIp = request.getHeader("X-Real-IP");
    return headIp == null ? remoteIp : headIp;
}
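If only X-Forwarded-For is available (as set in the proxy configs in sections I and II), note that the header may contain a comma-separated chain "client, proxy1, proxy2", where the client is the first entry. A minimal sketch of extracting it, written as a plain string helper so it does not depend on the servlet API (the class and method names are ours):

```java
public class ForwardedFor {
    // Returns the original client IP from an X-Forwarded-For header value,
    // or the fallback (e.g. request.getRemoteAddr()) when the header is absent.
    public static String clientIp(String xForwardedFor, String fallback) {
        if (xForwardedFor == null || xForwardedFor.isEmpty()) {
            return fallback;
        }
        // The first entry in the chain "client, proxy1, proxy2" is the client.
        return xForwardedFor.split(",")[0].trim();
    }

    public static void main(String[] args) {
        System.out.println(clientIp("203.0.113.7, 10.0.0.1", "127.0.0.1")); // prints "203.0.113.7"
        System.out.println(clientIp(null, "127.0.0.1"));                   // prints "127.0.0.1"
    }
}
```

In a servlet, the fallback argument would simply be request.getRemoteAddr().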

3. Static and dynamic separation
Place static files (css, js, and images) on the nginx server.
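A sketch of the corresponding nginx location block: static assets are served directly from nginx instead of being proxied to tomcat (the root path is an assumption for illustration):

```nginx
# Serve static assets from nginx; everything else still goes through the
# proxy/load-balancer locations defined earlier.
location ~ \.(css|js|jpg|png|gif)$ {
    root    /usr/local/nginx/html/static;  # assumed static file directory
    expires 30d;                           # let browsers cache static assets
}
```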

 
