Nginx + Tomcat Dynamic separation for load balancing


0. Preparation

Use a Debian environment. Install Nginx (default installation) and Tomcat (default installation), with a web project deployed in Tomcat.

1. The full nginx.conf configuration file
```nginx
# Define the Nginx user and group. If the server is exposed, run as a
# low-privilege user to limit the damage of an intrusion.
# user www www;
# Number of worker processes; recommended to equal the number of CPU cores
worker_processes 8;
# Global error log and level
error_log /var/log/nginx/error.log info;
# PID file
pid /var/run/nginx.pid;
# Maximum number of open file descriptors per worker; keep it consistent
# with `ulimit -n`. Under high concurrency, the system-wide limits matter
# more than this single directive.
worker_rlimit_nofile 65535;

events {
    # Use the epoll event model for better performance
    use epoll;
    # Maximum number of connections per worker
    worker_connections 65535;
}

http {
    # Map file extensions to MIME types
    include mime.types;
    # Default type
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    # Logs
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    # gzip compressed transfer
    gzip on;
    gzip_min_length 1k;    # compress responses of at least 1 KB
    gzip_buffers 4 64k;
    gzip_http_version 1.1;
    gzip_comp_level 6;
    gzip_types text/plain application/x-javascript text/css application/xml application/javascript;
    gzip_vary on;

    # Load-balancing groups
    # Static server group
    upstream static.zh-jieli.com {
        server 127.0.0.1:808 weight=1;
    }
    # Dynamic server group
    upstream zh-jieli.com {
        server 127.0.0.1:8080;
        #server 192.168.8.203:8080;
    }

    # Proxy parameters
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 65;
    proxy_send_timeout 65;
    proxy_read_timeout 65;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;

    # Cache configuration
    proxy_cache_key '$host:$server_port$request_uri';
    proxy_temp_file_write_size 64k;
    proxy_temp_path /dev/shm/jielierp/proxy_temp_path;
    proxy_cache_path /dev/shm/jielierp/proxy_cache_path levels=1:2 keys_zone=cache_one:200m inactive=5d max_size=1g;
    proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie;

    server {
        listen 80;
        server_name erp.zh-jieli.com;
        location / {
            index index;    # default home page is /index
            #proxy_pass http://jieli;
        }
        location ~ .*\.(js|css|ico|png|jpg|eot|svg|ttf|woff) {
            proxy_cache cache_one;
            proxy_cache_valid 304 302 5d;
            proxy_cache_valid any 5d;
            proxy_cache_key '$host:$server_port$request_uri';
            add_header X-Cache '$upstream_cache_status from $host';
            proxy_pass http://static.zh-jieli.com;
            # Read all static files directly from disk instead:
            # root /var/lib/tomcat7/webapps/jielierp/web-inf;
            expires 30d;    # cache for 30 days
        }
        # Reverse-proxy all other pages to the Tomcat container
        location ~ .*$ {
            index index;
            proxy_pass http://zh-jieli.com;
        }
    }

    server {
        listen 808;
        server_name static;
        location / {
        }
        location ~ .*\.(js|css|ico|png|jpg|eot|svg|ttf|woff) {
            # Read all static files directly from disk
            root /var/lib/tomcat7/webapps/jielierp/web-inf;
            expires 30d;    # cache for 30 days
        }
    }
}
```

This configuration file is enough to implement the load balancing, but the relationships among its parts take some untangling. This post is not a tutorial; it is a record, kept so I can refer back to it later.

2. Basic explanation

Suppose there is a machine at 192.168.8.203 with Tomcat deployed on it, serving a Java EE application on port 8080 that can be browsed over the web. Here is the problem: Tomcat is a full-featured web container, and serving static pages is relatively expensive for it, especially since every request reads the static page from disk before returning it. That consumes Tomcat's resources and can hurt the performance of dynamic page parsing. Following the Linux philosophy that one piece of software should, in principle, do only one thing, Tomcat should only process JSP dynamic pages. So we use Nginx, introduced earlier, as a reverse proxy; the first step of that proxying is to separate static from dynamic content. It is very simple.

```nginx
worker_processes 8;
pid /var/run/nginx.pid;
worker_rlimit_nofile 65535;

events {
    use epoll;
    worker_connections 65535;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 10m;
    client_body_buffer_size 128k;
    proxy_connect_timeout 65;
    proxy_send_timeout 65;
    proxy_read_timeout 65;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;

    server {
        listen 80;
        server_name xxx.com;
        location / {
            index index;
        }
        location ~ .*\.(js|css|ico|png|jpg|eot|svg|ttf|woff) {
            proxy_pass http://192.168.8.203:8080;
            expires 30d;
        }
        location ~ .*$ {
            index index;
            proxy_pass http://192.168.8.203:8080;
        }
    }
}
```

Modify the Nginx configuration file /etc/nginx/nginx.conf; a default configuration file already exists. Most of it can stay the same; the key is the server block. I set the server block as shown above, and the other sections can simply be copied. An explanation of the server block: the listen directive listens on port 80 of this machine. The index directive sets the default home page; my default home page is index.jsp, which corresponds to an index in my project. You can change it as needed, for example:

```nginx
index index.jsp index.html index.htm index.php;
```

Refer to other articles for details. The key part is the regular-expression location (there are many introductions online); it matches all the static-file suffixes used in my project. The proxy_pass line inside it is the proxy address, here pointing at my web app. expires 30d caches the content for 30 days; this cache cooperates with the front-end pages and the client's Cache-Control header.

The regex in the last location, ~ .*$, matches pages with no suffix. The JSP pages in my project have no suffix; modify this as needed. It likewise proxies to 192.168.8.203:8080. At this point you may ask: what has any of this accomplished? Quite a bit, actually. To complete a simple static/dynamic separation, just change the proxy_pass line in the static-file location to

```nginx
root /var/lib/tomcat7/webapps/jielierp/web-inf;
```

This means the static files are no longer proxied but read directly from the local disk. Checking the Tomcat log confirms that the static pages no longer reach Tomcat. But there is another problem: this approach is inflexible, and it is unfriendly to the in-memory cache and cluster deployment discussed below. Hence the following approach: write one more server block.

```nginx
server {
    listen 808;
    server_name static;
    location / {
    }
    location ~ .*\.(js|css|ico|png|jpg|eot|svg|ttf|woff) {
        # Read all static files directly from disk
        root /var/lib/tomcat7/webapps/jielierp/web-inf;
        expires 30d;    # cache for 30 days
    }
}
```

This server listens on port 808. Now the proxy_pass in the static-file location of the main server can be changed to proxy_pass http://192.168.8.203:808, which achieves the static/dynamic separation. If you have more than one server, just change the corresponding IPs. If you find that connections fail, check the firewall, permissions and other external issues; the configuration itself is as shown.

Set up like this, we find that sending pages uncompressed consumes too much bandwidth. For web optimization, the natural step is to gzip-compress pages before sending them to the user, whose browser then decompresses them; this reduces bandwidth effectively. This uses Nginx's gzip module, which is compiled in by default. Simply add the following configuration to the http block.

```nginx
gzip on;
gzip_min_length 1k;    # compress responses of at least 1 KB
gzip_buffers 4 64k;
gzip_http_version 1.1;
gzip_comp_level 6;
gzip_types text/plain application/x-javascript text/css application/xml application/javascript;
gzip_vary on;
```
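If the CPU cost of compressing on every request becomes a concern, Nginx can also serve files that were compressed ahead of time. This is only a sketch, assuming Nginx was built with the standard ngx_http_gzip_static_module (included in default builds) and that .gz versions of the assets exist next to the originals:

```nginx
# When a client sends Accept-Encoding: gzip, nginx looks for a
# precompressed "<file>.gz" next to the requested file and serves
# it directly, skipping on-the-fly compression.
gzip_static on;

# Fall back to on-the-fly gzip for files without a .gz twin
gzip on;
gzip_types text/plain text/css application/javascript;
```

The precompressed files can be generated at build time (e.g. `gzip -k style.css`), trading disk space for CPU.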

Loading the home page shows the effect. Ignore the number of requests in the screenshot; the two extra requests come from browser plug-ins, not the site.

Caching is definitely important for a heavily visited website. My first idea was to combine Nginx with Redis through a plug-in and have Nginx cache into Redis, but the configuration turned out to be very troublesome: you have to download the plug-in yourself and recompile Nginx. So Nginx's own cache felt like the better choice here. It is less efficient than Redis, but better than nothing. Nginx's default cache is a disk file-system cache, not a memory-level cache like Redis. At first I thought that was all Nginx could do; only after reading more did I realize I was being naive, out of unfamiliarity with Linux. On Linux, everything is a file, and it turns out we can place the cache files on a file system that lives in memory. If that is hard to picture, look up the /dev/shm directory: files cached under it are effectively held in RAM, while still being managed by the file system. So this memory cache is merely less refined than Redis's custom in-memory format.

The basic configuration goes in the http block:

```nginx
# Cache configuration
proxy_cache_key '$host:$server_port$request_uri';
proxy_temp_file_write_size 64k;
proxy_temp_path /dev/shm/jielierp/proxy_temp_path;
proxy_cache_path /dev/shm/jielierp/proxy_cache_path levels=1:2 keys_zone=cache_one:200m inactive=5d max_size=1g;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie;
```
```nginx
location ~ .*\.(js|css|ico|png|jpg|eot|svg|ttf|woff) {
    proxy_cache cache_one;
    proxy_cache_valid 304 302 5d;
    proxy_cache_valid any 5d;
    proxy_cache_key '$host:$server_port$request_uri';
    add_header X-Cache '$upstream_cache_status from $host';
    proxy_pass http://192.168.8.203:808;
    expires 30d;    # cache for 30 days
}
```

With these two snippets, caching basically works. Let me mention a few notes on problems that held me up for a long time. The proxy_ignore_headers directive in the first snippet matters: if the HTML pages in the web project declare in their head

```html
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Cache-Control" content="no-cache">
<meta http-equiv="Expires" content="0">
```

then those pages will not be cached unless the proxy_ignore_headers directive is added to ignore those headers. Another note: the /dev/shm file system is owned by the root user by default, so running chmod -R 777 /dev/shm is not safe. In production, it is better to grant access to a specific user group instead, and set that group in the user directive on the first line of the configuration:

```nginx
user www www;
```

The add_header X-Cache line in the second snippet adds a response header, which makes it easy to see whether a request hit the cache.
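A related point: sometimes you want particular requests to skip the cache entirely, for example while debugging. A minimal sketch using the standard proxy_cache_bypass and proxy_no_cache directives; the "nocache" cookie and query argument are hypothetical names chosen for illustration:

```nginx
location ~ .*\.(js|css|ico|png|jpg|eot|svg|ttf|woff) {
    proxy_cache cache_one;
    # Skip reading from the cache when the client sends a "nocache"
    # cookie or appends ?nocache=1 (hypothetical names)
    proxy_cache_bypass $cookie_nocache $arg_nocache;
    # Also skip storing the response in those cases
    proxy_no_cache $cookie_nocache $arg_nocache;
    proxy_pass http://192.168.8.203:808;
}
```

Combined with the X-Cache header above, requesting a URL with ?nocache=1 should always show a BYPASS status.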

To clear the cache, delete everything under the cache directories with rm -rf /dev/shm/jielierp/proxy_*. Note that if you test repeatedly, you must also run nginx -s reload to re-read the configuration, or restart the service, after deleting: rm -rf only removes the cache files, but the cache index structure still lives in the Nginx process, so without a reload the cached URLs will appear inaccessible.

So remember to restart. Here is the running effect.

First time visit

Second visit, with a Ctrl+Shift+R forced refresh in the browser

You can see the effect here. Let's take a look at/dev/shm.

It is almost over. Finally comes another key piece of the technique: the cluster. This uses the upstream directive, seen at the beginning of the configuration file:

```nginx
# Load-balancing groups
# Static server group
upstream static {
    server 127.0.0.1:808 weight=1;
    server 192.168.8.203:808 weight=1;
}
# Dynamic server group
upstream dynamic {
    server 127.0.0.1:8080;
    #server 192.168.8.203:8080;
}
```

These are the cluster groups. upstream is the keyword; static and dynamic are the names of the two server groups. In the first group, server 127.0.0.1:808 is a backend address, and weight=1 is its weight; you can list more than one server. Tested personally: if one server in a group goes down, the system keeps running. For more polling rules, refer to other material online; no more on that here. As for how to use it: change proxy_pass http://192.168.8.203:808 to proxy_pass http://static; and requests are balanced across the group.
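Beyond weight, the server directive accepts a few standard fault-tolerance parameters. A sketch with illustrative values (the 192.168.8.204 backup host is hypothetical, added only to show the syntax):

```nginx
upstream static {
    # Receives twice as many requests as the server below
    server 127.0.0.1:808     weight=2 max_fails=3 fail_timeout=30s;
    # Marked as failed for 30s after 3 consecutive failures
    server 192.168.8.203:808 weight=1 max_fails=3 fail_timeout=30s;
    # Only used when all servers above are marked as failed
    server 192.168.8.204:808 backup;
}
```

This is how "one server in the group goes down without affecting the system" works: a failed server is temporarily taken out of rotation and retried after fail_timeout.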

That is essentially it. The parts above can be configured according to your own needs to achieve single-machine-room load balancing. One drawback of this design is that the front-end Nginx is a single point of failure: if that machine dies, the machines behind it all lose the ability to be accessed. The fix is to run multiple Nginx instances across multiple rooms, but that is another topic I have not researched yet; I will get to it when I have the chance.

If the dynamic server group above needs to keep per-user state, there is a session problem: if I log in on server1, the next request through the polled dynamic group may be assigned to server2, forcing me to log in again. The workaround is to configure the polling rule to hash the client's IP, so each user is always assigned the same server. The configuration is as follows:

```nginx
upstream dynamic {
    ip_hash;
    server 127.0.0.1:8080;
    server 192.168.0.203:8080;
}
```

This pins each user to a single server node, so the duplicate-login problem does not occur. Another approach is to store sessions centrally in a cache system. I have not tried that myself; the reference materials include related articles if you want to look into it.
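One caveat with ip_hash: clients behind a large NAT or mobile carrier all share an IP and land on one server. A cookie-based hash is an alternative sketch, assuming Nginx 1.7.2 or later (which added the generic hash directive) and that Tomcat issues its usual JSESSIONID cookie:

```nginx
upstream dynamic {
    # Pin each session to one backend by hashing the Tomcat session
    # cookie; "consistent" (ketama consistent hashing) limits how many
    # sessions are remapped when a server is added or removed
    hash $cookie_JSESSIONID consistent;
    server 127.0.0.1:8080;
    server 192.168.0.203:8080;
}
```

Requests without the cookie (the very first visit) still get distributed, and stick once Tomcat sets the session cookie.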

