Nginx+tomcat Load Balancing Strategy


The test environment is local; the software under test:

nginx-1.6.0, apache-tomcat-7.0.42-1, apache-tomcat-7.0.42-2, apache-tomcat-7.0.42-3

nginx performs the load balancing, and the three Tomcat instances handle the detailed web business processing.

Nginx Configuration nginx.conf:

#user  nobody;                  # user and group nginx runs as (not set on Windows)
worker_processes  2;            # number of worker processes (usually the CPU count or twice it)

#error_log  logs/error.log;     # error log path
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;     # pid file location

events {
    # Network I/O model: epoll is recommended on Linux, kqueue on FreeBSD;
    # nothing is specified on Windows.
    #use epoll;
    worker_connections  1024;   # maximum connections per worker
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    # Log format definition:
    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;
    access_log  logs/access.log;

    client_header_timeout  3m;
    client_body_timeout    3m;
    send_timeout           3m;

    client_header_buffer_size    1k;
    large_client_header_buffers  4 4k;

    sendfile     on;
    tcp_nopush   on;
    tcp_nodelay  on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;
    include gzip.conf;

    upstream localhost {
        # ip_hash assigns requests to backend Tomcats by hashing the client IP.
        # Many people mistakenly believe this alone solves the session problem;
        # it does not: the same machine's IP can change when it is multi-homed
        # or its route switches.
        server localhost:18081;
        server localhost:18082;
        server localhost:18083;
        ip_hash;    # allocate by client IP

        # Per-server parameters:
        #   down         - the server temporarily does not take part in the load
        #   weight       - defaults to 1; the larger the weight, the larger the share
        #   max_fails    - allowed number of failed requests, default 1; once
        #                  exceeded, the error defined by proxy_next_upstream is returned
        #   fail_timeout - how long to pause the server after max_fails failures
        #   backup       - used only when all non-backup servers are down or busy,
        #                  so this machine carries the lightest load

        # nginx upstream currently supports these allocation strategies:
        #   1) round robin (default): requests go to the backends one by one in
        #      order; a server that goes down is removed automatically
        #   2) weight: polling probability proportional to the weight; for
        #      backends with uneven performance
        #   3) ip_hash: each request is assigned by the hash of the client IP,
        #      so a given visitor always reaches the same backend (addresses
        #      the session problem)
        #   4) fair (third party): allocates by backend response time; shorter
        #      response times are served first
        #   5) url_hash (third party)
    }

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        location / {
            proxy_pass        http://localhost;
            proxy_redirect    off;
            proxy_set_header  Host $host;
            proxy_set_header  X-Real-IP $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;

            client_max_body_size       10m;   # maximum single-file size accepted from a client
            client_body_buffer_size    128k;  # buffer for client request bodies
            proxy_connect_timeout      90;    # timeout for connecting to the backend server
            proxy_send_timeout         90;    # backend data return time (send timeout)
            proxy_read_timeout         90;    # backend response time after the connection succeeds
            proxy_buffer_size          4k;    # buffer for the backend response headers
            proxy_buffers              4 32k; # response buffers; pages here average under 32k
            proxy_busy_buffers_size    64k;   # buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size 64k;   # responses larger than this are buffered to temp files
        }

        #error_page  404              /404.html;

        # Redirect server error pages to the static page /50x.html:
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # Proxy PHP scripts to Apache listening on 127.0.0.1:80:
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # Pass PHP scripts to a FastCGI server listening on 127.0.0.1:9000:
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # Deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one:
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # Another virtual host using a mix of IP-, name-, and port-based configuration:
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server:
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;
    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;
    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;
    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
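The upstream allocation strategies described above can be made concrete with a small sketch. This is hypothetical Python, not nginx's actual hashing code; the backend list matches the three Tomcat ports used in this article:

```python
import hashlib

# Hypothetical sketch (NOT nginx's real algorithm): contrasts an ip_hash-style
# strategy, which maps each client IP to a fixed backend, with the default
# round robin, which cycles through the backends in order.
BACKENDS = ["localhost:18081", "localhost:18082", "localhost:18083"]

def ip_hash(client_ip: str) -> str:
    """Deterministically pick a backend from the client IP."""
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

def round_robin():
    """Yield backends one by one, wrapping around indefinitely."""
    i = 0
    while True:
        yield BACKENDS[i % len(BACKENDS)]
        i += 1

# The same client IP always lands on the same backend:
assert ip_hash("10.0.0.7") == ip_hash("10.0.0.7")
```

This is why ip_hash gives session stickiness (the mapping depends only on the IP) while round robin spreads consecutive requests from one client across all three Tomcats.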


TOMCAT1 Configuration Server.xml:

<?xml version='1.0' encoding='utf-8'?>
<Server port="18001" shutdown="shutdown">
  <Service name="Catalina">
    <Connector port="18081" protocol="HTTP/1.1"
               connectionTimeout="20000" redirectPort="18441"/>
    <Connector port="18021" protocol="AJP/1.3" redirectPort="18441"/>
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="TOMCAT1">
    </Engine>
  </Service>
</Server>


TOMCAT2 Configuration Server.xml:

<?xml version='1.0' encoding='utf-8'?>
<Server port="18002" shutdown="shutdown">
  <Service name="Catalina">
    <Connector port="18082" protocol="HTTP/1.1"
               connectionTimeout="20000" redirectPort="18442"/>
    <Connector port="18022" protocol="AJP/1.3" redirectPort="18442"/>
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="TOMCAT2">
    </Engine>
  </Service>
</Server>


TOMCAT3 Configuration Server.xml:

<?xml version='1.0' encoding='utf-8'?>
<Server port="18003" shutdown="shutdown">
  <Service name="Catalina">
    <Connector port="18083" protocol="HTTP/1.1"
               connectionTimeout="20000" redirectPort="18443"/>
    <Connector port="18023" protocol="AJP/1.3" redirectPort="18443"/>
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="TOMCAT3">
    </Engine>
  </Service>
</Server>
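The three server.xml files are supposed to differ only in their ports and jvmRoute, so a small parser can sanity-check that nothing was left duplicated by copy-paste. This is an illustrative Python sketch, not part of the original setup; the XML strings stand in for the three real files:

```python
import xml.etree.ElementTree as ET

# Stand-ins for the three real server.xml files (trimmed to the attributes
# that must be unique per instance).
CONFIGS = [
    '<Server port="18001"><Service><Connector port="18081"/>'
    '<Engine jvmRoute="TOMCAT1"/></Service></Server>',
    '<Server port="18002"><Service><Connector port="18082"/>'
    '<Engine jvmRoute="TOMCAT2"/></Service></Server>',
    '<Server port="18003"><Service><Connector port="18083"/>'
    '<Engine jvmRoute="TOMCAT3"/></Service></Server>',
]

def extract(xml_text):
    """Return (shutdown port, connector ports, jvmRoute) from a server.xml."""
    root = ET.fromstring(xml_text)
    ports = [c.get("port") for c in root.iter("Connector")]
    return root.get("port"), ports, root.find(".//Engine").get("jvmRoute")

infos = [extract(c) for c in CONFIGS]
routes = [jvm for _, _, jvm in infos]
assert len(set(routes)) == len(routes)  # every instance needs a unique jvmRoute
```

A duplicated shutdown port or jvmRoute is a common mistake when cloning a Tomcat directory, and it silently breaks instance startup or sticky routing.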

Start the three Tomcats separately and visit them in order: http://localhost:18081, http://localhost:18082, http://localhost:18083

Then start nginx and access: http://localhost

(On Windows, run nginx.exe under cmd to start; run nginx -s stop under cmd to shut down.)


This client-IP-based load balancing strategy basically meets the demand. But considering that a client with multiple network cards, or one whose requests reach different servers, will end up with different sessions, memcached is needed for session synchronization. Perhaps I will follow up with that configuration document.
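The idea behind memcached-based session synchronization is simply that every Tomcat instance reads and writes sessions in one shared store, so it no longer matters which backend a request lands on. A minimal sketch of that idea, with a plain dict standing in for the real memcached server:

```python
import time
import uuid

# Illustrative only: a dict plays the role of the shared memcached server.
shared_store = {}

class SharedSessionStore:
    """Session store that any number of backends can share."""

    def __init__(self, store, ttl_seconds=1800):
        self.store = store
        self.ttl = ttl_seconds

    def create(self, data):
        """Create a session, return its id."""
        sid = uuid.uuid4().hex
        self.store[sid] = (data, time.time() + self.ttl)
        return sid

    def get(self, sid):
        """Return the session data, or None if missing or expired."""
        entry = self.store.get(sid)
        if entry is None or entry[1] < time.time():
            return None
        return entry[0]

# Two "backends" sharing the same store: a session created on one
# is immediately visible on the other.
backend_a = SharedSessionStore(shared_store)
backend_b = SharedSessionStore(shared_store)
sid = backend_a.create({"user": "alice"})
assert backend_b.get(sid) == {"user": "alice"}
```

A real deployment would replace the dict with a memcached client, but the access pattern (create/get by session id, with a TTL) is the same.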

Appendix: the following three approaches are from the Internet:
1. No session: bypass sessions through cookies and similar mechanisms.


There seems to be a problem with this approach if the client disables cookies.
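The cookie-only approach can be sketched roughly as follows: state is serialized into the cookie itself and signed, so no server holds session data. This is a hypothetical Python sketch (the secret and field names are made up); a real deployment would also encrypt the payload and keep it small:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"change-me"  # hypothetical shared secret, same on every backend

def encode_cookie(data: dict) -> str:
    """Serialize state into an HMAC-signed cookie value (illustrative)."""
    payload = base64.urlsafe_b64encode(json.dumps(data).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def decode_cookie(value: str):
    """Return the state if the signature is valid, else None."""
    payload, _, sig = value.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or corrupted cookie
    return json.loads(base64.urlsafe_b64decode(payload))

cookie = encode_cookie({"user": "alice"})
assert decode_cookie(cookie) == {"user": "alice"}
assert decode_cookie(cookie + "0") is None  # tampering is detected
```

The signature lets any backend verify the cookie without shared server-side state, which is exactly why this scheme sidesteps session synchronization — and exactly why it fails when cookies are disabled.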
2. Implement sharing at the application server
For example, single sign-on with a central authentication server, or memcache to store the common data the backend servers need. I once had a program that used local Ehcache plus memcache to cache data: Ehcache cached data specific to its own machine, while memcache cached the shared data — but it failed often.
3. ip_hash to solve session sharing
nginx's ip_hash technique can direct the requests from one IP to the same backend, so a client at that IP and that backend can establish a stable session. ip_hash is defined in the upstream configuration; for example:
upstream backend {
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
    ip_hash;
}
ip_hash is easy to understand, but because it can only use the IP as the factor for allocating backends, it has defects and cannot be used in some cases:
(1) nginx is not the front-most server. ip_hash requires nginx to be the front-most server; otherwise nginx cannot obtain the correct client IP and cannot hash on it. For example, if Squid is used as the front-most server, the IP nginx sees is only the Squid server's address, and routing on that address is obviously wrong.
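When nginx sits behind another proxy such as Squid, the usual workaround is to recover the original client address from the X-Forwarded-For header rather than the socket peer. A hedged Python sketch of that logic (the function and example addresses are illustrative):

```python
from typing import Optional

def real_client_ip(remote_addr: str, x_forwarded_for: Optional[str]) -> str:
    """Illustrative: recover the original client IP behind a front proxy.

    X-Forwarded-For holds "client, proxy1, proxy2, ..."; the first entry is
    the original client. Trusting the header blindly is unsafe unless the
    front-most proxy is known to set or sanitize it.
    """
    if x_forwarded_for:
        return x_forwarded_for.split(",")[0].strip()
    return remote_addr

# Behind Squid, remote_addr is Squid's address; the header has the real IP:
assert real_client_ip("192.168.0.10", "203.0.113.5, 192.168.0.10") == "203.0.113.5"
# With no front proxy, fall back to the socket peer address:
assert real_client_ip("203.0.113.5", None) == "203.0.113.5"
```

Note that plain ip_hash ignores this header, which is why the defect above exists in the first place.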


(2) There is another kind of load balancing behind nginx.

If the nginx backend includes another load balancer that diverts the requests in some other way, then a given client's requests are certainly not all directed to the same session application server.

In this case, the nginx backend can only point directly to the application servers, or to another Squid that then points to the application servers. The best solution is to split the traffic with location: send the part of the requests that needs sessions through ip_hash, and the rest to the other backends.
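The location-based split described above might look roughly like this (an illustrative fragment; the pool names, paths, and the fourth port are hypothetical):

```nginx
upstream session_pool {
    ip_hash;                    # sticky: session-dependent requests
    server localhost:18081;
    server localhost:18082;
}
upstream static_pool {
    server localhost:18083;     # default round robin for everything else
    server localhost:18084;
}
server {
    listen 80;
    location /app/ {            # session-dependent part -> ip_hash pool
        proxy_pass http://session_pool;
    }
    location / {                # the rest -> round-robin pool
        proxy_pass http://static_pool;
    }
}
```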
(3) upstream_hash (I have not tried this approach).

Here is a comparison of Nginx and Apache from the Web:

1. Comparing the advantages and disadvantages of nginx and Apache

The advantages of nginx over Apache:
(1) Lightweight: providing the same web service, it consumes less memory and fewer resources than Apache
(2) Concurrency: nginx handles requests asynchronously and without blocking, while Apache blocks; under high concurrency nginx keeps resource consumption low and performance high
(3) Highly modular design, and writing modules is relatively simple
(4) Community activity: various high-performance modules appear quickly
The advantages of Apache over nginx:
(1) rewrite: more powerful than nginx's
(2) Dynamic pages: so many modules that almost anything you think of can be found
(3) Fewer bugs; nginx has comparatively more
(4) Extremely stable

Each exists for a reason. In general, choose nginx for web services that need performance; if you need only stability rather than performance, choose Apache. The latter's function modules are implemented better than the former's — for example the SSL module, which has many more configurable options.

One thing to be aware of: the epoll (kqueue on FreeBSD) network I/O model is the fundamental reason for nginx's high request-handling performance, but epoll does not win in all cases. If a site serves only a handful of static files, Apache's select model may actually outperform epoll.

Of course, this is just an inference from the principles of the network I/O models; real applications still need to be benchmarked.
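The event-driven style behind epoll/kqueue can be made concrete with Python's selectors module, which automatically picks epoll on Linux and kqueue on FreeBSD. This is an illustrative sketch unrelated to nginx's source code:

```python
import selectors
import socket

# Event-driven I/O in the style nginx uses: one thread multiplexes many
# sockets through the OS readiness API (epoll/kqueue/select, whichever
# DefaultSelector picks on this platform).
sel = selectors.DefaultSelector()
a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)
sel.register(b, selectors.EVENT_READ)

a.sendall(b"ping")                  # make b readable
events = sel.select(timeout=1.0)    # wait for readiness, not for a blocked read
ready = [key.fileobj for key, _ in events]
data = b.recv(16)                   # now guaranteed not to block
assert b in ready and data == b"ping"

sel.close()
a.close()
b.close()
```

One process can register thousands of sockets this way and only touch those that are ready, which is the essence of the asynchronous model discussed above.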

2. As a web server: nginx uses fewer resources than Apache, supports more concurrent connections, and is more efficient, which makes it especially popular with virtual hosting providers.

Under high connection concurrency, nginx is a good substitute for an Apache server; it is one of the software platforms that virtual hosting providers in the United States often choose. It can support up to 50,000 concurrent connections, thanks to its choice of epoll and kqueue as the development model. As a load-balancing server, nginx can serve Rails and PHP applications directly and can also act as an HTTP proxy server. nginx is written in C, and both its system resource overhead and its CPU efficiency are much better than Perlbal's.
nginx is also a very good mail proxy server (one of the earliest purposes it was developed for was as a mail proxy); Last.fm has described using it successfully. nginx is very simple to install, its configuration file is very concise (it can even support Perl syntax), and it has very few bugs. It starts especially easily and can run almost 24x7 without interruption — even months of execution without a restart — and the software version can be upgraded without interrupting service.

3. nginx's configuration is concise; Apache's is complex.
nginx's static-file processing performance is more than three times Apache's.
Apache's support for PHP is relatively simple; nginx needs to work with another backend.
Apache has more components than nginx.
nginx is now a first choice for web servers.

4. The most important difference: Apache is a synchronous multi-process model, one connection per process; nginx is asynchronous, and many connections (even millions) can correspond to one process.

5. nginx handles static files well and consumes little memory, but Apache is undoubtedly still the mainstream, with many rich features, so the two need to be paired. Of course, if you are sure nginx suits your needs, using nginx alone is more economical. Apache's weak multi-core handling is a shortcoming, so nginx is recommended for the front end with Apache at the back end; large websites are advised to use nginx's proxying and clustering capabilities.

6. From my own past experience, nginx's load capacity is much higher than Apache's.

My latest servers were also switched to nginx. Moreover, after changing nginx's configuration you can run nginx -t to test whether the configuration has problems; with Apache, a configuration error is only discovered on restart, which is very frustrating, so changes must be made very cautiously. Looking at many cluster sites now — nginx at the front end for concurrency, an Apache cluster at the back end — it is a good match.

7. nginx is weak at handling dynamic requests; dynamic requests are generally given to Apache, and nginx is suited only for static content and reverse proxying.



8. From my personal experience, nginx is a very good front-end server with excellent load performance: running nginx on an old laptop and simulating 10,000 static-file requests with webbench posed no difficulty. Apache's support for languages such as PHP is very good; in addition, Apache has a strong support community and has been in development longer than nginx.

9. nginx is superior to Apache mainly on two points:
(1) nginx itself is a reverse proxy server
(2) nginx supports layer-7 load balancing
Beyond that, nginx may support higher concurrency than Apache. But according to Netcraft's statistics for April 2011, Apache still held 62.71% while nginx had 7.35%, so it must be said that Apache remains the first choice of most companies, given its mature technology and its development community's consistently strong performance.



10. Your requirements for the web server determine your choice.

In most cases, nginx is superior to Apache — for example in static-file processing, PHP-CGI support, reverse proxying, front-end caching, and connection keep-alive. In Apache+PHP (prefork) mode, if PHP is slow or the front-end pressure is high, the number of Apache processes can easily spike, leading to denial of service.



11. Take a look at nginx's Lua module: https://github.com/chaoslaw... Apache has more modules than nginx, but many can be implemented directly with Lua. Apache is still the most popular — why? Most people don't bother switching to nginx or learning something new.

12. As for nginx, I like how concisely its configuration file is written. Regular expressions in the configuration make many things simple and efficient, it consumes few resources, and its proxying is powerful — ideal as a front-end response server.

13. Apache has the advantage in dynamic processing; nginx has better concurrency and lower CPU and memory consumption. If you use rewrite frequently, choose Apache.

