Nginx and Tomcat: Static/Dynamic Separation and Load Balancing


This article introduces static/dynamic separation and load balancing with Nginx and Tomcat. Static/dynamic separation means letting Nginx (or Apache, etc.) handle client requests for static files such as images and HTML, while Tomcat (or WebLogic) handles requests for dynamic content such as JSP and .do pages, so that static and dynamic pages are served by different containers.

1. Nginx Introduction

Nginx is a high-performance HTTP and reverse proxy server with high stability, support for hot deployment, and easy module extension. At peak load, or when someone maliciously opens slow connections, a server can exhaust its physical memory, swap constantly, and stop responding, leaving a restart as the only remedy. Nginx uses a staged resource-allocation technique, serving static files directly and acting as a cache-less reverse-proxy accelerator, with load balancing and fault tolerance, so it holds up well under such high-concurrency access.

2. Nginx Installation and Configuration

Step one: download the Nginx source package from http://nginx.org/en/download.html

Step two: install Nginx on Linux

#tar zxvf nginx-1.7.8.tar.gz //unpack

#cd nginx-1.7.8

#./configure --with-http_stub_status_module --with-http_ssl_module //enable the server status page and the HTTPS module

A missing PCRE library error is reported, as shown in the figure:

If so, install PCRE first (see step three below), then rerun the ./configure command above.

#make && make install //compile and install

Test that the installation and configuration are correct; Nginx is installed under /usr/local/nginx:

#/usr/local/nginx/sbin/nginx -t, as shown in the figure:

Step three: install PCRE on Linux

#tar zxvf pcre-8.10.tar.gz //unpack

#cd pcre-8.10

#./configure

#make && make install //compile and install
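The missing-PCRE error from step two can also be caught up front. The sketch below is not from the original article; it assumes the PCRE development package ships the `pcre-config` tool (it does in the standard PCRE distribution), whose absence is a strong hint that nginx's ./configure will fail:

```shell
#!/bin/sh
# Hedged sketch: probe for the PCRE library before running nginx's ./configure.
# `pcre-config` is installed alongside the PCRE development files; if it is
# missing, ./configure will most likely report the missing-PCRE error above.
check_pcre() {
    if command -v pcre-config >/dev/null 2>&1; then
        echo "PCRE found, version $(pcre-config --version)"
    else
        echo "PCRE not found: build and install it first (step three), then rerun ./configure"
    fi
}

check_pcre
```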

3. Nginx + Tomcat Static/Dynamic Separation

Static/dynamic separation means Nginx handles client requests for static pages (HTML pages) and images, while Tomcat handles client requests for dynamic pages (JSP pages), because Nginx serves static pages more efficiently than Tomcat.

Step one: edit the Nginx configuration file

#vi /usr/local/nginx/conf/nginx.conf

#user  nobody;
worker_processes  1;

error_log  logs/error.log;
pid        logs/nginx.pid;

events {
    use epoll;
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  logs/access.log  main;

    sendfile           on;
    keepalive_timeout  65;

    gzip               on;
    gzip_min_length    1k;
    gzip_buffers       4 16k;
    gzip_http_version  1.0;
    gzip_comp_level    2;
    gzip_types         text/plain application/x-javascript text/css application/xml;
    gzip_vary          on;

    server {
        listen       80 default;
        server_name  localhost;

        # Nginx serves the static pages
        location ~* \.(html|htm|gif|jpg|jpeg|bmp|png|ico|txt|js|css)$ {
            root     /usr/tomcat/apache-tomcat-8081/webapps/ROOT;
            expires  30d;                  # cache on the client for 30 days
        }

        error_page  404  /404.html;

        # redirect server error pages to the static page /50x.html
        error_page  502 503 504  /50x.html;
        location = /50x.html {
            root  html;
        }

        # all .jsp and .do dynamic requests are handed to Tomcat
        location ~ \.(jsp|do)$ {
            proxy_pass        http://192.168.74.129:8081;   # requests ending in .jsp or .do go to Tomcat
            proxy_redirect    off;
            proxy_set_header  Host $host;
            proxy_set_header  X-Real-IP $remote_addr;       # lets the backend see the real client IP
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;

            client_max_body_size        10m;    # maximum size of a single client request body
            client_body_buffer_size     128k;   # buffer size for client request bodies
            proxy_connect_timeout       90;     # timeout for Nginx connecting to the backend
            proxy_read_timeout          90;     # timeout waiting for the backend's response
            proxy_buffer_size           4k;     # buffer for the response headers from the backend
            proxy_buffers               6 32k;  # response buffers; the average page fits in 32k
            proxy_busy_buffers_size     64k;    # buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size  64k;    # responses larger than this are written to temp files
        }
    }
}

Step two: create a static page index.html under Tomcat's webapps/ROOT, as shown in the figure:

Step three: start the Nginx service

#sbin/nginx, as shown in the figure:

Step four: visit http://192.168.74.129/index.html in a browser; the page displays normally, as shown in the figure:

Step five: test how Nginx and Tomcat each perform when serving static pages under high concurrency.

Use the Linux ab benchmarking command to measure performance.

1. Test the performance of nginx processing static pages

ab -c 100 -n 1000 http://192.168.74.129/index.html

This sends 100 concurrent requests (-c 100) for the index.html file until 1000 requests in total (-n 1000) have completed, as shown in the figure:

2. Test the performance of Tomcat processing static pages

ab -c 100 -n 1000 http://192.168.74.129:8081/index.html

This likewise sends 100 concurrent requests for the index.html file until 1000 requests in total have completed, as shown in the figure:

Serving the same static file, Nginx clearly outperforms Tomcat: Nginx handled 5,388 requests per second, while Tomcat handled only 2,609.
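The two ab runs can also be compared without reading the figures by extracting the "Requests per second" line from saved output. The sketch below is not from the original article, and the file names are assumptions; redirect each ab run into a file first:

```shell
#!/bin/sh
# Hedged sketch: pull the throughput figure out of saved ab output.
# ab prints a line of the form
#   Requests per second:    5388.22 [#/sec] (mean)
# so the value is the 4th whitespace-separated field on that line.
rps() {
    awk '/Requests per second/ {print $4}' "$1"
}

# Usage (file names are assumptions):
#   ab -c 100 -n 1000 http://192.168.74.129/index.html      > nginx.txt
#   ab -c 100 -n 1000 http://192.168.74.129:8081/index.html > tomcat.txt
#   echo "nginx: $(rps nginx.txt)  tomcat: $(rps tomcat.txt)"
```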

Summary: in the Nginx configuration file, route static requests to Nginx and dynamic requests to Tomcat, which improves performance.

4. Nginx + Tomcat Load Balancing and Fault Tolerance

Under high concurrency, to improve performance and reduce the load on any single server, we deploy a cluster. This also prevents the whole service from becoming unavailable when one server goes down, which addresses fault tolerance.

Step one: we have deployed two Tomcat servers, at 192.168.74.129:8081 and 192.168.74.129:8082.

Step two: Nginx acts as the proxy server. Client requests are handled with load balancing, so they are distributed evenly across the servers, reducing the load on each one. Configure the nginx.conf file under Nginx.

#vi /usr/local/nginx/conf/nginx.conf

#user  nobody;
worker_processes  1;

error_log  logs/error.log;
pid        logs/nginx.pid;

events {
    use epoll;
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  logs/access.log  main;

    sendfile           on;
    keepalive_timeout  65;

    gzip               on;
    gzip_min_length    1k;
    gzip_buffers       4 16k;
    gzip_http_version  1.0;
    gzip_comp_level    2;
    gzip_types         text/plain application/x-javascript text/css application/xml;
    gzip_vary          on;

    upstream localhost_server {
        ip_hash;
        server 192.168.74.129:8081;
        server 192.168.74.129:8082;
    }

    server {
        listen       80 default;
        server_name  localhost;

        # Nginx serves the static pages
        location ~* \.(html|htm|gif|jpg|jpeg|bmp|png|ico|txt|js|css)$ {
            root     /usr/tomcat/apache-tomcat-8081/webapps/ROOT;
            expires  30d;                  # cache on the client for 30 days
        }

        error_page  404  /404.html;

        # redirect server error pages to the static page /50x.html
        error_page  502 503 504  /50x.html;
        location = /50x.html {
            root  html;
        }

        # all .jsp and .do dynamic requests are handed to Tomcat
        location ~ \.(jsp|do)$ {
            proxy_pass        http://localhost_server;      # .jsp and .do requests are balanced across the upstream
            proxy_redirect    off;
            proxy_set_header  Host $host;
            proxy_set_header  X-Real-IP $remote_addr;       # lets the backend see the real client IP
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;

            client_max_body_size        10m;    # maximum size of a single client request body
            client_body_buffer_size     128k;   # buffer size for client request bodies
            proxy_connect_timeout       90;     # timeout for Nginx connecting to the backend
            proxy_read_timeout          90;     # timeout waiting for the backend's response
            proxy_buffer_size           4k;     # buffer for the response headers from the backend
            proxy_buffers               6 32k;  # response buffers; the average page fits in 32k
            proxy_busy_buffers_size     64k;    # buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size  64k;    # responses larger than this are written to temp files
        }
    }
}

Description

1. Each server line inside upstream gives the IP (or domain name) and port of a backend server, and accepts the following parameters:

1) weight: sets the forwarding weight for the server; the default is 1.

2) max_fails: used together with fail_timeout; if forwarding to this server fails more than max_fails times within the fail_timeout window, the server is marked unavailable. The default is 1.

3) fail_timeout: the length of the window within which max_fails forwarding failures mark the server unavailable.

4) down: marks this server as permanently unavailable.

5) backup: ip_hash does not apply to this server; requests are forwarded to it only after all non-backup servers have failed.

2. ip_hash is used with clustered servers: if the same client's requests land on multiple servers, each of them may cache the same data, wasting resources. With ip_hash enabled, repeat requests from a client are forwarded to the server that handled its first request. Note that ip_hash cannot be combined with weight.
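The parameters above can be combined in one upstream block. The fragment below is an illustrative sketch, not part of the original setup; the weights, failure thresholds, and the third backup server at 192.168.74.129:8083 are assumed values:

```nginx
# Hedged sketch of the upstream parameters described above (values are illustrative)
upstream localhost_server {
    server 192.168.74.129:8081 weight=2 max_fails=2 fail_timeout=30s;
    server 192.168.74.129:8082 weight=1;
    server 192.168.74.129:8083 backup;   # used only when all non-backup servers are down
    # ip_hash;  # would pin each client IP to one server, but cannot be combined with weight
}
```

Note that because weight is used here, ip_hash must stay commented out, as the two directives are incompatible.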

That is the entire content of this article; I hope it helps your learning.
