Nginx + Tomcat Distributed Cluster Deployment under Windows


First, download Nginx for Windows from http://nginx.org/en/download.html.

Deployment architecture:

As the diagram shows, Nginx acts as the load balancer and request dispatcher: requests for application A are distributed to the A cluster, and requests for application B are distributed to the B cluster.

nginx.conf Configuration

```nginx
# User and group Nginx runs as; not specified under Windows
#user niumd niumd;

# Number of worker processes (usually equal to the number of CPUs, or twice that)
worker_processes 2;

# Error log path
#error_log logs/error.log;
#error_log logs/error.log notice;
error_log logs/error.log info;

# File to store the master process PID
pid logs/nginx.pid;

events {
    # Network I/O model: epoll is recommended on Linux and kqueue on FreeBSD;
    # leave it unspecified on Windows.
    #use epoll;

    # Maximum number of connections per worker
    worker_connections 2048;
}

http {
    include mime.types;
    default_type application/octet-stream;

    # Log format definition
    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log off;
    access_log logs/access.log;

    client_header_timeout 3m;
    client_body_timeout 3m;
    send_timeout 3m;

    client_header_buffer_size 1k;
    large_client_header_buffers 4 4k;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    #keepalive_timeout 75 20;

    include gzip.conf;

    upstream localhost {
        # ip_hash assigns requests to backend Tomcats by a hash of the client IP.
        # Many people mistakenly believe this solves the session problem; it does not:
        # on a multi-homed machine, the client IP can change when routes switch.
        #ip_hash;
        server localhost:18001;
        server localhost:18002;
        server localhost:18003;
        server localhost:18004;
    }

    upstream t1 {
        server localhost:18001;
    }

    upstream t2 {
        server localhost:18002;
    }

    server {
        listen 80;
        server_name localhost;

        #charset koi8-r;
        #access_log logs/host.access.log main;

        location / {
            #root html;
            #index index.html index.htm;
            proxy_connect_timeout 3;
            proxy_send_timeout 30;
            proxy_read_timeout 30;
            proxy_pass http://localhost;
        }

        location /t1.jsp {
            proxy_connect_timeout 3;
            proxy_send_timeout 30;
            proxy_read_timeout 30;
            proxy_pass http://t1;
        }

        location /t2.jsp {
            proxy_connect_timeout 3;
            proxy_send_timeout 30;
            proxy_read_timeout 30;
            proxy_pass http://t2;
        }

        #error_page 404 /404.html;

        # Redirect server error pages to the static page /50x.html
        error_page 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }

        # Proxy PHP scripts to Apache listening on 127.0.0.1:80
        #location ~ \.php$ {
        #    proxy_pass http://127.0.0.1;
        #}

        # Pass PHP scripts to a FastCGI server listening on 127.0.0.1:9000
        #location ~ \.php$ {
        #    root html;
        #    fastcgi_pass 127.0.0.1:9000;
        #    fastcgi_index index.php;
        #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
        #    include fastcgi_params;
        #}

        # Deny access to .htaccess files if Apache's document root
        # concurs with Nginx's one
        #location ~ /\.ht {
        #    deny all;
        #}
    }

    # Another virtual host using a mix of IP-, name-, and port-based configuration
    #server {
    #    listen 8000;
    #    listen somename:8080;
    #    server_name somename alias another.alias;
    #    location / {
    #        root html;
    #        index index.html index.htm;
    #    }
    #}

    # HTTPS server
    #server {
    #    listen 443;
    #    server_name localhost;
    #    ssl on;
    #    ssl_certificate cert.pem;
    #    ssl_certificate_key cert.key;
    #    ssl_session_timeout 5m;
    #    ssl_protocols SSLv2 SSLv3 TLSv1;
    #    ssl_ciphers HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers on;
    #    location / {
    #        root html;
    #        index index.html index.htm;
    #    }
    #}
}
```

proxy.conf

```nginx
proxy_redirect          off;
proxy_set_header        Host $host;
proxy_set_header        X-Real-IP $remote_addr;
proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size    10m;
client_body_buffer_size 128k;
# The timeout values below were lost in the source; fill in values
# (in seconds) appropriate for your environment before use.
#proxy_connect_timeout   ;
#proxy_send_timeout      ;
#proxy_read_timeout      ;
proxy_buffer_size       4k;
proxy_buffers           4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
```

gzip.conf

```nginx
gzip on;
# The gzip_min_length value was lost in the source; fill in a value before use.
#gzip_min_length      ;
gzip_types text/plain text/css application/x-javascript;
```

Because all the Tomcat instances run on a single host, each instance's server.xml must be changed to use a different set of port numbers.
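As a sketch of what that change looks like (the shutdown and AJP port numbers here are assumptions; only the HTTP port 18001 is fixed by the upstream definitions above), the first instance's server.xml might contain:

```xml
<!-- server.xml for the first instance (tomcat1); ports are illustrative.
     Each instance needs distinct shutdown, HTTP, and AJP ports. -->
<Server port="18005" shutdown="SHUTDOWN">
  <Service name="Catalina">
    <!-- HTTP connector: must match the port listed in the nginx upstream -->
    <Connector port="18001" protocol="HTTP/1.1" connectionTimeout="20000" />
    <!-- AJP connector: unused in this setup, but its port must still be unique -->
    <Connector port="18009" protocol="AJP/1.3" />
    <Engine name="Catalina" defaultHost="localhost">
      <Host name="localhost" appBase="webapps"
            unpackWARs="true" autoDeploy="true" />
    </Engine>
  </Service>
</Server>
```

The other three instances repeat this with their own port triples (e.g. HTTP ports 18002 through 18004).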

Here, give each Tomcat instance an identifying label: tomcat1, tomcat2, tomcat3, tomcat4.

Then, under each instance's webapps\ROOT directory (in my local environment, D:\nginxtomcat\apache-tomcat-6.0.18_1\webapps\ROOT and so on), write a JSP file whose content is the instance's label (tomcat1, tomcat2, tomcat3, tomcat4), to be used to test Nginx's request distribution.

JSP content:
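The file itself was not preserved in the source. A minimal sketch of such a hello.jsp, assuming each instance simply prints its own label, could be:

```jsp
<%-- hello.jsp for the tomcat1 instance; the copies on the other instances
     print tomcat2..tomcat4. The label identifies which backend served
     the request. --%>
<%@ page contentType="text/html; charset=UTF-8" %>
<html>
  <body>
    tomcat1
  </body>
</html>
```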

After the Tomcat port numbers have been modified and the JSP files added, the setup can be tested.

Start Nginx:

nginx -t tests the nginx.conf file; if there is an error, it prints a message describing the problem and where it occurred.

Then start Nginx; running it in the background with start nginx is the recommended way.
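On Windows these steps come down to a few commands run from a console opened in the Nginx directory (a sketch; it assumes Nginx was unpacked and is run in place):

```bat
rem Check the configuration file for syntax errors
nginx -t

rem Start Nginx in the background (recommended)
start nginx

rem Reload the configuration without stopping Nginx
nginx -s reload

rem Stop Nginx
nginx -s stop
```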

Start tomcat1, tomcat2, tomcat3, and tomcat4 separately to serve the test pages.

Visit http://localhost/hello.jsp

The page shows the tomcat1, tomcat2, tomcat3, tomcat4 content in turn, proving the setup works.

Nginx distributes requests round-robin by default; to change the load-balancing mode, modify the upstream block.

About upstream:

1. Polling (default)

Each request is assigned to a different backend server in turn; if a backend server goes down, it is automatically removed from rotation.

2. weight
Specifies the polling weight. The probability of receiving a request is proportional to the weight; useful when backend server performance is uneven.
For example:

```nginx
upstream bakend {
    server 192.168.159.10 weight=10;
    server 192.168.159.11 weight=10;
}
```


3. ip_hash
Each request is assigned by a hash of the client IP, so each visitor consistently reaches the same backend server, which can keep a session on one machine (though, as noted in the configuration above, this is not a complete session solution).
For example:

```nginx
upstream resinserver {
    ip_hash;
    server 192.168.159.10:8080;
    server 192.168.159.11:8080;
}
```


4. fair (third party)
Requests are assigned according to the backend server's response time; servers with shorter response times are preferred.

```nginx
upstream resinserver {
    server server1;
    server server2;
    fair;
}
```


5. url_hash (third party)

Assigns requests by a hash of the requested URL, so that each URL is directed to the same backend server; this is more efficient when the backend servers cache responses.

Example: add a hash statement in the upstream block. The server lines must not include weight or other parameters; hash_method selects the hash algorithm used.


```nginx
upstream resinserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}
```

Tips

```nginx
upstream resinserver {
    # Defines the load-balanced backends and their states
    ip_hash;
    server 127.0.0.1:8000 down;
    server 127.0.0.1:8080 weight=2;
    server 127.0.0.1:6801;
    server 127.0.0.1:6802 backup;
}
```

In the server block that needs load balancing, add:

```nginx
proxy_pass http://resinserver/;
```


The state of each backend can be set with:
1. down: the server temporarily does not participate in the load.
2. weight: defaults to 1; the larger the weight, the greater the share of the load.
3. max_fails: the number of failed requests allowed, default 1. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
4. fail_timeout: the pause time after max_fails failures.
5. backup: receives requests only when all other non-backup machines are down or busy, so this machine carries the lightest load.

Nginx supports multiple load-balancing groups at the same time, for different server blocks to use.

client_body_in_file_only: when set to on, logs client POST data to a file, for debugging.
client_body_temp_path: sets the directory for those record files; up to three levels of subdirectories can be configured.
location: matches against the URL; it can redirect or hand the request to a new proxy / load-balancing group.
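As an illustrative sketch of those location capabilities (the /api/ and /old/ paths are assumptions; resinserver is the group defined above), a location can route one URL prefix to a dedicated upstream while redirecting another:

```nginx
# Route /api/ requests to a dedicated backend group
location /api/ {
    proxy_pass http://resinserver;
}

# Redirect an old path to a new one with a permanent (301) redirect
location /old/ {
    rewrite ^/old/(.*)$ /new/$1 permanent;
}
```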

That completes the setup, but in a production environment a single Nginx is certainly not enough: if Nginx goes down, the whole application goes down with it. In production we can deploy multiple Nginx instances and use F5 or LVS + Keepalived to balance across them.

F5 load-balances at the transport layer.

Nginx load-balances at the application layer.
