The previous article covered how to install Nginx; this article explains how to use Nginx + Tomcat to achieve load balancing.
First of all, why use Nginx for load balancing? The best-known load balancer is F5, but F5 is a hardware appliance that typically costs tens of thousands, hundreds of thousands, even millions. For an ordinary small company that is a large expense, so it is worth trying a software load balancer instead; the result may be somewhat weaker, but it can still do the job.

Environment Preparation
Prepare three machines running CentOS 6.5. Two of them have Tomcat 7 installed and one has Nginx 3.0.2 installed. How to install Tomcat and Nginx is not covered again here; please look up the details online.
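Before going further it is worth confirming that each Tomcat answers on port 8080. Here is a quick check; the addresses below are the Tomcat servers as they appear later in the Nginx upstream, so substitute your own:

# both commands should return an HTTP 200 response from Tomcat
curl -I http://192.168.96.130:8080/
curl -I http://192.168.96.131:8080/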
We also need a server running the Redis service. Ideally Redis itself would be set up as a cluster, but since the point here is to demonstrate the Tomcat cluster, a single Redis server is used. For installing Redis and setting it up as a service, refer to the earlier article on installing Redis on Linux and configuring it as a service.
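To make sure Redis is reachable from the Tomcat machines, a simple check (assuming the Redis host 192.168.20.128 and default port 6379 that appear in the Tomcat configuration below, and that redis-cli is installed on the machine you run it from):

# run from each Tomcat server; a healthy, remotely reachable Redis replies with PONG
redis-cli -h 192.168.20.128 -p 6379 ping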
Test Program

Once the environment is ready, we write a simple test program to see whether our two Tomcat services really use the same Redis service as their session store. The application is shown below. The two Tomcat servers deploy the same application; the only difference is that each page also prints the last octet of its server's IP address, so we can tell which server handled the request.
<body>
This is the first page 128
<%= session.getId() %>
</body>
<body>
This is the second page 129
<%= session.getId() %>
</body>
If the two pages show the same session ID, we have proof that the two Tomcat servers share their sessions through Redis.

Configure Tomcat
To let Tomcat store its sessions in Redis we need the session-sharing jar packages, three in all, as shown below. These jars are not that easy to find, so a download address is provided here: Tomcat+redis shared session.
After the jar packages are downloaded, copy them into each Tomcat's lib directory, then open tomcat/conf/context.xml and add the following just above the closing </Context> tag. The host attribute is the Redis server's IP address; maxInactiveInterval is the session timeout in seconds, and the value shown is only an example:
<Valve className="com.orangefunction.tomcat.redissessions.RedisSessionHandlerValve"/>
<!-- host: the Redis server's IP address; maxInactiveInterval: session timeout in seconds (example value) -->
<Manager className="com.orangefunction.tomcat.redissessions.RedisSessionManager"
         host="192.168.20.128"
         port="6379"
         database="0"
         maxInactiveInterval="60"/>
With that, Tomcat is configured on both servers.
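To confirm that sessions really end up in Redis, restart both Tomcats, open the test page once in a browser, and then list the keys on the Redis server (host and port as configured above):

# each active session should appear as a key, typically named after its session id
redis-cli -h 192.168.20.128 -p 6379 keys '*'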
Nginx Configuration

After Nginx is installed, edit the /usr/local/nginx/conf/nginx.conf configuration file. Below is the simplest workable configuration. The main points are the upstream block listing our Tomcat servers' addresses and ports, and optionally their weights (for example, server 192.168.96.130:8080 weight=2; would send that server twice as many requests), plus the proxy_pass that forwards requests to that upstream.
#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    upstream myServer {
        server 192.168.96.130:8080;
        server 192.168.96.131:8080;
    }

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;
        #access_log  logs/host.access.log  main;

        location / {
            #proxy_pass http://192.168.96.130:8080;
            #root   html;
            #index  index.html index.htm;
            proxy_pass http://myServer;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root concurs with nginx's one
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;
    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;
    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;
    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
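Before relying on the new configuration, let Nginx check it and then reload it. Assuming a default source install under /usr/local/nginx, the commands are roughly:

# verify the configuration file for syntax errors
/usr/local/nginx/sbin/nginx -t
# reload Nginx so the upstream and proxy_pass changes take effect
/usr/local/nginx/sbin/nginx -s reload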
With that, all of our preparation is done. Next we test.

Validation Results
First we access the two Tomcats directly, the 128 server and then the 129 server, and we see that the two session IDs are different.
Then we access the application through Nginx and refresh a few times. Nginx picks a server for each request, so the page sometimes comes from 128 and sometimes from 129, but the session ID is the same either way. From this we conclude that the two Tomcat servers share their sessions.
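The same check can be scripted with curl. The URL below is only a placeholder; replace it with the address Nginx listens on and the path of your test page:

# the cookie jar keeps the JSESSIONID between requests, just like a browser would
curl -s -c cookies.txt -b cookies.txt http://localhost/test/index.jsp
curl -s -c cookies.txt -b cookies.txt http://localhost/test/index.jsp
# the two responses may come from different Tomcats (page 128 or 129),
# but they should print the same session id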
So our Tomcat now uses Redis for session sharing, and Nginx gives us load balancing. But think about it: with only one Nginx, if the server running Nginx goes down, the whole application is dead. So we should also build a highly available setup. The next article will introduce Nginx + Keepalived to achieve highly available load balancing.