While researching load balancing recently, I first tried the Apache mod_jk module. I found that mod_jk's load configuration is rather rigid: it can only distribute requests according to fixed load-weight values and cannot do more intelligent balancing. Pages served through mod_jk were also noticeably slow, probably because of the time mod_jk spends routing requests to the real node servers. Given these shortcomings of mod_jk, this article uses mod_proxy for load balancing and routing instead.
The mod_jk drawbacks mentioned above:
1) The load-balancing weights are hard-coded in the configuration file. The balancing strategy cannot adapt to the actual runtime condition of each machine, which makes it inflexible.
2) Although session sharing can be configured in Apache, sessions are not actually shared across the nodes. If one machine goes down, the client sessions on that machine are lost, so fault tolerance is poor.
My environment is as follows:
OS: Windows 7
HTTP server: Apache HTTP Server 2.2.17
Tomcat: apache-tomcat-6.0.29
Let's see how to load the mod_proxy module.
- Loading the relevant Apache modules
Uncomment the following lines in the configuration file httpd.conf:

# load mod_proxy and related modules
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
LoadModule proxy_http_module modules/mod_proxy_http.so

These modules ship with Apache 2.2.x, so you only need to remove the leading # comments.
Modify the <IfModule dir_module> section as follows:

<IfModule dir_module>
    DirectoryIndex index.html index.jsp
</IfModule>
At the end of the configuration file, add the following:

<VirtualHost *:8011>
    ServerAdmin [email protected]
    ServerName localhost
    ServerAlias localhost
    ProxyPass / balancer://mycluster/ stickysession=JSESSIONID nofailover=Off
    ProxyPassReverse / balancer://mycluster/
    ErrorLog "logs/error.log"
    CustomLog "logs/access.log" common
</VirtualHost>
Here VirtualHost *:8011 is the port of my local HTTP server.
ProxyPass / balancer://mycluster/ forwards all requests to balancer://mycluster/ for processing; balancer is Apache's built-in load balancer.
ProxyPassReverse / balancer://mycluster/ configures the reverse proxy, rewriting responses from the load-balanced backend URLs back into the proxy's URL space.
stickysession=JSESSIONID nofailover=Off enables sticky sessions, i.e. session affinity.
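To see how sticky sessions work: Tomcat appends the jvmRoute name (configured later in server.xml) to the session cookie, and mod_proxy_balancer matches that suffix against the route= of each BalancerMember to send the request back to the same node. An illustration (the session ID value below is made up):

```text
Cookie: JSESSIONID=A1B2C3D4E5F6.tomcat7_node1
                   |-session id-| |-- route --|
```

The route suffix "tomcat7_node1" matches route=tomcat7_node1 on the first BalancerMember, so that node keeps serving this client.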
Then add the following to the end of the configuration file to define the nodes:

ProxyRequests Off
<Proxy balancer://mycluster>
    BalancerMember ajp://127.0.0.1:18009 loadfactor=1 route=tomcat7_node1
    BalancerMember ajp://127.0.0.1:28009 loadfactor=1 route=tomcat7_node2
    # status=+H configures a hot standby; it only receives requests when all other members are down
    # BalancerMember http://192.168.1.218:8009 status=+H
    # balance by number of requests (default)
    # ProxySet lbmethod=byrequests
    # balance by traffic (weighted byte count)
    # ProxySet lbmethod=bytraffic
    # balance by busyness, i.e. send each new request to the least-loaded member
    ProxySet lbmethod=bybusyness
</Proxy>
This configures the two Tomcat nodes as well as the load-balancing policy. ProxySet lbmethod selects the algorithm: bybusyness assigns each new request to the member with the fewest requests currently in flight, while byrequests (the default) simply balances the total request count. Also note that the name in <Proxy balancer://mycluster> must match the balancer name used in ProxyPass above.
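To watch the balancer at work, mod_proxy_balancer provides a built-in status page. A minimal sketch (the /balancer-manager path and the access restriction are my choices, not from the original setup); add it at global scope or inside the VirtualHost:

```apache
# hypothetical: expose the mod_proxy_balancer status page (Apache 2.2 access-control syntax)
<Location /balancer-manager>
    SetHandler balancer-manager
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
```

Browsing to http://localhost:8011/balancer-manager then shows each BalancerMember's status, route, and request counts, which makes it easy to verify the lbmethod is distributing load as expected.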
- Then prepare two Tomcat instances as the nodes
My Tomcat version is apache-tomcat-6.0.29. I originally intended to use apache-tomcat-7.0.6, but that version turned out to have some problems, which the official site confirms.
First, look at the configuration file of the first node, Apache-tomcat-6.0.29.2-node1.
Modify the shutdown port:

<Server port="8005" shutdown="SHUTDOWN">

Modify the HTTP service port:

<Connector port="18080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="18443"/>

Modify the AJP protocol port:

<Connector port="18009" protocol="AJP/1.3" redirectPort="18443"/>
This AJP port is the channel through which Apache HTTP Server talks to Tomcat: Apache communicates with Tomcat over the AJP protocol, so the node port configured in Apache must match this port.
Add the jvmRoute name:

<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat7_node1">
Then, most importantly, uncomment the Tomcat cluster configuration as follows:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="228.0.0.4"
                mcastBindAddress="127.0.0.1"
                port="45564"
                frequency="500"
                dropTime="3000"/>
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="auto"
              tcpListenAddress="127.0.0.1"
              port="4000"
              autoBind="100"
              selectorTimeout="5000"
              maxThreads="6"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
  </Channel>
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
         filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
  <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="/tmp/war-temp/"
            deployDir="/tmp/war-deploy/"
            watchDir="/tmp/war-listen/"
            watchEnabled="false"/>
  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
The Apache-tomcat-6.0.29.2-node2 configuration is almost identical; only some ports differ, as follows:
<server port= "8006" shutdown= "shutdown" >..................<connector port= "28080" protocol= "HTTP/1.1 " connectiontimeout= "20000" redirectport= "28443"/>..................<connector port= "28009" protocol= "AJP/1.3" Redirectport= "28443"/>..................<engine name= "Catalina" defaulthost= "localhost" jvmroute= "tomcat7_node2"; ..... <receiver classname= "Org.apache.catalina.tribes.transport.nio.NioReceiver" address= "Auto" tcplistenaddress= "127.0.0.1" port= "4001" autobind= "" selectortimeout= " 6" maxthreads= " >
That completes the Tomcat configuration.
- Write a Web project test
The test page code is as follows:

<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8"%>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<%@ page import="java.util.*"%>
<%@ page import="java.net.InetAddress"%>

WEB-INF\web.xml is as follows:
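The body of the original test page did not survive extraction; a minimal reconstruction of what such a page typically prints (the node's host name and the current session ID, using the java.util and java.net.InetAddress imports above) might look like this — the markup below is my sketch, not the original code:

```jsp
<html>
<body>
  <h3>Session replication test</h3>
  <%-- which node served this request, and the session id with its route suffix --%>
  Served by host: <%= InetAddress.getLocalHost().getHostName() %><br/>
  Session ID: <%= session.getId() %><br/>
  Time: <%= new Date() %><br/>
</body>
</html>
```

Refreshing this page from the same browser should keep showing the same session ID even when the serving node changes.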
<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
  <distributable/>
  <welcome-file-list>
    <welcome-file>index.jsp</welcome-file>
  </welcome-file-list>
</web-app>

The <distributable/> element is what enables session replication for this application.
- Web Project Test Results
The test shows that within one browser (IE), the session is shared across the different nodes. If one node goes down it does not matter: the session has already been replicated to the other node, and switching between nodes does not lose it. Opening a different browser (Firefox) creates a new, independent session. This solves the two earlier problems: 1) the load-balancing algorithm can distribute pressure according to actual load; 2) sessions are replicated without any very complicated configuration, so if a node goes down the session keeps working on the other node, and node switching partway through a sequence of requests is transparent to the user.