In a complex enterprise environment, a single application server often cannot handle all service requests, so many organizations deploy multiple server instances. Together, these instances can be organized into a robust operating environment that is easy to extend, supports load balancing and failover, and makes back-end server failures transparent to clients. Such an environment is what we usually call a cluster. WebLogic Cluster offers load balancing in several forms: web request processing, for example, can be handled through a proxy (e.g. Apache, HttpClusterServlet, IIS), and different Java EE components have different load-balancing implementations in WebLogic. Let's look at them in turn.
1: HTTP request load balancing via proxy
When clients access business pages in the cluster through the proxy, the proxy uses its own algorithm (round-robin) to balance the load. These requests, of course, come from different clients (or from the same client without a session). For the same client, if the page uses a session, WebLogic uses session stickiness to dispatch that client's requests to its primary server. If the primary server cannot provide the service, the request is dispatched to another server. Session stickiness can be achieved in the following ways:
1.1: The browser supports cookies. WebLogic writes the JSESSIONID into a cookie; on the next request the JSESSIONID is submitted to the proxy, and the proxy uses it to decide where to dispatch the request.
1.2: The browser does not support cookies. The server calls response.encodeURL() to append the session ID to the URL when generating the returned page.
1.3: POST data. The session ID is posted to the proxy directly as form data.
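The cookie-based variant (1.1) boils down to the proxy extracting routing information from the session ID itself. Here is a minimal sketch of that idea; the "sessionid!primaryJvmId!secondaryJvmId" layout and the '!' delimiter are assumptions for illustration, not WebLogic's exact wire format:

```java
// Sketch: how a proxy might read session stickiness out of a session
// cookie. The "sessionid!primaryJvmId!secondaryJvmId" layout used here
// is an assumption for illustration.
public class StickyRouter {

    /** Returns the primary server id embedded in the session cookie value,
     *  or null when the cookie carries no routing information. */
    static String primaryServerId(String jsessionId) {
        if (jsessionId == null) return null;
        String[] parts = jsessionId.split("!");
        return parts.length >= 2 ? parts[1] : null; // parts[0] is the raw session id
    }

    public static void main(String[] args) {
        System.out.println(primaryServerId("A3xk9!server1!server2")); // primary = server1
        System.out.println(primaryServerId("A3xk9"));                 // no routing info
    }
}
```

If no routing information is present, the proxy falls back to its normal round-robin choice.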
Let's look at how the HttpClusterServlet that WebLogic provides implements load balancing:
public synchronized Server next() {
    if (list.size() == 0) return null;
    // First call: pick a random starting point in the server list.
    if (index == -1) index = (int) (java.lang.Math.random() * list.size());
    // Subsequent calls: advance to the next server, wrapping around.
    else index = ++index % list.size();
    Object[] servers = list.values().toArray();
    return (Server) servers[index];
}
HttpClusterServlet maintains a list of managed servers. Whenever a request is dispatched to a managed server, the index into the server list is incremented by one, so the next request is dispatched to the next server in the list. The logic is very simple, but it is enough to implement basic load balancing.
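For comparison, here is a self-contained version of the same rotate-the-index idea, using a plain List in place of WebLogic's internal server collection:

```java
import java.util.List;

// A minimal, self-contained version of the round-robin selection shown
// above: random starting index on the first call, then increment modulo
// the list size on every subsequent call.
public class RoundRobinList {
    private final List<String> servers;
    private int index = -1;

    RoundRobinList(List<String> servers) {
        this.servers = servers;
    }

    public synchronized String next() {
        if (servers.isEmpty()) return null;
        if (index == -1) {
            index = (int) (Math.random() * servers.size()); // random first pick
        } else {
            index = (index + 1) % servers.size();           // then rotate
        }
        return servers.get(index);
    }

    public static void main(String[] args) {
        RoundRobinList rr = new RoundRobinList(List.of("s1", "s2", "s3"));
        for (int i = 0; i < 6; i++) System.out.println(rr.next()); // cycles through all three
    }
}
```

Note that after the random first pick, any three consecutive calls visit all three servers exactly once.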
2: InitialContext load balancing
We know that every time we need to obtain a JDBC connection, a JMS connection, or an RMI/EJB stub, we first have to initialize a context. So when we initialize the context, which managed server do we actually connect to?
When initializing the context, we need to provide a provider URL, for example:
PROVIDER_URL = "t3://localhost:7011";
This form is simple: it connects directly to the server listening on port 7011. But what if it is written like this?
CLUSTER_PROVIDER_URL = "t3://localhost:7011,localhost:7021";
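Setting up such a clustered provider URL is ordinary javax.naming code. A sketch follows; WLInitialContextFactory is WebLogic's usual initial context factory, and no actual lookup is attempted here because connecting over t3:// requires the WebLogic client libraries:

```java
import java.util.Hashtable;
import javax.naming.Context;

// Sketch of the JNDI environment for a clustered provider URL. Only the
// environment is built here; an actual new InitialContext(env) over t3://
// needs the WebLogic client jars on the classpath.
public class ClusterContextEnv {

    static Hashtable<String, String> clusterEnv() {
        Hashtable<String, String> env = new Hashtable<>();
        // WebLogic's initial context factory class.
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        // Comma-separated address list: the client picks one entry per
        // new InitialContext, which is where the balancing happens.
        env.put(Context.PROVIDER_URL, "t3://localhost:7011,localhost:7021");
        return env;
    }

    public static void main(String[] args) {
        System.out.println(clusterEnv().get(Context.PROVIDER_URL));
    }
}
```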
This is where load balancing comes in again. If 10 clients (WebLogic servers or thin clients) each create a new InitialContext, those 10 clients will be split 5/5 between the two back-end servers. In fact, when a client calls new InitialContext, WebLogic creates a T3 connection (an RJVMConnection) to the corresponding managed server. Note that the RJVMConnection is a long-lived connection: within the same JVM there is only one connection to a given managed server. That is, if one client creates 10 InitialContexts in a row, those 10 contexts are actually the same object, and WebLogic will not talk to the back-end server again, because the object already exists in the client JVM.
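This "one long-lived connection per managed server per JVM" behaviour can be modelled with a simple cache. The Connection class below is a stand-in for illustration, not WebLogic's real RJVM implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy model of per-JVM connection reuse: requesting a connection to the
// same server URL twice yields the same object, so no second network
// round trip is needed. Connection here is a stand-in class.
public class ConnectionCache {

    static final class Connection {
        final String serverUrl;
        Connection(String serverUrl) { this.serverUrl = serverUrl; }
    }

    private static final Map<String, Connection> CACHE = new ConcurrentHashMap<>();

    /** Returns the existing connection to this server, creating it only once. */
    static Connection connect(String serverUrl) {
        return CACHE.computeIfAbsent(serverUrl, Connection::new);
    }

    public static void main(String[] args) {
        Connection a = connect("t3://localhost:7011");
        Connection b = connect("t3://localhost:7011");
        System.out.println(a == b); // same object: the connection is reused
    }
}
```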
The load-balancing algorithm behind new InitialContext is basically the same as the proxy's: it maintains a server list and rotates through it by incrementing an index. The difference is that the proxy can restore its server list after a connection to a managed server hits "peer gone", while the JNDI context's algorithm cannot. That is, if the back end has three managed servers and server1 and server2 fail, all client contexts will connect to server3; even after server1 and server2 recover, subsequent requests will not be routed to them unless server3 itself later has a problem.
It is also worth mentioning that all operations performed through a given context use server affinity rather than load balancing. For example: two clients each create a context, connecting to server1 and server2 respectively. If the context connected to server1 then performs 10 lookups, all 10 operations are completed on server1, and server2 sees none of them. So JNDI-level load balancing is not perfectly balanced.
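Server affinity can be illustrated with a toy model in which each context is pinned to the server it first connected to, so every lookup through it lands on that one server:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of server affinity: a context remembers the server it
// connected to, and every lookup through it counts against that server
// only. PinnedContext is an illustration, not a WebLogic class.
public class AffinityDemo {

    static final class PinnedContext {
        final String server;
        PinnedContext(String server) { this.server = server; }

        String lookup(String name, Map<String, Integer> hits) {
            hits.merge(server, 1, Integer::sum); // every lookup hits the pinned server
            return server + "/" + name;
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> hits = new HashMap<>();
        PinnedContext ctx = new PinnedContext("server1");
        for (int i = 0; i < 10; i++) ctx.lookup("jdbc/ds", hits);
        System.out.println(hits); // all 10 lookups landed on server1; server2 saw none
    }
}
```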