Over the past two days, while stress-testing a project, we found that whenever concurrency exceeded 250, connection exceptions would appear after two consecutive test rounds, and the more rounds we ran, the more frequently they occurred. The exception log is as follows:
Caused by: com.caucho.hessian.client.HessianConnectionException: 500: java.io.IOException: Error writing to server
    at com.caucho.hessian.client.HessianURLConnection.sendRequest(HessianURLConnection.java:142)
    at com.caucho.hessian.client.HessianProxy.sendRequest(HessianProxy.java:283)
    at com.caucho.hessian.client.HessianProxy.invoke(HessianProxy.java:170)
    at $Proxy168.sendOpenAcctInfo(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor750.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.springframework.remoting.caucho.HessianClientInterceptor.invoke(HessianClientInterceptor.java:219)
    at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
    at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
    at $Proxy169.sendOpenAcctInfo(Unknown Source)
    at com.shine.web.bean.OpenAcctBeanImpl.sendOpenAcctInfo(OpenAcctBeanImpl.java:62)
    ... more
Caused by: java.io.IOException: Error writing to server
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
    at sun.net.www.protocol.http.HttpURLConnection$6.run(HttpURLConnection.java:1345)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1339)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:993)
    at com.caucho.hessian.client.HessianURLConnection.sendRequest(HessianURLConnection.java:122)
    ... more
Caused by: java.io.IOException: Error writing to server
    at sun.net.www.protocol.http.HttpURLConnection.writeRequests(HttpURLConnection.java:453)
    at sun.net.www.protocol.http.HttpURLConnection.writeRequests(HttpURLConnection.java:465)
    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1047)
    at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:373)
    at com.caucho.hessian.client.HessianURLConnection.sendRequest(HessianURLConnection.java:109)
    ... more
We started by searching the web for "Error writing to server", but most of what turned up was irrelevant.
We then searched for Hessian and Spring compatibility issues and found that Spring 2.5.6 is incompatible with Hessian 4.0.7. After downgrading Hessian to 3.1.3 the situation improved, but after about ten test rounds the exception appeared again.
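For context, the client side of this stack is typically wired through Spring's Hessian support. A minimal sketch of such a bean definition follows; the bean id, service URL, and interface name are placeholders of ours, not taken from the project:

<!-- Spring 2.5.x Hessian client proxy (sketch; names and URL are hypothetical) -->
<bean id="openAcctService"
      class="org.springframework.remoting.caucho.HessianProxyFactoryBean">
    <property name="serviceUrl"
              value="http://app-server:8080/remoting/openAcctService"/>
    <property name="serviceInterface"
              value="com.shine.web.service.OpenAcctService"/>
</bean>

Dynamic proxies like the $Proxy168/$Proxy169 entries in the stack trace above come from this kind of wiring, which is why changing the client-side Hessian jar can alter behavior without any code changes.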
Using memory monitoring, we first ruled out JVM memory problems. The JVM heap was configured with -Xms1024m -Xmx1024m, and monitoring showed that less than half of it was actually being consumed.
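For reference, on a stock JBoss 4.x install this heap setting lives in the startup script; since the test server runs Windows, that would be bin/run.bat (an assumption about the layout, not stated in the post):

rem bin/run.bat - fix the heap at 1 GB, matching the monitored configuration
set JAVA_OPTS=%JAVA_OPTS% -Xms1024m -Xmx1024m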
Next we monitored operating-system port usage with netstat -na and found that occupied ports peaked below 500, so this cause was ruled out as well. (The registry on the test server had already been modified to reduce the TIME_WAIT timeout to 30 seconds, so port exhaustion was essentially not an issue.)
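The post does not name the registry value, but the conventional way to shorten TIME_WAIT on Windows is the TcpTimedWaitDelay value; a sketch of that change, with 30 seconds written as hex 0x1e:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"TcpTimedWaitDelay"=dword:0000001e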
CPU monitoring confirmed that CPU usage stayed below 50% even at peak concurrency.
Nothing indicated that any of these was causing the disconnections, so where was the bottleneck?
We turned our attention to JBoss's configuration.
First we confirmed the database connection pool configuration: the maximum number of connections was set to 50, and since the earlier rounds had run normally, the database connections should have been sufficient.
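On JBoss 4.x that limit would sit in a *-ds.xml datasource descriptor under deploy/. A minimal sketch follows; the JNDI name, connection details, and minimum pool size are hypothetical placeholders, with only max-pool-size taken from the post:

<datasources>
    <local-tx-datasource>
        <jndi-name>AppDS</jndi-name>
        <connection-url>jdbc:oracle:thin:@dbhost:1521:app</connection-url>
        <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
        <user-name>app</user-name>
        <password>secret</password>
        <min-pool-size>5</min-pool-size>
        <!-- the limit discussed above -->
        <max-pool-size>50</max-pool-size>
    </local-tx-datasource>
</datasources>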
Then we checked the JBoss thread pool and found it still at the default configuration:
<mbean code="org.jboss.util.threadpool.BasicThreadPool"
       name="jboss.system:service=ThreadPool">
    <attribute name="Name">JBoss System Threads</attribute>
    <attribute name="ThreadGroupName">System Threads</attribute>
    <!-- How long a thread will live without any tasks in MS -->
    <attribute name="KeepAliveTime">60000</attribute>
    <!-- The max number of threads in the pool -->
    <attribute name="MaximumPoolSize">10</attribute>
    <!-- The max number of tasks before the queue is full -->
    <attribute name="MaximumQueueSize">1000</attribute>
    <!-- The behavior of the pool when a task is added and the queue is full.
         abort - a RuntimeException is thrown
         run - the calling thread executes the task
         wait - the calling thread blocks until the queue has room
         discard - the task is silently discarded without being run
         discardOldest - check to see if a task is about to complete and enque
             the new task if possible, else run the task in the calling thread
    -->
    <attribute name="BlockingMode">run</attribute>
</mbean>
Searching for documentation on these settings, we found a recommendation that under high concurrency MaximumPoolSize be raised to roughly 125% of the expected number of concurrent requests.
Since our test load was not sustained concurrency, we first raised the thread pool size to 200 and found that 300 concurrent users ran without problems; we then raised concurrency to 500 and ran the test continuously for 6 hours with no exceptions.
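Concretely, the fix is a one-attribute change to the mbean shown earlier (in conf/jboss-service.xml on a stock JBoss 4.x layout, which is our assumption about where the file lives):

<mbean code="org.jboss.util.threadpool.BasicThreadPool"
       name="jboss.system:service=ThreadPool">
    ...
    <!-- raised from the default of 10 -->
    <attribute name="MaximumPoolSize">200</attribute>
    ...
</mbean>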
What we are now more curious about is why 250 concurrent users ran without errors while anything above 250 failed frequently. What exactly is the relationship between this value and the MaximumPoolSize parameter?
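One configuration worth checking, though this is speculation on our part rather than something the original test confirmed: on JBoss 4.2.x the Tomcat HTTP connector bundled in deploy/jboss-web.deployer/server.xml also ships with a thread cap of exactly 250, the same value at which the errors began:

<!-- Connector as shipped with JBoss 4.2.x (sketch of the default server.xml);
     note maxThreads="250" -->
<Connector port="8080" address="${jboss.bind.address}"
           maxThreads="250" maxHttpHeaderSize="8192"
           emptySessionPath="true" protocol="HTTP/1.1"
           enableLookups="false" redirectPort="8443" acceptCount="100"
           connectionTimeout="20000" disableUploadTimeout="true"/>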
http://blog.csdn.net/nicholas_lin/article/details/20639481
http://wenku.baidu.com/link?url=eUQiTt73bQN_xbhvnpahdnsmyfldfqqxk1af5pp2dhtgbro4nhaws7rem8wzy5wviioeuax5uquuqtncm9drsnmjetboto1nniklsetzh6s
JBoss configuration to resolve high-concurrency connection exceptions (repost)