I recently ran into a case in production. The application structure is nginx -> resin -> java: nginx does load balancing, and Resin serves as the Java container. HTTP status-code monitoring at the nginx layer raised an alarm for a high ratio of 4xx responses:
[Figure: monitoring chart showing the 4xx alarm ratio]
Analyzing the nginx logs showed that the alarm was caused by a high ratio of 499 responses:
xxxxx xxxxx - [29/Oct/2012:04:10:03 +0800] "GET /getconfiguration.jsp?peer_version=2.3.0.4779&peer_id=e00b3b81b458d7d5a3c2e2bd85865354 HTTP/1.0" 499 0 "-" "-" "-" "-" xxxxx:8080 "-" 0.001
xxxxx xxxxx - [29/Oct/2012:04:10:03 +0800] "GET /getconfiguration.jsp?peer_version=2.3.0.4779&peer_id=e00b3b81b458d7d5a3c2e2bd85865354 HTTP/1.0" 499 0 "-" "-" "-" "-" xxxxx:8080 "-" 0.000
A 499 is usually caused by a back-end response timeout (the client gives up and closes the connection before the back end answers). Here, however, the response times on the 499 requests are within milliseconds, which proves it is not a timeout: the back end simply cannot respond at all. For a Java application there are two common scenarios for this, one being a thread lock and the other being a stack problem.
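The 499 ratio above can be measured directly from the access log. A minimal sketch (class name and regex are my own; it assumes the common nginx layout where the status code follows the quoted request string):

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Count HTTP status codes in nginx access-log lines to spot a 499 spike.
public class StatusRatio {
    // Matches: closing quote of the request, whitespace, 3-digit status, whitespace, bytes sent.
    private static final Pattern STATUS = Pattern.compile("\"\\s+(\\d{3})\\s+\\d+");

    public static Map<String, Integer> count(List<String> lines) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            Matcher m = STATUS.matcher(line);
            if (m.find()) {
                counts.merge(m.group(1), 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> sample = List.of(
            "x - [29/Oct/2012:04:10:03 +0800] \"GET /a HTTP/1.0\" 499 0 \"-\"",
            "x - [29/Oct/2012:04:10:03 +0800] \"GET /b HTTP/1.0\" 200 123 \"-\"");
        System.out.println(count(sample)); // {200=1, 499=1}
    }
}
```

In practice you would stream the file with `Files.lines(...)` instead of an in-memory list; the sample just shows the parsing.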
Printing a thread dump with jstack, we found threads blocked like the following:
"Http-0.0.0.0:8080-65096$1864960835" daemon prio=10 tid=0x000000004c169800 nid=0x4bf8 waiting for monitor entry [0x0000000043e8e000. 0x0000000043e8ed10] java.lang.thread.state: blocked (On object monitor) at com.caucho.server.log.accesslog.log ( accesslog.java:345) - waiting to lock <0x00002aaab522a638> (A java.lang.object) at com.caucho.server.webapp.webappfilterchain.dofilter (webappfilterchain.java:223) at com.caucho.server.dispatch.servletinvocation.service ( servletinvocation.java:265) at Com.caucho.server.http.HttpRequest.handleRequest (httprequest.java:273) &NBSP;&NBSP;&NBSP;&NBSP;&NBSP;&NBSp; at com.caucho.server.port.tcpconnection.run (tcpconnection.java:682) - locked <0x00002aab3a6542f8> (a Java.lang.Object) at com.caucho.util.threadpool$ Item.runtasks (threadpool.java:730) at Com.caucho.util.threadpool$item.run (threadpool.java:649) At java.lang.thread.run (thread.java:619)
This is actually a known bug: in Resin 3.1.9 and 3.1.11, log rollover can hit an access-log lock problem. The sync lock is not released properly, so request threads stay blocked.
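The pattern can be reproduced with a small self-contained sketch (this is not Resin's actual code; all names here are made up): one thread holds the monitor that the access-log writer synchronizes on, and every request thread that tries to log goes into BLOCKED, the same "waiting for monitor entry" state jstack showed above.

```java
// Sketch of the failure mode: a "rollover" thread holds the log monitor
// and never releases it, so request threads block in log().
public class AccessLogLockDemo {
    private static final Object logLock = new Object();

    static void log(String line) {
        synchronized (logLock) {
            // write the line to the access log (omitted)
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate the stuck rollover: grab the lock and hold it.
        Thread rollover = new Thread(() -> {
            synchronized (logLock) {
                try { Thread.sleep(60_000); } catch (InterruptedException ignored) {}
            }
        });
        rollover.setDaemon(true);
        rollover.start();
        Thread.sleep(200); // let the rollover thread acquire the lock

        // A request thread now tries to log and blocks on the same monitor.
        Thread request = new Thread(() -> log("GET /getconfiguration.jsp 499"));
        request.setDaemon(true);
        request.start();
        Thread.sleep(200);

        System.out.println(request.getState()); // expected: BLOCKED
    }
}
```

In this state nginx sees a worker that accepts the connection but never produces a response, which it records as a 499 within milliseconds once the client disconnects.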
Bug IDs:
http://bugs.caucho.com/view.php?id=3509
http://bugs.caucho.com/view.php?id=4821
The workaround is also relatively simple:
1. Upgrade Resin to 4.0.2 or later
2. If you do not care about the Resin access log, you can disable logging
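For the second option, Resin 3.x enables access logging through an <access-log> element in resin.conf; commenting it out (or removing it) disables the log for that host. A sketch, with paths and format taken from the typical default configuration rather than this specific deployment:

```xml
<!-- resin.conf: comment out (or delete) the access-log element to
     disable access logging and avoid the rollover lock entirely. -->
<host id="">
  <!--
  <access-log path="log/access.log"
              format='%h %l %u %t "%r" %s %b'
              rollover-period="1D"/>
  -->
</host>
```

Note that this trades away the access log itself, so nginx's log becomes your only per-request record.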
This article is from the "Food and Light Blog"; if you repost it, please keep this source: http://caiguangguang.blog.51cto.com/1652935/1548047
Resin: an example of a lock problem caused by the access log