Improve program performance by using the JDK jps and jstack tools to locate code problems



Today we share how to use the JDK's jps and jstack tools to locate code problems and improve the stability, robustness, and performance of a program.


The JDK's bin directory contains many diagnostic tools.



jps and jstack: Tool Introduction


jps

jps is a command introduced in JDK 1.5 that displays the PIDs of all current Java processes. It is simple and practical, and well suited for a quick look at the Java processes running on a Linux/Unix platform.

Command format: jps [options] [hostid]

[options]:

-q: output only the VM identifiers (PIDs), without the class name, jar name, or arguments passed to the main method

-m: output the arguments passed to the main method

-l: output the full package name of the main class, or the full path of the jar file

-v: output the arguments passed to the JVM

-V: output the arguments passed to the JVM through a flags file (the .hotspotrc file, or the file specified by -XX:Flags=<file>)

-J<option>: pass an option to the JVM running jps, for example -J-Xms1024m

Using -l as an example:


In the output, the first column is the process ID and the second is the process name (the running program), such as ActiveMQ (/home/activemq/apache-activemq-5.5.1/bin/run.jar), Dubbo services (com.alibaba.dubbo.container.Main), Tomcat (org.apache.catalina.startup.Bootstrap), and so on.


jstack


jstack is a stack-trace tool that ships with the Java virtual machine. It generates a snapshot of the threads in the JVM at the current moment. A thread snapshot is the collection of method stacks being executed by every thread in the JVM; the primary purpose of generating one is to locate the cause of long thread pauses, such as deadlocks between threads, infinite loops, and long waits on external resources.


When threads pause, jstack lets you view the call stack of each thread, revealing what an unresponsive thread is doing in the background and which resources it is waiting for.
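To make the deadlock scenario concrete, here is a minimal self-contained sketch (the class and thread names are mine, not from the article) that manufactures the classic two-lock deadlock and then detects it through ThreadMXBean, which exposes the same per-thread data that jstack reads for its "Found one Java-level deadlock" report. Running jstack against a process in this state would show both threads as BLOCKED on each other's monitor.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

public class DeadlockDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    // Starts two daemon threads that acquire LOCK_A and LOCK_B in opposite
    // order, producing the classic deadlock that jstack reports as
    // "Found one Java-level deadlock". Returns the deadlocked thread IDs.
    public static long[] createAndDetectDeadlock() throws InterruptedException {
        CountDownLatch bothHoldFirstLock = new CountDownLatch(2);
        Thread t1 = new Thread(() -> {
            synchronized (LOCK_A) {
                bothHoldFirstLock.countDown();
                awaitQuietly(bothHoldFirstLock);   // wait until t2 holds LOCK_B
                synchronized (LOCK_B) { }          // now blocks forever
            }
        }, "deadlock-thread-1");
        Thread t2 = new Thread(() -> {
            synchronized (LOCK_B) {
                bothHoldFirstLock.countDown();
                awaitQuietly(bothHoldFirstLock);   // wait until t1 holds LOCK_A
                synchronized (LOCK_A) { }          // now blocks forever
            }
        }, "deadlock-thread-2");
        t1.setDaemon(true);   // daemon threads so the JVM can still exit
        t2.setDaemon(true);
        t1.start();
        t2.start();

        // Poll the same monitor data jstack reads until the deadlock is visible.
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] ids = null;
        for (int i = 0; i < 100 && ids == null; i++) {
            Thread.sleep(50);
            ids = mx.findDeadlockedThreads();  // null while no deadlock exists
        }
        return ids;
    }

    private static void awaitQuietly(CountDownLatch latch) {
        try {
            latch.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The CountDownLatch guarantees both threads hold their first lock before either attempts the second, so the deadlock is deterministic rather than timing-dependent.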

Command format: jstack [option] pid

[options]:

-F: force a thread dump when "jstack [-l] pid" does not respond;

-l: long listing. Prints additional information about locks, such as the list of owned java.util.concurrent ownable synchronizers.

-m: prints stack information for both Java and native (C/C++) frames.

-h | -help: prints help information

pid: the ID of the Java process whose stack information should be printed; it can be found with the jps tool.

Using -l as an example:




By looking at the call stack of each thread, we can see what an unresponsive thread is doing in the background and which resources it is waiting for.


Using jps and jstack to find code problems


Principle:


Use the jps command to find the process of the target program and record its process ID; use the jstack command to dump the thread stack information of that process to a file; then analyze the contents of the file to find the cause and solve the problem.

Note: this approach is especially useful for finding code problems under concurrent load.
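The jstack half of this loop can also be approximated from inside the application itself. As an illustrative sketch (the class name is mine), Thread.getAllStackTraces() returns the same per-thread stacks that jstack prints, which is handy when you want a dump without shelling out to the JDK tools:

```java
import java.util.Map;

public class ThreadDumpDemo {
    // Builds a jstack-like dump of every live thread in this JVM.
    // Thread.getAllStackTraces() returns a stack trace for each live
    // thread, which is the same information jstack prints per thread.
    public static String dumpAllThreads() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            Thread t = e.getKey();
            // Header line, loosely mimicking jstack's format.
            sb.append('"').append(t.getName()).append('"')
              .append(t.isDaemon() ? " daemon" : "")
              .append(" prio=").append(t.getPriority())
              .append(" tid=").append(t.getId())
              .append("\n   java.lang.Thread.State: ")
              .append(t.getState()).append('\n');
            for (StackTraceElement frame : e.getValue()) {
                sb.append("        at ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }
}
```

Calling ThreadDumpDemo.dumpAllThreads() inside a running server and writing the result to a log gives roughly the same material to analyze as "jstack pid > log1.log".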


Steps:


1). Go to the bin directory of the JDK;


2). Use the jps tool to view the Tomcat process (on Linux you can also use the ps command);

Because only one Tomcat is installed here, the jps -m command is used (if more than one Tomcat is installed, use the jps -v command, whose output shows the path of each Tomcat).



3). Execute the jstack command and redirect its output to a log file;


Command:


[... bin]# jstack 28501 > log1.log

Open the log1.log file and look at its contents:

[... bin]# tail -200f log1.log
        - locked <0x00000007058fc4b0> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:484)
        at java.lang.Thread.run(Thread.java:662)

"http-8090-11" daemon prio=10 tid=0x00002aaab4047000 nid=0x7181 in Object.wait() [0x000000004238b000]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x00000007056433f0> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:458)
        - locked <0x00000007056433f0> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:484)
        at java.lang.Thread.run(Thread.java:662)

"http-8090-10" daemon prio=10 tid=0x00002aaab46ec000 nid=0x7180 in Object.wait() [0x000000004228a000]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x0000000705643610> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:458)
        - locked <0x0000000705643610> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:484)
        at java.lang.Thread.run(Thread.java:662)

"http-8090-9" daemon prio=10 tid=0x00002aaab4525800 nid=0x717f in Object.wait() [0x000000004078a000]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x0000000705643898> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at java.lang.Object.wait(Object.java:485)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:458)
        - locked <0x0000000705643898> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
        at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:484)
        at java.lang.Thread.run(Thread.java:662)


4). Analyze the log file


Since the Tomcat above was not under a stress test, its dump contains no threads from our own program code. Instead, we analyze a log that was exported after an earlier stress test. Part of that log is shown below:

"DubboClientHandler-192.168.6.162:2099-thread-2" daemon prio=10 tid=0x00002aaac069a000 nid=0x14c1 waiting on condition [0x000000004867f000]
   java.lang.Thread.State: TIMED_WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for <0x0000000782c4d8e0> (a java.util.concurrent.SynchronousQueue$TransferStack)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:945)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
        at java.lang.Thread.run(Thread.java:662)

"DubboClientHandler-192.168.6.162:2099-thread-2" daemon prio=10 tid=0x000000005be86000 nid=0x14c0 waiting on condition [0x000000004863e000]
   java.lang.Thread.State: TIMED_WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for <0x0000000782c61ac0> (a java.util.concurrent.SynchronousQueue$TransferStack)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:945)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
        at java.lang.Thread.run(Thread.java:662)

"http-portal%2f192.168.6.162-8097-184" daemon prio=10 tid=0x00002aaab0ecc800 nid=0x2d7a waiting for monitor entry [0x0000000045fa9000]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at org.apache.log4j.Category.callAppenders(Category.java:204)
        - waiting to lock <0x00000007800020a0> (a org.apache.log4j.spi.RootLogger)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.log(Category.java:856)
        at org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:304)


From the last entry in the log we can see that the logging thread is in the BLOCKED state, inside the callAppenders method of the org.apache.log4j.Category class. The code of that method is as follows:

/**
   Call the appenders in the hierrachy starting at
   <code>this</code>.  If no appenders could be found, emit a
   warning.

   <p>This method calls all the appenders inherited from the
   hierarchy circumventing any evaluation of whether to log or not
   to log the particular log request.

   @param event the event to log.  */
public void callAppenders(LoggingEvent event) {
    int writes = 0;
    for (Category c = this; c != null; c = c.parent) {
        // Protected against simultaneous call to addAppender, removeAppender,...
        synchronized (c) {
            if (c.aai != null) {
                writes += c.aai.appendLoopOnAppenders(event);
            }
            if (!c.additive) {
                break;
            }
        }
    }
    if (writes == 0) {
        repository.emitNoAppenderWarning(this);
    }
}


As can be seen above, the method contains a synchronized block. That lock causes threads to compete with each other, so under heavy concurrency it creates performance problems and can leave threads stuck in the BLOCKED state.
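To see this mechanism in isolation, here is a minimal sketch (the class, method, and thread names are mine, not from the article) in which one thread holds a monitor while a second thread tries to enter it. Sampling the second thread's state reproduces exactly the BLOCKED (on object monitor) / "waiting to lock <...>" entries from the dump above:

```java
import java.util.concurrent.CountDownLatch;

public class BlockedStateDemo {
    private static final Object MONITOR = new Object();

    // Reproduces, in miniature, what the stress-test dump showed: one
    // thread holds a monitor while another queues up BLOCKED behind it.
    // Returns the contending thread's state while it waits for the lock.
    public static Thread.State stateWhileContending() throws InterruptedException {
        CountDownLatch holderReady = new CountDownLatch(1);
        Thread holder = new Thread(() -> {
            synchronized (MONITOR) {
                holderReady.countDown();
                sleepQuietly(5000);      // hold the lock, like a slow appender
            }
        }, "lock-holder");
        holder.setDaemon(true);
        holder.start();
        holderReady.await();             // make sure the monitor is taken

        Thread contender = new Thread(() -> {
            synchronized (MONITOR) { }   // must wait for the holder
        }, "lock-contender");
        contender.setDaemon(true);
        contender.start();

        // Poll until the contender is parked on the monitor (jstack would
        // show it as BLOCKED, "waiting for monitor entry").
        for (int i = 0; i < 100; i++) {
            if (contender.getState() == Thread.State.BLOCKED) {
                break;
            }
            Thread.sleep(20);
        }
        return contender.getState();
    }

    private static void sleepQuietly(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Every call that passes through a single shared synchronized block serializes in this way, which is why the contention only shows up under concurrent load.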

This problem can be solved by logging through the Apache Commons Logging API instead, with code like the following:

private static final Log log = LogFactory.getLog("xxx");

Testing confirmed that after this change, the org.apache.log4j.Category.callAppenders thread BLOCKED problem was gone.


The above is just a very simple example; in actual development you can find many problems by examining thread states this way. It is well worth trying.


------------------------------------------------------------ Copyright Notice ------------------------------------------------------------


Copyright notice: this is an original article by the blogger and may not be reproduced without permission. Blog address: http://blog.csdn.net/mr_smile2014



