One: Jstack
The syntax of the jstack command is: jstack <pid>. You can find the Java process ID with jps. Two points to note:
1. Different Java virtual machines generate thread dumps in different ways and with different formats, and the dump contents also differ between JVM versions. This article simply uses Sun's HotSpot JVM 5.0_06 as the example.
2. In practice, a single dump is often not enough to confirm a problem. It is recommended to generate three dumps; if every dump points to the same problem, we can conclude that it is the real, typical problem.
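For illustration (the process ID and the main class name below are made up), you might first list the Java processes with jps and then capture three dumps a little while apart:

    $ jps
    12345 Bootstrap
    12350 Jps
    $ jstack 12345 > dump1.txt
    $ jstack 12345 > dump2.txt
    $ jstack 12345 > dump3.txt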
Two: Threading Analysis
2.1. JVM Threads
Among the threads in a dump are background threads internal to the JVM, which perform tasks such as garbage collection or low-memory detection. These threads exist from the time the JVM is initialized, for example:
"Low Memory Detector" daemon prio=10 tid=0x081465f8 nid=0x7 runnable [0x00000000..0x00000000]

"CompilerThread0" daemon prio=10 tid=0x08143c58 nid=0x6 waiting on condition [0x00000000..0xfb5fd798]

"Signal Dispatcher" daemon prio=10 tid=0x08142f08 nid=0x5 waiting on condition [0x00000000..0x00000000]

"Finalizer" daemon prio=10 tid=0x08137ca0 nid=0x4 in Object.wait() [0xfbeed000..0xfbeeddb8]
    at java.lang.Object.wait(Native Method)
    - waiting on <0xef600848> (a java.lang.ref.ReferenceQueue$Lock)
    at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:116)
    - locked <0xef600848> (a java.lang.ref.ReferenceQueue$Lock)
    at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:132)
    at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)

"Reference Handler" daemon prio=10 tid=0x081370f0 nid=0x3 in Object.wait() [0xfbf4a000..0xfbf4aa38]
    at java.lang.Object.wait(Native Method)
    - waiting on <0xef600758> (a java.lang.ref.Reference$Lock)
    at java.lang.Object.wait(Object.java:474)
    at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
    - locked <0xef600758> (a java.lang.ref.Reference$Lock)

"VM Thread" prio=10 tid=0x08134878 nid=0x2 runnable

"VM Periodic Task Thread" prio=10 tid=0x08147768 nid=0x8 waiting on condition
We can see:
* The thread's status: waiting on condition
* The thread's call stack
* The resource the thread currently has locked: <0xef63d600>
2.2. Thread State Analysis
As we just saw, the thread state is an important indicator; it appears at the end of each thread's stack-trace header line. So what are the common thread states? Under what circumstances does a thread enter each state? And what clues can we draw from them?
1.1 Runnable
This state indicates that the thread has everything it needs to run: it is either in the run queue waiting to be scheduled by the operating system, or it is actually running.
1.2 Wait on condition
This state occurs when a thread is waiting for some condition to occur; the specific reason can be worked out from the stack trace. The most common case is that the thread is waiting on network I/O: when no network data is ready to be read, the thread sits in this wait state, and once data becomes available it is reactivated to read and process it. Before NIO was introduced in Java, each network connection had a dedicated thread to handle its reads and writes; even when there was no data to transfer, the thread stayed blocked on the read/write operation, which could waste resources and put pressure on the operating system's thread scheduler. NIO introduced a new mechanism that improved the performance and scalability of server programs written in Java.
If you find a large number of threads in wait on condition, and their stacks show they are waiting on network reads or writes, this can be a symptom of a network bottleneck: the threads cannot proceed because the network is blocked. One possibility is that the network is saturated, nearly all the bandwidth is consumed, and a large amount of data is still waiting to be read or written. Another possibility is that the network is idle but, because of problems such as routing, packets are not arriving properly. You should therefore combine this with the system's performance observation tools for a comprehensive analysis: for example, use netstat to count the packets sent per unit time and see whether the network bandwidth limit is being approached; observe CPU utilization and see whether system-state CPU time is higher than user-state CPU time; and if the program runs on Solaris 10, use DTrace to inspect the system calls: if read/write system calls dominate in count or time, that points to a network bottleneck caused by limited bandwidth. Another common scenario in which wait on condition appears is a thread in sleep, waiting to be woken when the sleep time expires.
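For illustration (the class and thread names are made up), here is a minimal sketch of the sleep case: while the "sleeper" thread is inside Thread.sleep(), a jstack of this process would typically show it as waiting on condition.

public class SleepDemo {
    public static void main(String[] args) throws Exception {
        Thread sleeper = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    try {
                        Thread.sleep(60000); // sleeping; the dump shows "waiting on condition"
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        }, "sleeper");
        sleeper.start();
        sleeper.join();
    }
}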
1.3 Waiting for monitor entry and in Object.wait()
In multithreaded Java programs, implementing synchronization between threads requires talking about the monitor. The monitor is Java's primary means of achieving mutual exclusion and cooperation between threads, and it can be thought of as the lock of an object or class. Every object has one, and only one, monitor. A monitor can be owned by only one thread at a time; that thread is the "Active Thread", while all the other threads are "Waiting Threads", waiting in one of two queues, the "Entry Set" and the "Wait Set". A thread waiting in the "Entry Set" has the state "waiting for monitor entry", while a thread waiting in the "Wait Set" has the state "in Object.wait()".
First, look at the threads in the "Entry Set". We call the code fragment protected by synchronized a critical section. When a thread requests to enter the critical section, it enters the "Entry Set" queue. The corresponding code looks like this:
synchronized (obj) {
    .........
}
There are two possibilities:
The monitor is not held by another thread, and no other threads are waiting in the Entry Set. This thread becomes the owner of the monitor of the corresponding object or class and executes the code in the critical section.
The monitor is held by another thread; this thread waits in the Entry Set queue.
In the first case, the thread is in the "Runnable" state; in the second case, the thread appears in the dump as "waiting for monitor entry", as shown below:
... waiting for monitor entry [0xf927b000..0xf927bdb8]
    at testthread.WaitThread.run(WaitThread.java:39)
    - waiting to lock <0xef63bf08> (a java.lang.Object)
    - locked <0xef63beb8> (a java.util.ArrayList)
    at java.lang.Thread.run(Thread.java:595)
The critical section exists to guarantee the atomicity and integrity of the code executed inside it. But because the critical section lets only one thread pass through at a time, it works against the intent of a multithreaded program. If a multithreaded program uses synchronized heavily, or uses it improperly, a large number of threads end up waiting at the entrance to the critical section, and system performance drops sharply. If you see this situation in a thread dump, you should review the source code and improve the program.
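For illustration (the class and thread names are made up and unrelated to the dump above), here is a minimal sketch in which one thread holds a monitor for a long time; a jstack of this process would typically show the second thread as "waiting for monitor entry".

public class MonitorEntryDemo {
    private static final Object lock = new Object();

    public static void main(String[] args) {
        Runnable task = new Runnable() {
            public void run() {
                synchronized (lock) {            // enter the critical section
                    try {
                        Thread.sleep(600000);    // hold the monitor for a long time
                    } catch (InterruptedException e) {
                        // ignored for the sake of the sketch
                    }
                }
            }
        };
        new Thread(task, "holder").start();      // becomes the owner of the monitor
        new Thread(task, "blocked").start();     // waits in the Entry Set: "waiting for monitor entry"
    }
}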
Now let's look at why a thread enters the "Wait Set". After a thread obtains the monitor and enters the critical section, if it finds that the condition it needs in order to continue is not satisfied, it calls the wait() method of the object (typically the object it synchronized on), gives up the monitor, and enters the "Wait Set" queue. Only when another thread calls notify() or notifyAll() on that object do the threads in the "Wait Set" get a chance to compete again, and only one of them obtains the object's monitor and returns to the runnable state. A thread in the "Wait Set" shows up in the dump as "in Object.wait()", similar to:
1"Thread-1" prio=10 tid=0x08223250 nid=0xa in object.wait () [0xef47a000. 0xef47aa38]2 3 At java.lang.Object.wait (Native Method)4 5-Waiting on <0xef63beb8>(a java.util.ArrayList)6 7At Java.lang.Object.wait (object.java:474)8 9At Testthread. Mywaitthread.run (mywaitthread.java:40)Ten One-Locked <0xef63beb8>(a java.util.ArrayList) A -At Java.lang.Thread.run (thread.java:595)
Looking closely at the dump information above, you will find that it has the following two lines:
- locked <0xef63beb8> (a java.util.ArrayList)
- waiting on <0xef63beb8> (a java.util.ArrayList)
Why does the thread first lock the object and then wait on the same object? Let's look at the code this thread corresponds to:
synchronized (obj) {
    ......
    obj.wait();
    ......
}
While executing, the thread first obtains this object's monitor with synchronized (corresponding to locked <0xef63beb8>). When it executes obj.wait(), the thread gives up ownership of the monitor and enters the "Wait Set" queue (corresponding to waiting on <0xef63beb8>).
Often there are several similar threads in a program, and they all have similar dump information. This may well be normal. For example, a program may have multiple service threads designed to read request data from a queue; this queue is the object they lock and then wait on. When the queue is empty, these threads all wait on the queue; when data arrives, the threads are notified, but only one thread obtains the lock and continues execution while the others keep waiting.
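A minimal sketch of this pattern (the class and method names are illustrative, not taken from the dumps above):

import java.util.LinkedList;

public class RequestQueue {
    private final LinkedList<String> queue = new LinkedList<String>();

    // Service threads call take(); while the queue is empty they wait on the
    // queue object and appear in a thread dump as "in Object.wait()".
    public String take() throws InterruptedException {
        synchronized (queue) {
            while (queue.isEmpty()) {
                queue.wait();           // gives up the monitor, enters the Wait Set
            }
            return queue.removeFirst();
        }
    }

    // A producer calls put(); notifyAll() wakes the waiting service threads,
    // but only one of them re-acquires the monitor and continues.
    public void put(String request) {
        synchronized (queue) {
            queue.addLast(request);
            queue.notifyAll();
        }
    }
}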
3. Locks in JDK 5.0
As mentioned above, if synchronized and the monitor mechanism are used improperly, they may cause performance problems in multithreaded applications. JDK 5.0 introduced the Lock mechanism so that developers can build high-performance concurrent multithreaded programs more flexibly, as an alternative to the synchronized and monitor mechanisms of earlier JDKs. Note, however, that because the Lock classes are just ordinary classes, the JVM has no built-in knowledge of which thread occupies a Lock object, so the thread dump does not contain information about Locks; problems such as deadlocks involving Locks are therefore not as easy to identify as when programming with synchronized.
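A minimal sketch of the java.util.concurrent.locks API introduced in JDK 5.0 (the class here is illustrative):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final Lock lock = new ReentrantLock();
    private long count;

    public void increment() {
        lock.lock();               // explicit lock instead of synchronized
        try {
            count++;
        } finally {
            lock.unlock();         // always release in a finally block
        }
    }

    public long get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}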
4. Case Studies
1. Deadlock
In multithreaded programming, if the synchronization mechanism is applied improperly, the program may deadlock, which usually shows up as the program hanging or no longer responding to user requests. The following example shows a typical deadlock:
1"Thread-1" prio=5 tid=0x00acc490 nid=0xe50 waiting forMonitor entry [0x02d3f0002 3.. 0x02d3fd68]4 5At Deadlockthreads. Testthread.run (testthread.java:31)6 7-Waiting to lock <0x22c19f18>(a java.lang.Object)8 9-Locked <0x22c19f20>(a java.lang.Object)Ten One"Thread-0" prio=5 tid=0x00accdb0 nid=0xdec waiting forMonitor entry [0x02cff000 A -.. 0x02cff9e8] - theAt Deadlockthreads. Testthread.run (testthread.java:31) - --Waiting to lock <0x22c19f20>(a java.lang.Object) - +-Locked <0x22c19f18>(a java.lang.Object) - +
In Java 5, deadlock detection has been enhanced: a Java-level deadlock can be reported directly in the thread dump, as follows:
Found one Java-level deadlock:
=============================
"Thread-1":
  waiting to lock monitor 0x0003f334 (object 0x22c19f18, a java.lang.Object),
  which is held by "Thread-0"
"Thread-0":
  waiting to lock monitor 0x0003f314 (object 0x22c19f20, a java.lang.Object),
  which is held by "Thread-1"
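For illustration, here is a minimal sketch of code that produces this kind of deadlock: two threads acquiring two monitors in opposite order (the class and lock names are made up, not the TestThread from the dump above).

public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(new Runnable() {
            public void run() {
                synchronized (lockA) {
                    pause();                    // give the other thread time to take lockB
                    synchronized (lockB) { }    // blocks forever: lockB is held by Thread-1
                }
            }
        }, "Thread-0").start();

        new Thread(new Runnable() {
            public void run() {
                synchronized (lockB) {
                    pause();
                    synchronized (lockA) { }    // blocks forever: lockA is held by Thread-0
                }
            }
        }, "Thread-1").start();
    }

    private static void pause() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            // ignored for the sake of the sketch
        }
    }
}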
2. Hot Lock
Hot locks are also often a major cause of system performance bottlenecks. Their symptom is that multiple threads compete for the same critical section, or the same lock, which can lead to:
* Frequent thread context switches: in the operating system's thread scheduling, when a thread blocks waiting for a resource, the operating system switches it out and puts it on a wait queue; when the thread later obtains the resource, the scheduler switches it back onto the run queue.
* A large number of system calls: thread context switches, contention for hot locks, and frequent entry to and exit from critical sections can all result in a large number of system calls.
* Most CPU overhead spent in "system" state: thread context switches and system calls cause the CPU to run in system state. In other words, although the system is busy, the proportion of CPU spent in user state is small, and the application does not get enough CPU resources.
* Performance drops as the number of CPUs increases: with more CPUs, more threads run concurrently, so thread context switches become more frequent and the system-state CPU overhead grows, which can actually make performance worse.
What is described above is a system with poor scalability. In terms of overall performance metrics, because of the hot lock the program's response time grows longer and its throughput drops.
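As an illustration only (a sketch with made-up names, not the case described below), here is a program in which every worker thread funnels through a single synchronized block; as the number of threads and CPUs grows, contention for this one monitor grows with it:

public class HotLockDemo {
    private static final Object hotLock = new Object();
    private static long counter;

    public static void main(String[] args) {
        int threads = Runtime.getRuntime().availableProcessors() * 2;
        for (int i = 0; i < threads; i++) {
            new Thread(new Runnable() {
                public void run() {
                    while (true) {
                        synchronized (hotLock) {    // every thread competes for this single monitor
                            counter++;              // tiny critical section, heavy contention
                        }
                    }
                }
            }, "worker-" + i).start();
        }
    }
}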
So how do you find out where the "hot locks" are? An important method is to use the operating system's various tools to observe system resource usage, and at the same time collect the Java threads' dump information to see which methods the threads are blocked on; once the cause is understood, the corresponding solution can be found.
We once ran into exactly such a case. While the program was running, all of the phenomena described above appeared. By observing the operating system's resource usage statistics together with the thread dump information, we determined that the program had a hot lock, and found that most threads were in the state waiting for monitor entry or wait on monitor, blocked on compression and decompression methods. After the JDK's compression package was replaced with the third-party compression package javalib, the system's performance improved several times over.