Deep JVM profiling: the Java thread stack


In this article I will teach you how to analyze a JVM thread stack and how to isolate the root cause of a problem from the stack information. In my view, thread stack analysis is a skill that every Java EE production support engineer must master. The amount of information stored in a thread stack usually far exceeds your expectations, and we can put it to good use in our daily work.

My goal is to share the knowledge and experience that I have accumulated over more than 10 years of thread stack analysis. This experience was gained through in-depth analysis of various JVM versions from different vendors, and along the way I have also distilled a number of common problem patterns.

So get ready: bookmark this article now, and I will bring you this series of articles over the next few weeks. Please also share this thread stack analysis training plan with your colleagues and friends.

Sounds good, I really should improve my thread stack analysis skills ... But where do I start?

My advice is to follow this thread stack analysis training program with me. Here is what the training will cover; along the way I will also share real cases I have handled, to help you learn and understand.

1 Overview of the thread stack and basics
2 Thread stack generation principles and related tools
3 Differences in thread stack format between JVMs (Sun HotSpot, IBM JRE, Oracle JRockit)
4 Thread stack log introduction and parsing method
5 Thread stack analysis and related techniques
6 Common problem patterns (thread races, deadlock, hanging I/O calls, garbage collection/OutOfMemoryError problems, infinite loops, etc.)
7 Example analysis of thread stack problems

I hope this series of training will give you real help, so keep an eye on the weekly article updates.

But what if I have questions in the learning process or can't understand the content in the article?

Don't worry, just think of me as your mentor. You can ask me any question about thread stacks (as long as it is not too trivial). Feel free to get in touch with me in any of the following ways:

1) Comment directly under this article (you can stay anonymous if you prefer)
2) Submit your thread stack data to the Root Cause Analysis forum
3) Email me at phcharbonneau@hotmail.com

Can you help me analyze problems in our own products?

Sure. If you like, you can send me your thread stack data by email or through the Root Cause Analysis forum. Dealing with real problems is the best way to learn and improve your skills.

I sincerely hope that you will enjoy this training, and I will do my best to provide you with high quality material and to answer your questions.

Before introducing thread stack analysis techniques and problem patterns, we need to go over the basics. So in this post I will cover the fundamentals, so that you can better understand the interaction between the JVM, the middleware, and the Java EE container.

Java VM Overview

The Java Virtual Machine is the foundation of the Java EE platform. It is where the middleware and your applications are deployed and run.

The JVM provides the following to the middleware software and to your Java/Java EE programs:

– A runtime environment for your (binary) Java/Java EE programs
– Program features and utilities (I/O facilities, data structures, thread management, security, monitoring, etc.)
– Dynamic memory allocation and management via garbage collection

Your JVM can run on many operating systems (Solaris, AIX, Windows, and so on) and, depending on your physical or virtual server configuration, you can install one or more JVM processes per server.

Interaction between the JVM and the middleware

The following diagram shows a high-level interaction model between the JVM, the middleware, and the application.

The diagram depicts some simple and typical interactions between the JVM, the middleware, and the application. As you can see, thread allocation for a standard Java EE application is done mainly between the middleware kernel and the JVM. (There are exceptions, of course: an application can call the Thread API directly to create its own threads; this is not uncommon, but it should be used with particular caution.)

Also note that some threads are managed by the JVM itself; a typical example is the garbage collection threads that the JVM uses internally for parallel garbage collection.

Because most thread allocation is done by the Java EE container, it is important that you are able to understand and recognize the different thread pools from the thread stack data. This gives you a quick idea of what type of request the Java EE container is executing.

From a thread stack analysis point of view, you will be able to distinguish between the thread pools found in the JVM and identify the type of request involved.

The rest of this article gives you an overview of what a JVM thread stack looks like for a HotSpot VM and of the various threads you will encounter. The details of the IBM VM thread stack format will be covered in part 4.

Note that you can obtain the sample thread stack used in this article from the Root Cause Analysis forum.

JVM thread stack: what is it?

A JVM thread stack is a snapshot taken at a given point in time, and it provides you with a complete list of all the Java threads that have been created.

For every Java thread found, you get the following information (a short Java sketch after the heap sample below shows how most of these fields map onto the standard java.lang.Thread API):

– The thread name, often used by the middleware vendor to identify the thread, typically together with the name of its assigned thread pool and its state (running, blocked, and so on)

– Thread type and priority, for example: daemon prio=3 (middleware software generally creates its threads as daemon threads, which means they run in the background and provide services to their user, e.g. your Java EE application)

– Java thread ID, for example: tid=0x000000011e52a800 (this is the Java thread ID obtained via java.lang.Thread.getId(), usually implemented as an auto-incrementing long, 1..n)


– Native thread ID, for example: nid=0x251c (this is critical because the native thread ID allows you to correlate, from the operating system's point of view, which threads in your JVM are consuming the most CPU time, and so on)

– Java thread state and details, for example: waiting for monitor entry [0xfffffffea5afb000] java.lang.Thread.State: BLOCKED (on object monitor) (this lets you quickly understand the thread's state and the possible cause of its current blocking)

– Java thread stack trace; this is by far the most important data you will find in the thread stack, and it is where you will spend most of your analysis time, because the Java stack trace provides 90% of the information needed to pinpoint the root cause of many types of problems, as you will learn in later sessions


– Java heap memory breakdown; starting with HotSpot VM 1.6, a HotSpot memory summary appears at the end of the thread stack, covering the Java heap (YoungGen, OldGen) and the PermGen space. This information is very useful when analyzing problems caused by excessive GC, because you can combine it with known thread data or patterns to locate the problem quickly.

Heap
 PSYoungGen      total 466944K, used 178734K [0xffffffff45c00000, 0xffffffff70800000, 0xffffffff70800000)
  eden space 233472K, 76% used [0xffffffff45c00000,0xffffffff50ab7c50,0xffffffff54000000)
  from space 233472K, 0% used [0xffffffff62400000,0xffffffff62400000,0xffffffff70800000)
  to   space 233472K, 0% used [0xffffffff54000000,0xffffffff54000000,0xffffffff62400000)
 PSOldGen        total 1400832K, used 1400831K [0xfffffffef0400000, 0xffffffff45c00000, 0xffffffff45c00000)
  object space 1400832K, 99% used [0xfffffffef0400000,0xffffffff45bfffb8,0xffffffff45c00000)
 PSPermGen       total 262144K, used 248475K [0xfffffffed0400000, 0xfffffffee0400000, 0xfffffffef0400000)
  object space 262144K, 94% used [0xfffffffed0400000,0xfffffffedf6a6f08,0xfffffffee0400000)
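To make the mapping between these fields and the standard Java API concrete, here is a minimal sketch (my own illustration, not part of the sample dump) that iterates over all live threads and prints the name, daemon flag, priority, Java thread ID, state and stack trace of each one, i.e. roughly the same per-thread data that you read in a thread stack:

import java.util.Map;

public class ThreadFieldsDemo {
    public static void main(String[] args) {
        // Thread.getAllStackTraces() returns a snapshot of all live threads and their stack traces
        for (Map.Entry<Thread, StackTraceElement[]> entry : Thread.getAllStackTraces().entrySet()) {
            Thread t = entry.getKey();
            System.out.printf("\"%s\" %s prio=%d id=%d state=%s%n",
                    t.getName(),
                    t.isDaemon() ? "daemon" : "non-daemon",
                    t.getPriority(),
                    t.getId(),        // the Java thread ID (java.lang.Thread.getId())
                    t.getState());    // NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING or TERMINATED
            for (StackTraceElement frame : entry.getValue()) {
                System.out.println("    at " + frame);
            }
        }
    }
}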

Detailed anatomy of the thread stack

To give you a better understanding, the following diagram breaks down the thread stack information and thread pools of a HotSpot VM in detail:

The diagram above shows that the thread stack is made up of several distinct parts. All of this information matters for problem analysis, but different problem patterns rely on different parts (the problem patterns will be modeled and demonstrated in later articles).


Now, working through this sample, let's look at the various components of the HotSpot thread stack information in detail:

# Full Thread Dump indicator

"Full thread dump" is a unique keyword that you can find in the middleware output log or in a standalone Java thread stack (for example one generated on Unix with kill -3 <PID>). It marks the beginning of the thread stack snapshot.

Full thread dump Java HotSpot(TM) 64-Bit Server VM (20.0-b11 mixed mode):
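As a side note (my own sketch, not something the article relies on), you can also capture comparable per-thread data programmatically from inside the JVM with the standard java.lang.management API; the output format differs from a kill -3 dump and does not contain the "Full thread dump" header line:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ProgrammaticThreadDump {
    public static void main(String[] args) {
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
        // true, true -> also report locked monitors and ownable synchronizers
        for (ThreadInfo info : threadMXBean.dumpAllThreads(true, true)) {
            System.out.print(info);   // ThreadInfo.toString() prints a short stack-trace-like block
        }
    }
}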

# Java EE middleware, third-party and application custom threads

This part is the core of the entire thread stack and is usually where you spend most of your parsing time. The number of threads in the stack depends on the middleware you use, on third-party libraries (which may have their own threads), and on your application (creating your own custom threads is usually not a good practice).


In our sample thread stack, WebLogic is the middleware in use. Starting with WebLogic 9.2, you will see a managed, self-tuning thread pool uniquely identified as "weblogic.kernel.Default (self-tuning)".

"[STANDBY] Executethread: ' 414 ' for queue: ' Weblogic.kernel.Default (self-tuning) '" Daemon prio=3 tid= 0x000000010916a800 nid=0x2613 in Object.wait () [0xfffffffe9edff000]
  Java.lang.Thread.State:WAITING (on Object Monitor) at
    java.lang.Object.wait (Native method)
    -Waiting on <0xffffffff27d44de0> (a Weblogic.work.ExecuteThread) at
    java.lang.Object.wait (object.java:485)
    at Weblogic.work.ExecuteThread.waitForRequest (executethread.java:160)
    -locked <0xffffffff27d44de0> (a Weblogic.work.ExecuteThread) at
    Weblogic.work.ExecuteThread.run (executethread.java:181)
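As a side note, when your own application or a third-party library creates threads, giving them descriptive names makes them much easier to recognize in a thread stack. Below is a minimal sketch of that idea (my own example, using a hypothetical "MyApp-Worker" naming scheme, not related to the WebLogic sample above):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedPoolExample {
    public static void main(String[] args) {
        ThreadFactory namedFactory = new ThreadFactory() {
            private final AtomicInteger counter = new AtomicInteger(1);
            public Thread newThread(Runnable r) {
                // Threads will appear in a dump as "MyApp-Worker-1", "MyApp-Worker-2", ...
                Thread t = new Thread(r, "MyApp-Worker-" + counter.getAndIncrement());
                t.setDaemon(true);
                return t;
            }
        };
        ExecutorService pool = Executors.newFixedThreadPool(5, namedFactory);
        pool.submit(new Runnable() {
            public void run() {
                System.out.println("running in " + Thread.currentThread().getName());
            }
        });
        pool.shutdown();
    }
}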

# HotSpot VM Threads
These are internal threads managed by the HotSpot VM and used to carry out internal native operations. You normally don't need to worry about them unless you observe high CPU usage on one of them (identified via the associated thread stack together with prstat output and the native thread IDs).

"VM periodic Task Thread" prio=3 tid=0x0000000101238800 nid=0x19 waiting on condition

# HotSpot GC Thread
When using HotSpot parallel GC (very common nowadays in environments with multiple physical cores), the HotSpot VM creates by default, for each JVM, a number of GC threads with a specific identity. These GC threads allow the VM to perform its periodic GC cleanups in parallel, which reduces overall GC time at the expense of increased CPU usage.

"GC Task thread#0 (PARALLELGC)" Prio=3 tid=0x0000000100120000 nid=0x3 runnable
"GC task Thread#1 (PARALLELGC)" prio=3 tid=0x0000000100131000 nid=0x4 runnable ....... ...... ...... ..... ...... ..... ..... ..... ... .......... ....... ....... ...... ...... ...... ... .......... ....... ....... ...... ...... ...... ... .......... ....... ....... ...... ...... ...
... ...


This is very important data, because when you face GC-related problems such as excessive GC or memory leaks, you can use these native thread ID values to correlate the Java threads with operating system threads and find out which ones are consuming the most CPU time. In future articles you will learn how to identify and diagnose such problems.
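As a small illustration (my own, not part of the sample dump), the hexadecimal nid value can be converted to the decimal thread ID that OS tools such as prstat or top report:

public class NidConverter {
    public static void main(String[] args) {
        String nid = "0x2613";                          // nid value copied from the thread stack
        long osThreadId = Long.decode(nid);             // Long.decode() handles the 0x prefix
        System.out.println(nid + " -> " + osThreadId);  // prints 0x2613 -> 9747
    }
}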

# JNI Global Reference count
JNI (Java Native Interface) global references are basic object references from native code to Java objects managed by the Java garbage collector. Their role is to prevent the collection of objects that are still in use by native code but that technically have no "active" reference in Java code.

Keeping an eye on JNI references is also important in order to detect JNI-related leaks. If your program uses JNI directly, or uses third-party tools such as monitoring tools, it can easily suffer from native memory leaks.

JNI global references: 1925

# Java heap usage view


This data was added in JDK 1.6 and provides you with a short and fast view of the HotSpot heap. I find it very useful when dealing with GC-related problems that have a high CPU footprint, because you get both the thread stacks and the Java heap breakdown in a single snapshot, which lets you pinpoint (or rule out) pressure in a particular Java heap memory space. As you can see in our sample thread stack, the Java heap OldGen is maxed out!
 

Heap
 PSYoungGen      total 466944K, used 178734K [0xffffffff45c00000, 0xffffffff70800000, 0xffffffff70800000)
  eden space 233472K, 76% used [0xffffffff45c00000,0xffffffff50ab7c50,0xffffffff54000000)
  from space 233472K, 0% used [0xffffffff62400000,0xffffffff62400000,0xffffffff70800000)
  to   space 233472K, 0% used [0xffffffff54000000,0xffffffff54000000,0xffffffff62400000)
 PSOldGen        total 1400832K, used 1400831K [0xfffffffef0400000, 0xffffffff45c00000, 0xffffffff45c00000)
  object space 1400832K, 99% used [0xfffffffef0400000,0xffffffff45bfffb8,0xffffffff45c00000)
 PSPermGen       total 262144K, used 248475K [0xfffffffed0400000, 0xfffffffee0400000, 0xfffffffef0400000)
  object space 262144K, 94% used [0xfffffffed0400000,0xfffffffedf6a6f08,0xfffffffee0400000)
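If you want to confirm this kind of heap pressure at runtime rather than from a single snapshot, here is a minimal sketch (my own illustration, assuming a Java 6+ JVM) that prints the usage of each heap memory pool via the standard java.lang.management API:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import java.lang.management.MemoryUsage;

public class HeapPoolReport {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() != MemoryType.HEAP) {
                continue;   // skip non-heap pools (PermGen, code cache, etc.)
            }
            MemoryUsage usage = pool.getUsage();
            long max = (usage.getMax() > 0) ? usage.getMax() : usage.getCommitted();
            System.out.printf("%-20s used %8dK of %8dK (%.0f%%)%n",
                    pool.getName(),
                    usage.getUsed() / 1024,
                    max / 1024,
                    100.0 * usage.getUsed() / max);
        }
    }
}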
