JVM-related parameter configuration and problem diagnosis


Original link: http://blog.csdn.net/chjttony/article/details/6240457

1. WebSphere JVM-related problem diagnosis:

JVM-related WebSphere problems mainly involve application server outages and performance degradation. Their typical symptoms are as follows:

(1). The WebSphere Application Server stops responding:

A. The WebSphere server goes down.

B. The WebSphere process hangs.

C. The JVM runs out of memory.

(2). Performance degradation:

The JVM process ID changes continually, which usually means the process is being restarted repeatedly.

2. Files required to diagnose JVM-related issues:

(1). Core files:

A. A process snapshot, or a core file produced by the operating system.

B. A complete snapshot of the JVM's memory, and so on.

Note: core files are very large and must be parsed with the Log Analysis tooling in ISA (IBM Support Assistant).

(2). Javacore files:

A. A snapshot of the running Java process.

B. Generated automatically by WebSphere when an error occurs on the application server.

The storage path is <was_install_root>/profiles/<profile>.

(3). The JVM's verbose garbage collection log.

(4). A JVM heap snapshot.

3. JVM garbage collection logs:

(1). Steps to enable verbose garbage collection in WebSphere:

In the WebSphere administrative console, click Servers -> Application servers -> <server_name> -> Java and Process Management -> Process Definition -> Java Virtual Machine, check the "Verbose Garbage Collection" checkbox, then restart WebSphere.

(2). The verbose garbage collection log is written to the native error log file (native_stderr.log).

(3). It is recommended to keep verbose garbage collection enabled even after the product goes live, since it consumes very few resources.
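To see what the verbose log captures outside of WebSphere, here is a minimal sketch (my example, not from the original article): the program below churns short-lived garbage, and launching it with the standard -verbose:gc option makes the JVM print one record per collection cycle, the same output that lands in native_stderr.log under WebSphere.

    // GCDemo.java -- run with: java -verbose:gc GCDemo
    // Allocates and immediately discards 1 MB arrays so that garbage
    // collection cycles show up in the verbose GC output.
    public class GCDemo {
        public static void main(String[] args) {
            for (int i = 0; i < 1000; i++) {
                byte[] garbage = new byte[1024 * 1024];
                garbage[0] = 1; // touch the array so the allocation is not optimized away
            }
            System.out.println("done");
        }
    }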

4. JVM heap-related parameter settings:

(1). Maximum heap size (maximum heap, -Xmx):

A reasonable maximum heap helps the JVM perform well: the larger the maximum heap, the longer each garbage collection cycle takes; the smaller the maximum heap, the more frequently the garbage collector runs.

A reasonable maximum heap should be slightly larger than the peak heap usage of the application once it is running stably.

(2). Initial heap size (minimum heap, -Xms):

A reasonable minimum heap can shorten the startup time of the WebSphere application server.

If the minimum heap is too small, the JVM may repeatedly resize the heap while the server starts up, which slows startup.

If the minimum heap is too large, the garbage collector must reclaim a large memory space, which is prone to memory fragmentation; allocating such a large initial heap also takes longer, and the program responds more slowly.
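A quick way to confirm that the -Xms/-Xmx settings actually took effect is to query the Runtime API at startup (a minimal sketch of mine, using only standard Java):

    // HeapCheck.java -- run with, for example: java -Xms256m -Xmx512m HeapCheck
    // Prints the heap limits the JVM actually adopted.
    public class HeapCheck {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024 * 1024;
            System.out.println("maximum heap (-Xmx): " + rt.maxMemory() / mb + " MB");
            System.out.println("current heap size  : " + rt.totalMemory() / mb + " MB");
            System.out.println("free in current    : " + rt.freeMemory() / mb + " MB");
        }
    }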

5. JVM garbage collector performance indicators:

The garbage collector is the main source of memory-related performance bottlenecks in the JVM. Its performance is measured by two metrics:

(1). Throughput:

The percentage of total running time that the JVM spends executing the application rather than the garbage collector.

(2). Pauses:

The percentage of total running time spent in the garbage collector, i.e., the fraction of time for which application processing is suspended because the collector is running.
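For concreteness (hypothetical numbers, not from the article): if a server has been running for 1,000 seconds and the verbose GC log shows 50 seconds of accumulated collector time, throughput is (1000 - 50) / 1000 = 95% and the pause ratio is 5%. As a small sketch:

    // Throughput/pause calculation from assumed verbose GC figures.
    public class GcMetrics {
        // Fraction of total running time spent in the application.
        static double throughput(double totalSeconds, double gcSeconds) {
            return (totalSeconds - gcSeconds) / totalSeconds;
        }

        public static void main(String[] args) {
            double total = 1000.0; // assumed total running time, seconds
            double gc = 50.0;      // assumed accumulated GC time, seconds
            System.out.printf("throughput = %.1f%%, pauses = %.1f%%%n",
                    100 * throughput(total, gc), 100 * gc / total);
        }
    }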

6. GC policies for the JVM garbage collector in WebSphere (-Xgcpolicy):

(1). -Xgcpolicy:optthruput

Lets the JVM spend as much time as possible running the application, minimizing the total run time of the garbage collector.

(2). -Xgcpolicy:optavgpause

Trades some throughput for shorter, more even garbage collection pauses, so application response times stay more predictable when unpredictable conditions occur.

(3). -Xgcpolicy:gencon

Uses a generational, copying collection algorithm so that dead short-lived objects are reclaimed as quickly as possible; suited to applications that allocate a large amount of heap memory to short-lived objects.

(4). -Xgcpolicy:subpool

Well suited to applications that frequently allocate heap memory for large objects from multiple threads.
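As a side note (my sketch, standard management API): you can confirm which collectors are active in a running JVM, which is useful for checking that a -Xgcpolicy setting was picked up; the names reported differ between IBM and other JVMs.

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    // Prints the garbage collectors active in this JVM along with their
    // collection counts.
    public class GcBeans {
        public static void main(String[] args) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName() + ": " + gc.getCollectionCount() + " collections");
            }
        }
    }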

7. Diagnosing process hangs with JVM thread snapshots:

When the JVM process is suspected to be hung, the following steps help diagnose the problem:

(1). Collect JVM thread snapshots or javacore files:

This function is enabled in WebSphere by default; on Linux you can send the process a signal with "kill -3 <pid>" on the command line to generate a JVM thread snapshot and the related javacore file.

(2). While the process is hung, collect a JVM thread snapshot every few minutes:

Several snapshots taken a few minutes apart help analyze what is happening inside the process over time.

(3). Inspect the JVM thread snapshot files manually or with the ISA thread analysis tooling:

A. Check whether a thread deadlock has occurred (a programmatic check is sketched after this list).

B. Check for threads that are stuck waiting for a response after sending a request.
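The deadlock check mentioned in item A can also be done in-process with the standard management API; this is a minimal sketch of mine, doing programmatically what javacore analysis tools do offline.

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Reports threads that are deadlocked on object monitors, if any.
    public class DeadlockCheck {
        public static void main(String[] args) {
            ThreadMXBean tmx = ManagementFactory.getThreadMXBean();
            long[] ids = tmx.findMonitorDeadlockedThreads(); // null when no deadlock exists
            if (ids == null) {
                System.out.println("no monitor deadlock detected");
                return;
            }
            for (ThreadInfo info : tmx.getThreadInfo(ids)) {
                System.out.println(info.getThreadName() + " is blocked on " + info.getLockName()
                        + " held by " + info.getLockOwnerName());
            }
        }
    }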

8. Diagnosing process hangs with the JVM javacore file:

When reading a javacore file, the diagnosis is based primarily on the state of each thread:

(1). Threads in the blocked state:

A. Unavailable resources, or logic errors in thread synchronization, can cause threads to block.

B. Deadlocks also cause threads to block (see the sketch after this section).

(2). Threads in the running state:

A. Compare the method call stacks across several javacore files taken at intervals.

B. If many threads keep appearing in the same method, that method's loop logic may have a problem.

(3). Threads in the waiting state:

The thread may be suspended waiting for a resource.
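To make the blocked-thread case concrete, here is an illustrative program of mine: the classic lock-ordering bug below reliably deadlocks two threads, and a thread snapshot (kill -3) taken while it runs shows both threads in the blocked state, each owning the monitor the other is waiting for.

    // Deadlock.java -- two threads acquire the same two locks in opposite order.
    public class Deadlock {
        private static final Object LOCK_A = new Object();
        private static final Object LOCK_B = new Object();

        public static void main(String[] args) {
            new Thread(new Runnable() {
                public void run() { grab(LOCK_A, LOCK_B); }
            }, "thread-1").start();
            new Thread(new Runnable() {
                public void run() { grab(LOCK_B, LOCK_A); }
            }, "thread-2").start();
        }

        static void grab(Object first, Object second) {
            synchronized (first) {
                try { Thread.sleep(100); } catch (InterruptedException e) { }
                synchronized (second) {
                    System.out.println("never reached"); // neither thread gets here
                }
            }
        }
    }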

9. WebSphere thread hang detection:

WebSphere has a built-in ability to detect thread hangs. It does not kill the hung thread; it only reports the hang, in the following three ways:

(1). A JMX notification is sent to JMX listeners, so the hang event can be caught and handled by third-party tools.

(2). A thread hang notification is sent to PMI clients through the thread pool monitoring infrastructure.

(3). A thread hang message is written to the WebSphere SystemOut.log, in a format like the following:

[4/17/04 11:51:30:243 EST] 2d757854 ThreadMonitor W WSVR0605W: Thread "Servlet.Engine.Transports : 0" has been active for 14,198 milliseconds and may be hung. There is 1 thread in the server that may be hung.

(4). When a thread reported as hung later turns out to be running properly, WebSphere writes a message to SystemOut.log saying that the thread has completed, in a format like the following:

[2/17/04 11:51:47:210 EST] 76e0b856 ThreadMonitor W WSVR0606W: Thread "Servlet.Engine.Transports : 0" was previously reported to be hung but has completed. It was active for approximately 31,166 milliseconds. There are 0 threads in the server that still may be hung.

(5). The hang detection adjusts itself intelligently: if a thread that triggered a hang alarm is later confirmed to have been working correctly, WebSphere adjusts its detection threshold accordingly.

10. Symptoms and common causes of WebSphere outages:

The main symptom of a WebSphere outage is that the WebSphere process terminates, due either to a Java exception or to a native operating system signal.

Common causes of WebSphere outages:

(1). A JVM out-of-memory error.

(2). A JVM stack overflow.

(3). Other unexpected conditions, such as insufficient disk space.

(4). Failures in JVM performance optimizations, such as problems caused by the JIT compiler.

(5). Errors in Java Native Interface (JNI) calls, or problems in native class libraries.

(6). A segmentation fault while the JVM is executing native machine code.

11. WebSphere outage problem diagnosis:

(1). Core files:

A. Process snapshots and system core files.

B. A full virtual-memory snapshot in binary format.

Note: core files can be very large, and parts of the binary format are unreadable, so they need to be parsed with the ISA Log Analysis tooling.

(2). Javacore files:

Java memory snapshot files and thread snapshot files.

(3). Configure the -Xdump parameter on the JVM:

You can specify that system snapshots, Java snapshots, and memory snapshots be generated on particular events.
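As an illustration (my example; the exact clause syntax should be checked against the IBM JVM diagnostics documentation for your version, and MyApp is a placeholder), an IBM JVM can be told to produce a javacore and a heap snapshot whenever an OutOfMemoryError is thrown:

    java -Xdump:java+heap:events=systhrow,filter=java/lang/OutOfMemoryError MyApp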

13. Common causes of WebSphere memory overflow:

A memory overflow occurs when the JVM does not have enough memory space left to allocate for a new object. Common causes are:

(1). The JVM's Java heap is too small.

(2). The JVM has enough total memory, but it is fragmented, so no single contiguous region is large enough for the newly created object (common in JDK 1.4.2 and earlier).

(3). A memory leak caused by the Java code.

(4). The machine itself does not have enough memory.
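A minimal illustration of cause (3) (my example): a collection that only ever grows keeps every object reachable, so the garbage collector can never reclaim them and the heap eventually overflows.

    import java.util.ArrayList;
    import java.util.List;

    // LeakDemo.java -- run with a small heap, e.g.: java -Xmx32m LeakDemo
    // The static list keeps every array reachable, so GC cannot reclaim them
    // and the JVM eventually throws java.lang.OutOfMemoryError.
    public class LeakDemo {
        private static final List<byte[]> RETAINED = new ArrayList<byte[]>();

        public static void main(String[] args) {
            while (true) {
                RETAINED.add(new byte[1024 * 1024]); // 1 MB that is never released
            }
        }
    }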

14. How to diagnose memory overflow problems:

(1). First analyze the memory-related information in the javacore file:

A. Check the heap size information.

B. Review the stack trace of the out-of-memory exception.

(2). Then analyze the output and heap information generated by the garbage collector:

A. Check the garbage collector's verbose log.

B. Examine a memory snapshot.

15. WebSphere memory overflow symptoms:

(1). JVM heap usage grows continuously at a roughly constant rate until the maximum heap size is reached.

(2). JVM heap usage never reaches a stable state.

16. WebSphere memory overflow diagnostics and workarounds:

(1). In the WebSphere administrative console, increase the JVM's maximum heap size (-Xmx) as follows:

In the WebSphere administrative console, click Servers -> Application servers -> <server_name> -> Java and Process Management -> Process Definition -> Java Virtual Machine -> Maximum heap size.

(2). Check the JVM's verbose garbage collection log to find the reason for the memory allocation failure:

A. Locate the memory allocation failure caused by the memory overflow.

B. Check the size of the heap object whose allocation failed.

C. Confirm the JVM's heap size.

D. Check the JVM heap's free percentage (the unallocated portion of the JVM heap as a percentage of the total heap); a way to sample this at run time is sketched after this list.

(3). Check whether similar memory allocation failures recur:

A. If allocation failures occur continuously, the JVM heap is too small: increase the JVM heap size.

B. If the failure is an occasional one caused by allocating memory for a large object, it is an isolated event; consider addressing it separately.
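For item (2) D above, here is one way (a sketch of mine using the standard management API) to sample the heap's free percentage from inside the JVM; the verbose GC log reports a comparable figure after each collection.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    // Samples current heap occupancy; "free" is the unused fraction of the
    // maximum heap, matching the idle-utilization idea in 16.(2).D.
    public class HeapFree {
        public static void main(String[] args) {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
            double freePct = 100.0 * (heap.getMax() - heap.getUsed()) / heap.getMax();
            System.out.printf("used = %d MB, max = %d MB, free = %.1f%%%n",
                    heap.getUsed() >> 20, heap.getMax() >> 20, freePct);
        }
    }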

17. Monitoring and adjusting the heap size in WebSphere:

In the WebSphere administrative console, click Servers -> Application servers -> <server_name> -> Java and Process Management -> Process Definition -> Java Virtual Machine -> Maximum heap size to open the heap monitoring view:

Watch the heap memory used by the JVM (the orange line) and adjust the JVM's maximum heap value (the red line) until the used heap stabilizes (the orange line levels off) and the maximum heap is slightly larger than the used heap (the red line sits slightly above the stabilized orange line).
