Why some earlier tuning techniques no longer apply to the new version
I've been presenting to clients at several IBM® WebSphere® Application Server V7 seminars over the past few months, and performance is always a popular topic; specifically, many people want to know how to tune for optimal performance. Given the popularity of the topic in these workshops, and the general confusion about what to tune (and what not to), I think it's worth briefly outlining some guidelines for application server tuning.
Don't touch that dial!
Even if you have never used this phrase yourself, you have almost certainly heard it: years ago, before a commercial break on television or radio, announcers would use it to keep you from changing the station. With digital receivers we no longer turn a dial to tune a TV or radio (unless yours is quite old), but the phrase is still good guidance when you tune WebSphere Application Server. The reason is that, from release to release, the default sizes of the various thread and connection pools in WebSphere Application Server have gotten smaller, because improvements in the runtime mean that fewer of these shared resources are needed to perform the same (or more) work than in earlier runtime implementations.
One example of this runtime improvement is the Web container thread pool. Prior to WebSphere Application Server V6.x, there was a one-to-one mapping between the number of concurrent client connections and the number of threads in the Web container thread pool: if 50 clients were accessing an application, 50 threads were needed to service those requests. This changed in WebSphere Application Server V6.0 with the introduction of NIO (new, non-blocking I/O) and AIO (asynchronous I/O), which allow connection management to be handled by a small number of threads, so the same work can actually be done with far fewer threads.
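To see why a few threads are enough, consider the sketch below. It is not WebSphere code; it is a minimal, generic illustration of the non-blocking pattern (written against Python's standard selectors module rather than Java NIO): one thread registers many client sockets with a selector and services whichever ones have work ready, instead of dedicating a thread to each connection. The address, port, and handler names are invented for the example.

# Minimal sketch of non-blocking I/O: one thread multiplexes many connections.
# This is a generic illustration, not WebSphere code; names and port are made up.
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(listener):
    conn, _addr = listener.accept()          # new client connection
    conn.setblocking(False)                  # never block the single thread
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    data = conn.recv(4096)
    if data:
        conn.send(data)                      # best-effort echo for the sketch
    else:
        sel.unregister(conn)                 # client closed the connection
        conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 8080))           # hypothetical address and port
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, accept)

while True:                                  # a single thread services every socket
    for key, _events in sel.select():
        key.data(key.fileobj)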
In a recent customer engagement, I found that the company had assumed "the larger the pool size, the better the performance" and had increased the pool sizes well beyond the defaults. After observing actual thread and connection pool usage with the IBM Tivoli® Performance Viewer during a test run, I was able to improve performance by 30% by actually reducing the size of the Web container thread pool and the JDBC connection pool. Lowering the pool sizes reduces the overhead WebSphere Application Server incurs in managing runtime resources; in this case, fewer threads and connection objects needed to be managed, freeing CPU and memory for processing application requests.
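If monitoring shows that the pools are oversized, the change can be made through the administrative console or scripted with wsadmin. The following is a minimal wsadmin (Jython) sketch of the kind of adjustment described above; the node name, server name, data source name, and pool sizes are hypothetical and should come from your own measurements.

# Minimal wsadmin (Jython) sketch: shrink the WebContainer thread pool and a
# JDBC connection pool after observing actual usage in Tivoli Performance Viewer.
# Run with: wsadmin -lang jython -f shrink_pools.py
# 'myNode', 'server1', 'MyDataSource', and the sizes are hypothetical.

server = AdminConfig.getid('/Node:myNode/Server:server1/')

# Cap the Web container thread pool for this server.
for tp in AdminConfig.list('ThreadPool', server).splitlines():
    if AdminConfig.showAttribute(tp, 'name') == 'WebContainer':
        AdminConfig.modify(tp, [['minimumSize', '10'], ['maximumSize', '25']])

# Cap the connection pool of one data source, matched by name.
for ds in AdminConfig.list('DataSource').splitlines():
    if AdminConfig.showAttribute(ds, 'name') == 'MyDataSource':
        pool = AdminConfig.showAttribute(ds, 'connectionPool')
        AdminConfig.modify(pool, [['minConnections', '1'], ['maxConnections', '20']])

AdminConfig.save()   # persist the change; it takes effect after a server restart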
Do tune the JVM!
You won't hear this one on TV or radio, but it may be the most important adjustment you can make to WebSphere Application Server. Tuning the JVM properly (most commonly, sizing the JVM heap appropriately for the workload) typically delivers the largest performance improvement that any single tuning change can achieve in WebSphere Application Server.
The WebSphere Application Server defaults are a modest initial heap size and a maximum heap size of 256 MB. These values may not be optimal for your environment; they are conservative values, chosen to avoid problems with excessive memory usage. As a result, you will most likely need to increase the JVM heap size (assuming you have enough physical memory available for the JVM).
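Heap sizes can be changed in the administrative console or scripted; as a point of reference, a minimal wsadmin (Jython) sketch follows. The node and server names and the 512 MB figure are hypothetical placeholders; the right values should come from the verbosegc analysis described next.

# Minimal wsadmin (Jython) sketch: raise the JVM heap for one application server.
# 'myNode', 'server1', and the 512 MB sizes are hypothetical.
server = AdminConfig.getid('/Node:myNode/Server:server1/')
jvm = AdminConfig.list('JavaVirtualMachine', server).splitlines()[0]

AdminConfig.modify(jvm, [['initialHeapSize', '512'],      # in MB (-Xms)
                         ['maximumHeapSize', '512']])     # in MB (-Xmx)
AdminConfig.save()   # takes effect on the next server restart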
Correct sizing requires enabling verbosegc (verbose garbage collection statistics), running representative tests, and then analyzing the verbosegc output to determine how the heap should be resized. You can parse the verbosegc output with the IBM Pattern Modeling and Analysis Tool for Java™ Garbage Collector (PMAT), or analyze it with the IBM Monitoring and Diagnostic Tools for Java - Garbage Collection and Memory Visualizer, which is included with the IBM Support Assistant.
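Verbose garbage collection is enabled on the server's JVM configuration, with no application change required; a minimal wsadmin (Jython) sketch, again with hypothetical node and server names:

# Minimal wsadmin (Jython) sketch: enable verbose garbage collection so the
# output can be fed to PMAT or the Garbage Collection and Memory Visualizer.
# 'myNode' and 'server1' are hypothetical names.
server = AdminConfig.getid('/Node:myNode/Server:server1/')
jvm = AdminConfig.list('JavaVirtualMachine', server).splitlines()[0]

# With the IBM JDK, the verbosegc records typically appear in native_stderr.log
# after the server is restarted.
AdminConfig.modify(jvm, [['verboseModeGarbageCollection', 'true']])
AdminConfig.save()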